EPYC 7763 LLVM Clang Compiler Tests

AMD EPYC 7763 64-Core testing with a Supermicro H12SSL-i v1.01 (2.0 BIOS) and ASPEED on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2104140-IB-EPYC7763L31
Tests in this result file fall within the following categories:

Audio Encoding 3 Tests
AV1 4 Tests
C++ Boost Tests 2 Tests
C/C++ Compiler Tests 16 Tests
CPU Massive 16 Tests
Creator Workloads 22 Tests
Cryptography 3 Tests
Encoding 10 Tests
Finance 2 Tests
Game Development 2 Tests
HPC - High Performance Computing 4 Tests
Imaging 6 Tests
Machine Learning 2 Tests
Multi-Core 14 Tests
NVIDIA GPU Compute 2 Tests
Raytracing 3 Tests
Renderers 3 Tests
Scientific Computing 2 Tests
Server 2 Tests
Server CPU Tests 8 Tests
Single-Threaded 4 Tests
Texture Compression 2 Tests
Video Encoding 7 Tests

Result identifiers, run dates, and test durations:
Clang 12.0     - April 10 2021 - 8 Hours, 55 Minutes
Clang 11.0     - April 11 2021 - 7 Hours, 36 Minutes
Clang 12.0 LTO - April 12 2021 - 23 Minutes
GCC 9.3        - April 12 2021 - 7 Hours, 23 Minutes
GCC 10.3       - April 13 2021 - 7 Hours, 10 Minutes
GCC 11.0.1     - April 13 2021 - 4 Hours, 42 Minutes
AMD AOCC 3.0   - April 14 2021 - 7 Hours, 37 Minutes



System details (all runs on the same hardware and OS):
Processor: AMD EPYC 7763 64-Core @ 2.45GHz (64 Cores / 128 Threads)
Motherboard: Supermicro H12SSL-i v1.01 (2.0 BIOS)
Chipset: AMD Starship/Matisse
Memory: 126GB
Disk: 3841GB Micron_9300_MTFDHAL3T8TDP
Graphics: ASPEED
Network: 2 x Broadcom NetXtreme BCM5720 2-port PCIe
OS: Ubuntu 20.04
Kernel: 5.12.0-051200rc6daily20210408-generic (x86_64) 20210407
Desktop: GNOME Shell 3.36.4
Display Server: X Server 1.20.8
Compilers: Clang 12.0.0-++20210409092622+fa0971b87fb2-1~exp1~20210409193326.73, Clang 11.0.0-2~ubuntu20.04.1, GCC 9.3.0, GCC 10.3.0, GCC 11.0.1 20210413, Clang 12.0.0 (AOCC)
File-System: ext4
Screen Resolution: 1024x768

System notes:
- Transparent Huge Pages: madvise
- Clang 12.0: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- Clang 11.0: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- Clang 12.0 LTO: CXXFLAGS="-O3 -march=native -flto" CFLAGS="-O3 -march=native -flto"
- GCC 9.3: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- GCC 10.3: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- GCC 11.0.1: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- AMD AOCC 3.0: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled)
- CPU Microcode: 0xa001119
- Python 3.8.2
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
- GCC 9.3 configured with: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- GCC 10.3 configured with: --disable-multilib --enable-checking=release
- GCC 11.0.1 configured with: --disable-multilib --enable-checking=release
- AMD AOCC 3.0: Optimized build with assertions; Default target: x86_64-unknown-linux-gnu; Host CPU: (unknown)

[Result overview chart: relative performance of Clang 12.0, Clang 11.0, Clang 12.0 LTO, GCC 9.3, GCC 10.3, GCC 11.0.1, and AMD AOCC 3.0, spanning roughly 100% to 108%, with LZ4 Compression (levels 3 and 9 compression speed) and Timed MrBayes Analysis among the highlighted tests.]

[Flattened dump of the complete per-test result matrix for all seven compiler configurations omitted; individual results are presented per benchmark in the sections that follow.]

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better):
Clang 12.0: 1.44425; Clang 11.0: 1.45757; GCC 9.3: 7.19213; GCC 10.3: 7.23686; AMD AOCC 3.0: 1.37059
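The oneDNN numbers throughout this article come from the library's bundled benchdnn driver rather than hand-written code. For readers unfamiliar with the library itself, below is a minimal, hedged sketch of the oneDNN 2.x C++ API (engine and stream setup plus one in-place ReLU eltwise primitive); the tensor shape and data are arbitrary illustration values, not the benchdnn problem measured above. It would be built the same way as the benchmark, e.g. g++ -O3 -march=native example.cpp -ldnnl.

```cpp
// Minimal oneDNN 2.x C++ API sketch: set up a CPU engine/stream and run one
// in-place ReLU (eltwise) primitive. Shape and data are arbitrary illustration
// values, not the benchdnn problem measured above.
#include <dnnl.hpp>
#include <cstring>
#include <vector>

int main() {
    using namespace dnnl;

    engine eng(engine::kind::cpu, 0);   // "Engine: CPU", as in the charts above
    stream strm(eng);

    memory::dims dims = {1, 16, 32, 32};
    auto md  = memory::desc(dims, memory::data_type::f32, memory::format_tag::nchw);
    auto mem = memory(md, eng);

    // Fill the buffer with negative values so the ReLU visibly changes them.
    std::vector<float> host(1 * 16 * 32 * 32, -1.0f);
    std::memcpy(mem.get_data_handle(), host.data(), host.size() * sizeof(float));

    // Describe, create, and execute the primitive (src == dst for in-place ReLU).
    auto relu_d  = eltwise_forward::desc(prop_kind::forward_inference,
                                         algorithm::eltwise_relu, md, 0.0f);
    auto relu_pd = eltwise_forward::primitive_desc(relu_d, eng);
    auto relu    = eltwise_forward(relu_pd);

    relu.execute(strm, {{DNNL_ARG_SRC, mem}, {DNNL_ARG_DST, mem}});
    strm.wait();
    return 0;
}
```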

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dCOPY (GB/s, more is better):
Clang 12.0: 604.0; Clang 11.0: 1877.0; GCC 9.3: 1587.0; GCC 10.3: 1461.2; GCC 11.0.1: 1599.0; AMD AOCC 3.0: 1944.0

ViennaCL 1.7.1 - Test: CPU BLAS - dAXPY (GB/s, more is better):
Clang 12.0: 878.0; Clang 11.0: 1043.0; GCC 9.3: 1521.0; GCC 10.3: 2158.4; GCC 11.0.1: 2359.0; AMD AOCC 3.0: 1017.0
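The dCOPY and dAXPY results above are ViennaCL's built-in BLAS level-1 benchmarks running on the host (OpenMP) backend. As a rough illustration of what such an operation looks like through ViennaCL's API, here is a sketch of an AXPY (y = a*x + y); the vector length and values are arbitrary, and this is not the test profile's own benchmark code.

```cpp
// Sketch of a BLAS level-1 AXPY (y = a*x + y) through ViennaCL's expression
// templates, roughly the operation the dAXPY chart above measures. Vector
// length and contents are arbitrary illustration values.
#include <viennacl/vector.hpp>
#include <iostream>
#include <vector>

int main() {
    const std::size_t n = 1 << 20;
    std::vector<double> x_host(n, 1.0), y_host(n, 2.0);

    viennacl::vector<double> x(n), y(n);
    viennacl::copy(x_host.begin(), x_host.end(), x.begin());   // host -> ViennaCL
    viennacl::copy(y_host.begin(), y_host.end(), y.begin());

    const double a = 3.0;
    y += a * x;                                                 // the AXPY itself

    viennacl::copy(y.begin(), y.end(), y_host.begin());         // ViennaCL -> host
    std::cout << "y[0] = " << y_host[0] << std::endl;           // expect 5
    return 0;
}
```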

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: DXT1 (Mpx/s, more is better):
Clang 12.0: 2718.53; Clang 11.0: 1872.76; Clang 12.0 LTO: 2719.99; GCC 9.3: 1082.37; GCC 10.3: 1114.60; AMD AOCC 3.0: 2654.72

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-NN (GFLOPs/s, more is better):
Clang 12.0: 48.6; Clang 11.0: 83.6; GCC 9.3: 98.5; GCC 10.3: 98.7; GCC 11.0.1: 100.5; AMD AOCC 3.0: 84.0

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-TN (GFLOPs/s, more is better):
Clang 12.0: 51.9; Clang 11.0: 88.3; GCC 9.3: 100.9; GCC 10.3: 104.0; GCC 11.0.1: 104.0; AMD AOCC 3.0: 90.0

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2 - Video Input: Chimera 1080p 10-bit (FPS, more is better):
Clang 12.0: 308.32; Clang 11.0: 184.19; GCC 9.3: 305.36; GCC 10.3: 316.14; GCC 11.0.1: 334.35; AMD AOCC 3.0: 192.00

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Resizing (Iterations Per Minute, more is better):
Clang 12.0: 2136; Clang 11.0: 2034; GCC 9.3: 1238; GCC 10.3: 1208; GCC 11.0.1: 1188; AMD AOCC 3.0: 1866

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: ChaCha20Poly1305 - Decrypt (MiB/s, more is better):
Clang 12.0: 843.40; Clang 11.0: 840.64; GCC 9.3: 611.98; GCC 10.3: 476.18; AMD AOCC 3.0: 838.09

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel (Seconds, fewer is better):
Clang 12.0: 15.870; Clang 11.0: 15.599; GCC 9.3: 9.158; GCC 10.3: 9.029; GCC 11.0.1: 9.227; AMD AOCC 3.0: 15.649

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: ChaCha20Poly1305 (MiB/s, more is better):
Clang 12.0: 850.50; Clang 11.0: 848.24; GCC 9.3: 616.10; GCC 10.3: 485.02; AMD AOCC 3.0: 845.14

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
Clang 12.0: 1.172580; Clang 11.0: 1.151400; GCC 9.3: 0.717782; GCC 10.3: 0.788192; AMD AOCC 3.0: 1.170440

LibRaw

LibRaw is a RAW image decoder for digital camera photos. This test profile runs LibRaw's post-processing benchmark. Learn more via the OpenBenchmarking.org test page.

LibRaw 0.20 - Post-Processing Benchmark (Mpix/sec, more is better):
Clang 12.0: 41.78; Clang 11.0: 38.71; GCC 9.3: 60.20; GCC 10.3: 58.90; GCC 11.0.1: 57.24; AMD AOCC 3.0: 41.64
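The LibRaw score above is the library's own post-processing benchmark. A hedged sketch of the equivalent pipeline through the LibRaw C++ API is shown below; the file name is a placeholder and the error handling is minimal.

```cpp
// Rough sketch of the LibRaw post-processing flow that the benchmark above times:
// open a RAW file, unpack it, and run the dcraw-style processing pipeline.
// "sample.nef" is a placeholder file name, not part of the test profile.
#include <libraw/libraw.h>
#include <cstdio>

int main() {
    LibRaw proc;
    if (proc.open_file("sample.nef") != LIBRAW_SUCCESS) {
        std::fprintf(stderr, "could not open RAW file\n");
        return 1;
    }
    proc.unpack();           // decode the raw sensor data
    proc.dcraw_process();    // demosaic, white balance, colour conversion

    int err = 0;
    libraw_processed_image_t *img = proc.dcraw_make_mem_image(&err);
    if (img) {
        std::printf("processed %dx%d image, %d bits\n",
                    (int)img->width, (int)img->height, (int)img->bits);
        LibRaw::dcraw_clear_mem(img);
    }
    proc.recycle();
    return 0;
}
```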

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), Bonds (fixed-rate bond with flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at University of Delaware. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-07-25 - Benchmark: Bonds OpenMP (ms, fewer is better):
Clang 12.0: 51596.87; Clang 11.0: 51900.43; GCC 9.3: 76805.58; GCC 10.3: 51770.51; GCC 11.0.1: 51376.82; AMD AOCC 3.0: 51885.52
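FinanceBench's kernels are considerably larger than a single formula, but since the description above mentions the Black-Scholes-Merton process, here is a small worked example of that closed-form pricing in C++; the inputs are arbitrary and this is not FinanceBench's own source.

```cpp
// Worked illustration of Black-Scholes-Merton European call pricing, the kind
// of closed-form kernel FinanceBench builds on (not FinanceBench's own code).
#include <cmath>
#include <cstdio>

// Standard normal CDF via the complementary error function.
static double norm_cdf(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

// S: spot, K: strike, r: risk-free rate, sigma: volatility, T: years to expiry.
static double bs_call(double S, double K, double r, double sigma, double T) {
    const double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T)
                      / (sigma * std::sqrt(T));
    const double d2 = d1 - sigma * std::sqrt(T);
    return S * norm_cdf(d1) - K * std::exp(-r * T) * norm_cdf(d2);
}

int main() {
    // Illustrative inputs only.
    std::printf("call price: %.4f\n", bs_call(100.0, 105.0, 0.01, 0.20, 1.0));
    return 0;
}
```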

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better):
Clang 12.0: 1.221320; Clang 11.0: 0.841169; GCC 9.3: 0.869308; GCC 10.3: 0.870784; AMD AOCC 3.0: 0.833921

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-NT (GFLOPs/s, more is better):
Clang 12.0: 65.7; Clang 11.0: 79.3; GCC 9.3: 95.3; GCC 10.3: 94.4; GCC 11.0.1: 95.0; AMD AOCC 3.0: 78.8

ViennaCL 1.7.1 - Test: CPU BLAS - dDOT (GB/s, more is better):
Clang 12.0: 819.00; Clang 11.0: 933.00; GCC 9.3: 1133.00; GCC 10.3: 1056.42; GCC 11.0.1: 1153.00; AMD AOCC 3.0: 1165.00

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 0 - Input: 1080p (Frames Per Second, more is better):
Clang 12.0: 0.183; Clang 11.0: 0.181; GCC 9.3: 0.129; GCC 10.3: 0.169; GCC 11.0.1: 0.176; AMD AOCC 3.0: 0.183

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18 - Implementation: C++ Threads (ms, fewer is better):
Clang 12.0: 7220; Clang 11.0: 6395; Clang 12.0 LTO: 7143; GCC 9.3: 5142; GCC 10.3: 5383; AMD AOCC 3.0: 7144
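toyBrot's purpose is to run the same Mandelbrot escape-time kernel under different threading backends. As a simplified sketch of the C++ Threads variant benchmarked above (not toyBrot's actual source), the following splits image rows across std::thread workers; the image size and iteration cap are arbitrary.

```cpp
// Simplified escape-time Mandelbrot kernel split across std::thread workers,
// in the spirit of toyBrot's "C++ Threads" backend (not its actual source).
#include <algorithm>
#include <complex>
#include <cstdio>
#include <thread>
#include <vector>

static int escape(std::complex<double> c, int max_iter) {
    std::complex<double> z{0.0, 0.0};
    int i = 0;
    while (std::norm(z) <= 4.0 && i < max_iter) { z = z * z + c; ++i; }
    return i;
}

int main() {
    const int W = 800, H = 600, max_iter = 256;
    std::vector<int> image(static_cast<std::size_t>(W) * H);

    const unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < nthreads; ++t) {
        workers.emplace_back([&, t] {
            for (int y = static_cast<int>(t); y < H; y += static_cast<int>(nthreads)) {  // interleaved rows
                for (int x = 0; x < W; ++x) {
                    std::complex<double> c(-2.5 + 3.5 * x / W, -1.25 + 2.5 * y / H);
                    image[static_cast<std::size_t>(y) * W + x] = escape(c, max_iter);
                }
            }
        });
    }
    for (auto &w : workers) w.join();
    std::printf("centre pixel iterations: %d\n",
                image[static_cast<std::size_t>(H / 2) * W + W / 2]);
    return 0;
}
```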

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet", focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC1 (Mpx/s, more is better):
Clang 12.0: 284.64; Clang 11.0: 205.07; Clang 12.0 LTO: 284.76; GCC 9.3: 269.67; GCC 10.3: 281.15; AMD AOCC 3.0: 211.73

toyBrot Fractal Generator

ToyBrot is a Mandelbrot fractal generator supporting C++ threads/tasks, OpenMP, Intel Threaded Building Blocks (TBB), and other targets. Learn more via the OpenBenchmarking.org test page.

toyBrot Fractal Generator 2020-11-18 - Implementation: TBB (ms, fewer is better):
Clang 12.0: 6780; Clang 11.0: 6247; Clang 12.0 LTO: 7085; GCC 9.3: 5107; GCC 10.3: 5181; AMD AOCC 3.0: 6945

toyBrot Fractal Generator 2020-11-18 - Implementation: OpenMP (ms, fewer is better):
Clang 12.0: 7507; Clang 11.0: 7029; GCC 9.3: 5451; GCC 10.3: 5524; AMD AOCC 3.0: 7477

toyBrot Fractal Generator 2020-11-18 - Implementation: C++ Tasks (ms, fewer is better):
Clang 12.0: 7437; Clang 11.0: 6836; Clang 12.0 LTO: 7367; GCC 9.3: 5414; GCC 10.3: 5610; AMD AOCC 3.0: 7189

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMM-TT (GFLOPs/s, more is better):
Clang 12.0: 73.0; Clang 11.0: 84.0; GCC 9.3: 97.9; GCC 10.3: 98.5; GCC 11.0.1: 99.3; AMD AOCC 3.0: 84.4

SciMark

This test runs the ANSI C version of SciMark 2.0, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This test is made up of Fast Fourier Transform, Jacobi Successive Over-Relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.

SciMark 2.0 - Computational Test: Sparse Matrix Multiply (Mflops, more is better):
Clang 12.0: 4280.22; Clang 11.0: 4590.37; GCC 9.3: 3765.88; GCC 10.3: 3820.77; GCC 11.0.1: 3462.66; AMD AOCC 3.0: 4594.27
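Of SciMark's five kernels, the Monte Carlo test is the easiest to illustrate: it estimates pi by sampling the unit square. The sketch below mirrors that idea in C++; it is not SciMark's ANSI C source, and the sample count and seed are arbitrary.

```cpp
// Monte Carlo estimation of pi by sampling the unit square -- the same idea as
// SciMark's Monte Carlo kernel, though not its exact C source.
#include <cstdio>
#include <random>

int main() {
    const long samples = 10'000'000;
    std::mt19937_64 rng(12345);                    // fixed seed for repeatability
    std::uniform_real_distribution<double> uni(0.0, 1.0);

    long inside = 0;
    for (long i = 0; i < samples; ++i) {
        const double x = uni(rng), y = uni(rng);
        if (x * x + y * y <= 1.0) ++inside;        // point falls in the quarter circle
    }
    std::printf("pi approx %.6f\n", 4.0 * inside / samples);
    return 0;
}
```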

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: Blowfish (MiB/s, more is better):
Clang 12.0: 380.05; Clang 11.0: 319.23; GCC 9.3: 412.85; GCC 10.3: 422.14; AMD AOCC 3.0: 319.79
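As a rough illustration of the Blowfish operation measured above, here is a minimal sketch using Botan 2's BlockCipher interface; the key and block contents are arbitrary, and a real throughput test would loop over many blocks the way Botan's own speed harness does. Link with -lbotan-2.

```cpp
// Minimal Botan 2 BlockCipher sketch for Blowfish, the algorithm measured above.
// The key and plaintext are arbitrary illustration values.
#include <botan/block_cipher.h>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    auto cipher = Botan::BlockCipher::create("Blowfish");
    if (!cipher) {
        std::fprintf(stderr, "Blowfish not available in this Botan build\n");
        return 1;
    }

    const std::vector<uint8_t> key(16, 0x42);                // Blowfish accepts variable-length keys
    cipher->set_key(key.data(), key.size());

    std::vector<uint8_t> block(cipher->block_size(), 0x00);  // one 8-byte block
    cipher->encrypt(block.data());                           // encrypt in place
    cipher->decrypt(block.data());                           // and back again

    std::printf("round-tripped one %zu-byte block\n", block.size());
    return 0;
}
```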

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Sharpen (Iterations Per Minute, more is better):
Clang 12.0: 614; Clang 11.0: 613; GCC 9.3: 806; GCC 10.3: 807; GCC 11.0.1: 809; AMD AOCC 3.0: 617

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better):
Clang 12.0: 2.36797; Clang 11.0: 2.31859; GCC 9.3: 2.99759; GCC 10.3: 3.00341; AMD AOCC 3.0: 2.28755

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
Clang 12.0: 0.491940; Clang 11.0: 0.489278; GCC 9.3: 0.599140; GCC 10.3: 0.602155; AMD AOCC 3.0: 0.459724

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: HWB Color Space (Iterations Per Minute, more is better):
Clang 12.0: 605; Clang 11.0: 616; GCC 9.3: 785; GCC 10.3: 772; GCC 11.0.1: 771; AMD AOCC 3.0: 614

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
Clang 12.0: 0.710124; Clang 11.0: 0.594729; GCC 9.3: 0.654010; GCC 10.3: 0.646252; AMD AOCC 3.0: 0.554231

FinanceBench

FinanceBench is a collection of financial program benchmarks with support for benchmarking on the GPU via OpenCL and CPU benchmarking with OpenMP. The FinanceBench test cases are focused on the Black-Scholes-Merton process with an analytic European option engine, the QMC (Sobol) Monte-Carlo method (equity option example), Bonds (fixed-rate bond with flat forward curve), and Repo (securities repurchase agreement). FinanceBench was originally written by the Cavazos Lab at University of Delaware. Learn more via the OpenBenchmarking.org test page.

FinanceBench 2016-07-25 - Benchmark: Repo OpenMP (ms, fewer is better):
Clang 12.0: 33246.84; Clang 11.0: 33178.50; GCC 9.3: 42399.81; GCC 10.3: 34979.29; GCC 11.0.1: 34199.60; AMD AOCC 3.0: 33146.03

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p (Frames Per Second, more is better):
Clang 12.0: 11.474; Clang 11.0: 11.821; GCC 9.3: 9.325; GCC 10.3: 11.230; GCC 11.0.1: 11.905; AMD AOCC 3.0: 11.690

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
Clang 12.0: 2.03606; Clang 11.0: 1.60540; GCC 9.3: 1.66260; GCC 10.3: 1.64268; AMD AOCC 3.0: 1.59597

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMV-T (GB/s, more is better):
Clang 12.0: 626.0; Clang 11.0: 677.0; GCC 9.3: 798.0; GCC 10.3: 741.4; GCC 11.0.1: 794.0; AMD AOCC 3.0: 783.0

SVT-AV1

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p (Frames Per Second, more is better):
Clang 12.0: 118.07; Clang 11.0: 117.39; GCC 9.3: 92.98; GCC 10.3: 109.70; GCC 11.0.1: 110.70; AMD AOCC 3.0: 116.49

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better):
Clang 12.0: 1785466.28; Clang 11.0: 1790837.01; GCC 9.3: 2086609.98; GCC 10.3: 2110880.43; GCC 11.0.1: 2176407.67; AMD AOCC 3.0: 1720060.44

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Medium (Seconds; fewer is better)
  Clang 12.0:   4.0058  (SE +/- 0.0116, N = 3, 3.99 - 4.03)
  Clang 11.0:   3.9837  (SE +/- 0.0013, N = 3, 3.98 - 3.99)
  GCC 9.3:      4.8745  (SE +/- 0.0035, N = 3, 4.87 - 4.88)
  GCC 10.3:     4.8699  (SE +/- 0.0047, N = 3, 4.86 - 4.88)
  GCC 11.0.1:   4.8160  (SE +/- 0.0099, N = 3, 4.8 - 4.83)
  AMD AOCC 3.0: 3.8811  (SE +/- 0.0042, N = 3, 3.87 - 3.89)
  Compiler options: (CXX) g++ options: -O3 -march=native -flto -pthread

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
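
To give a feel for the kind of primitive-based workload benchdnn drives, the sketch below builds and runs a single ReLU (eltwise) primitive with the oneDNN 2.x C++ API. It is a minimal sketch only, not the benchdnn harness; tensor shape and data are placeholder assumptions.

    #include <dnnl.hpp>

    int main() {
        using namespace dnnl;
        engine eng(engine::kind::cpu, 0);
        stream s(eng);

        // A small 8x64 f32 tensor in plain "nc" layout (placeholder shape).
        memory::desc md({8, 64}, memory::data_type::f32, memory::format_tag::nc);
        memory src(md, eng);

        // Describe, specialize, and instantiate a forward-inference ReLU primitive.
        eltwise_forward::desc relu_d(prop_kind::forward_inference,
                                     algorithm::eltwise_relu, md, 0.f);
        eltwise_forward::primitive_desc relu_pd(relu_d, eng);
        eltwise_forward relu(relu_pd);

        // Execute in place and wait for the stream to finish.
        relu.execute(s, {{DNNL_ARG_SRC, src}, {DNNL_ARG_DST, src}});
        s.wait();
        return 0;
    }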

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms; fewer is better)
  Clang 12.0:   0.313689  (SE +/- 0.000321, N = 3, 0.31 - 0.31, MIN 0.3)
  Clang 11.0:   0.315522  (SE +/- 0.000247, N = 3, 0.32 - 0.32, MIN 0.3)
  GCC 9.3:      0.376992  (SE +/- 0.000576, N = 3, 0.38 - 0.38, MIN 0.36)
  GCC 10.3:     0.377733  (SE +/- 0.004341, N = 3, 0.37 - 0.39, MIN 0.36)
  AMD AOCC 3.0: 0.301885  (SE +/- 0.000492, N = 3, 0.3 - 0.3, MIN 0.29)
  Notes: Clang/AOCC built with -fopenmp=libomp, GCC with -fopenmp. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.
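
For context, the sketch below plans and executes a one-dimensional complex DFT with the FFTW C API; the size of 2048 mirrors one of the configurations tested here. This is a minimal sketch, not the test profile's benchmark code, and it uses the double-precision interface (the "Float + SSE" builds would use the fftwf_ prefix instead).

    #include <fftw3.h>

    int main() {
        const int n = 2048;  // matches the 1D FFT Size 2048 configuration
        fftw_complex *in  = (fftw_complex *) fftw_malloc(sizeof(fftw_complex) * n);
        fftw_complex *out = (fftw_complex *) fftw_malloc(sizeof(fftw_complex) * n);

        // Plan once (FFTW chooses its strategy here), then fill the input and execute.
        fftw_plan plan = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
        for (int i = 0; i < n; ++i) { in[i][0] = 1.0; in[i][1] = 0.0; }
        fftw_execute(plan);

        fftw_destroy_plan(plan);
        fftw_free(in);
        fftw_free(out);
        return 0;
    }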

FFTW 3.3.6 - Build: Float + SSE - Size: 1D FFT Size 2048 (Mflops; more is better)
  Clang 12.0:   51254  (SE +/- 439.50, N = 3, 50603 - 52091)
  Clang 11.0:   50084  (SE +/- 582.34, N = 3, 49197 - 51181)
  GCC 9.3:      52749  (SE +/- 725.00, N = 3, 51329 - 53714)
  GCC 10.3:     53497  (SE +/- 743.81, N = 3, 52018 - 54375)
  GCC 11.0.1:   54710  (SE +/- 156.75, N = 3, 54491 - 55014)
  AMD AOCC 3.0: 44412  (SE +/- 756.91, N = 3, 43637 - 45926)
  Compiler options: (CC) gcc options: -pthread -O3 -march=native -lm

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
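
The sketch below shows the kind of FIR filtering kernel this benchmark stresses, using Liquid-DSP's firfilt interface with the filter length (57) and buffer length (256) from the test configuration. It is a minimal sketch under the assumption that liquid.h maps liquid_float_complex to a C++ complex type; the cutoff and attenuation values are arbitrary.

    #include <liquid/liquid.h>

    int main() {
        const unsigned int h_len = 57;                    // filter length used by this profile
        float h[h_len];
        // Design a Kaiser-windowed low-pass prototype: cutoff 0.25, 60 dB attenuation.
        liquid_firdes_kaiser(h_len, 0.25f, 60.0f, 0.0f, h);

        firfilt_crcf q = firfilt_crcf_create(h, h_len);   // complex samples, real taps
        liquid_float_complex x = 1.0f, y = 0.0f;
        for (unsigned int i = 0; i < 256; ++i) {          // one 256-sample buffer
            firfilt_crcf_push(q, x);
            firfilt_crcf_execute(q, &y);
        }
        firfilt_crcf_destroy(q);
        return 0;
    }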

Liquid-DSP 2021.01.31 - Threads: 128 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better)
  Clang 12.0:   3643766667  (SE +/- 883804.91, N = 3, 3642600000 - 3645500000)
  Clang 11.0:   3596533333  (SE +/- 1559202.08, N = 3, 3593800000 - 3599200000)
  GCC 9.3:      3012066667  (SE +/- 3384441.53, N = 3, 3006000000 - 3017700000)
  GCC 10.3:     3005033333  (SE +/- 1679616.36, N = 3, 3002000000 - 3007800000)
  GCC 11.0.1:   3055766667  (SE +/- 6016181.88, N = 3, 3046600000 - 3067100000)
  AMD AOCC 3.0: 3606466667  (SE +/- 1543084.93, N = 3, 3603400000 - 3608300000)
  Compiler options: (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms; fewer is better)
  Clang 12.0:   593.97  (SE +/- 9.50, N = 3, 581.26 - 612.56, MIN 570.44)
  Clang 11.0:   563.20  (SE +/- 0.83, N = 3, 561.62 - 564.42, MIN 550.23)
  GCC 9.3:      658.66  (SE +/- 0.64, N = 3, 657.74 - 659.9, MIN 639.86)
  GCC 10.3:     659.27  (SE +/- 0.61, N = 3, 658.12 - 660.2, MIN 642.67)
  AMD AOCC 3.0: 544.10  (SE +/- 0.53, N = 3, 543.48 - 545.16, MIN 532.32)
  Notes: Clang/AOCC built with -fopenmp=libomp, GCC with -fopenmp. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
  Clang 12.0:   590.18  (SE +/- 1.89, N = 3, 586.49 - 592.75, MIN 575.41)
  Clang 11.0:   562.97  (SE +/- 0.25, N = 3, 562.47 - 563.31, MIN 551.49)
  GCC 9.3:      659.19  (SE +/- 1.25, N = 3, 657.87 - 661.68, MIN 642.05)
  GCC 10.3:     658.28  (SE +/- 0.83, N = 3, 656.85 - 659.72, MIN 639.78)
  AMD AOCC 3.0: 544.31  (SE +/- 0.90, N = 3, 542.72 - 545.83, MIN 531.9)
  Notes: Clang/AOCC built with -fopenmp=libomp, GCC with -fopenmp. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms; fewer is better)
  Clang 12.0:   597.48  (SE +/- 3.02, N = 3, 591.68 - 601.87, MIN 580.8)
  Clang 11.0:   563.25  (SE +/- 0.10, N = 3, 563.05 - 563.39, MIN 551.31)
  GCC 9.3:      657.88  (SE +/- 0.52, N = 3, 657.35 - 658.92, MIN 638.35)
  GCC 10.3:     658.04  (SE +/- 1.86, N = 3, 654.76 - 661.2, MIN 635.78)
  AMD AOCC 3.0: 544.60  (SE +/- 0.62, N = 3, 543.61 - 545.74, MIN 532.91)
  Notes: Clang/AOCC built with -fopenmp=libomp, GCC with -fopenmp. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

SciMark

This test runs the ANSI C version of SciMark 2.0, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This test is made up of Fast Fourier Transform, Jacobi Successive Over-relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.
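
As a rough illustration of the Jacobi Successive Over-Relaxation kernel timed below, here is a stripped-down C++ sweep in the spirit of the SciMark SOR routine (not the SciMark source itself; grid size and the relaxation factor are up to the caller).

    #include <vector>

    // One in-place SOR sweep over an N x N grid with relaxation factor omega.
    void sor_sweep(std::vector<std::vector<double>> &g, double omega) {
        const std::size_t n = g.size();
        const double c = 1.0 - omega;
        for (std::size_t i = 1; i + 1 < n; ++i)
            for (std::size_t j = 1; j + 1 < n; ++j)
                g[i][j] = omega * 0.25 * (g[i - 1][j] + g[i + 1][j] +
                                          g[i][j - 1] + g[i][j + 1])
                          + c * g[i][j];
    }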

SciMark 2.0 - Computational Test: Jacobi Successive Over-Relaxation (Mflops; more is better)
  Clang 12.0:   1785.50  (SE +/- 0.08, N = 3, 1785.38 - 1785.66)
  Clang 11.0:   1785.42  (SE +/- 0.12, N = 3, 1785.18 - 1785.56)
  GCC 9.3:      2149.15  (SE +/- 0.12, N = 3, 2148.91 - 2149.27)
  GCC 10.3:     2038.15  (SE +/- 0.07, N = 3, 2038.08 - 2038.3)
  GCC 11.0.1:   2148.84  (SE +/- 0.10, N = 3, 2148.66 - 2148.99)
  AMD AOCC 3.0: 1785.45  (SE +/- 0.02, N = 3, 1785.43 - 1785.48)
  Compiler options: (CC) gcc options: -O3 -march=native -lm

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute; more is better)
  Clang 12.0:   457  (SE +/- 1.00, N = 3, 455 - 458)
  Clang 11.0:   463
  GCC 9.3:      547
  GCC 10.3:     544  (SE +/- 1.00, N = 3, 543 - 546)
  GCC 11.0.1:   550
  AMD AOCC 3.0: 466  (SE +/- 0.33, N = 3, 466 - 467)
  Compiler options: (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
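
A minimal sketch of running a model through the ONNX Runtime C++ API is shown below. The model path, tensor names, shape, and thread count are illustrative assumptions only; the real values depend on the model being benchmarked.

    #include <onnxruntime_cxx_api.h>
    #include <array>
    #include <vector>

    int main() {
        Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "example");
        Ort::SessionOptions opts;
        opts.SetIntraOpNumThreads(64);                                // hypothetical thread count
        Ort::Session session(env, "shufflenet-v2-10.onnx", opts);     // hypothetical model path

        // A 1x3x224x224 float input; actual input/output names depend on the model.
        std::array<int64_t, 4> shape{1, 3, 224, 224};
        std::vector<float> input(1 * 3 * 224 * 224, 0.0f);
        Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
        Ort::Value tensor = Ort::Value::CreateTensor<float>(mem, input.data(), input.size(),
                                                            shape.data(), shape.size());
        const char *in_names[]  = {"input"};                          // hypothetical tensor names
        const char *out_names[] = {"output"};
        auto outputs = session.Run(Ort::RunOptions{nullptr},
                                   in_names, &tensor, 1, out_names, 1);
        return 0;
    }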

ONNX Runtime 1.6 - Model: shufflenet-v2-10 - Device: OpenMP CPU (Inferences Per Minute; more is better)
  Clang 12.0:   9904   (SE +/- 88.25, N = 12, 9495.5 - 10452.5)
  Clang 11.0:   9797   (SE +/- 102.76, N = 8, 9469.5 - 10334.5)
  GCC 9.3:      9419   (SE +/- 138.76, N = 3, 9144.5 - 9593)
  GCC 10.3:     10197  (SE +/- 7.52, N = 3, 10181.5 - 10205)
  AMD AOCC 3.0: 11325  (SE +/- 171.77, N = 3, 11092.5 - 11660.5)
  Notes: Clang/AOCC built with -fopenmp=libomp, GCC with -fopenmp. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.
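
To show the sort of raw block-cipher path the AES-256 and Twofish throughput results exercise, here is a minimal sketch against Botan 2's BlockCipher interface (illustrative only; the benchmark itself uses Botan's own speed harness, and the key is random throwaway data).

    #include <botan/block_cipher.h>
    #include <botan/auto_rng.h>
    #include <vector>
    #include <cstdint>

    int main() {
        Botan::AutoSeeded_RNG rng;
        const auto key = rng.random_vec(32);             // 256-bit key
        std::vector<uint8_t> block(16, 0);               // one 128-bit block

        auto cipher = Botan::BlockCipher::create("AES-256");
        cipher->set_key(key);
        cipher->encrypt(block.data());                   // encrypt one block in place
        cipher->decrypt(block.data());                   // and decrypt it again
        return 0;
    }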

Botan 2.17.3 - Test: Blowfish - Decrypt (MiB/s; more is better)
  Clang 12.0:   351.28  (SE +/- 0.04, N = 3, 351.22 - 351.36)
  Clang 11.0:   351.08  (SE +/- 2.03, N = 3, 347.01 - 353.23)
  GCC 9.3:      412.07  (SE +/- 0.12, N = 3, 411.87 - 412.27)
  GCC 10.3:     420.85  (SE +/- 0.95, N = 3, 418.96 - 421.86)
  AMD AOCC 3.0: 355.06  (SE +/- 1.17, N = 3, 352.73 - 356.46)
  Compiler options: (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Etcpak

Etcpak is the self-proclaimed "fastest ETC compressor on the planet," focused on providing open-source, very fast ETC and S3 texture compression support. Learn more via the OpenBenchmarking.org test page.

Etcpak 0.7 - Configuration: ETC2 (Mpx/s; more is better)
  Clang 12.0:     202.09  (SE +/- 0.02, N = 3, 202.05 - 202.12)
  Clang 11.0:     168.82  (SE +/- 0.02, N = 3, 168.8 - 168.85)
  Clang 12.0 LTO: 202.10  (SE +/- 0.04, N = 3, 202.02 - 202.16)
  GCC 9.3:        174.81  (SE +/- 0.09, N = 3, 174.63 - 174.93)
  GCC 10.3:       173.23  (SE +/- 0.04, N = 3, 173.19 - 173.3)
  AMD AOCC 3.0:   178.85  (SE +/- 0.03, N = 3, 178.82 - 178.91)
  Compiler options: (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: AES-256 (MiB/s; more is better)
  Clang 12.0:   4659.34  (SE +/- 2.14, N = 3, 4655.62 - 4663.03)
  Clang 11.0:   4901.13  (SE +/- 2.16, N = 3, 4896.81 - 4903.54)
  GCC 9.3:      5484.68  (SE +/- 42.69, N = 3, 5399.32 - 5528.86)
  GCC 10.3:     5525.71  (SE +/- 4.47, N = 3, 5518.67 - 5534)
  AMD AOCC 3.0: 4891.07  (SE +/- 0.05, N = 3, 4891.01 - 4891.17)
  Compiler options: (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Thorough (Seconds; fewer is better)
  Clang 12.0:   6.7647  (SE +/- 0.0028, N = 3, 6.76 - 6.77)
  Clang 11.0:   6.7674  (SE +/- 0.0026, N = 3, 6.76 - 6.77)
  GCC 9.3:      7.8537  (SE +/- 0.0029, N = 3, 7.85 - 7.86)
  GCC 10.3:     7.8370  (SE +/- 0.0011, N = 3, 7.84 - 7.84)
  GCC 11.0.1:   7.6989  (SE +/- 0.0034, N = 3, 7.69 - 7.7)
  AMD AOCC 3.0: 6.6409  (SE +/- 0.0015, N = 3, 6.64 - 6.64)
  Compiler options: (CXX) g++ options: -O3 -march=native -flto -pthread

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds; fewer is better)
  Clang 12.0:   7.854  (SE +/- 0.007, N = 5, 7.83 - 7.87)
  Clang 11.0:   7.979  (SE +/- 0.006, N = 5, 7.96 - 7.99)
  GCC 9.3:      8.534  (SE +/- 0.011, N = 5, 8.49 - 8.55)
  GCC 10.3:     8.567  (SE +/- 0.008, N = 5, 8.55 - 8.59)
  GCC 11.0.1:   8.709  (SE +/- 0.006, N = 5, 8.69 - 8.72)
  AMD AOCC 3.0: 9.280  (SE +/- 0.006, N = 5, 9.26 - 9.29)
  Notes: -fvisibility=hidden reported for three of the configurations. (CXX) g++ options: -O3 -march=native -logg -lm

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: AES-256 - Decrypt (MiB/s; more is better)
  Clang 12.0:   4682.46  (SE +/- 4.78, N = 3, 4675.07 - 4691.39)
  Clang 11.0:   4895.56  (SE +/- 1.35, N = 3, 4893.3 - 4897.96)
  GCC 9.3:      5391.99  (SE +/- 11.31, N = 3, 5369.4 - 5404.16)
  GCC 10.3:     5529.40  (SE +/- 5.42, N = 3, 5521.33 - 5539.71)
  AMD AOCC 3.0: 4887.57  (SE +/- 3.70, N = 3, 4882.01 - 4894.58)
  Compiler options: (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.
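
The sketch below encodes one frame of interleaved PCM with the libmp3lame C API, to illustrate the encoding path this test times. It is a minimal sketch; the sample rate, bitrate, and buffer sizes are placeholder choices, and real code would loop over the whole WAV file and write the output bytes to disk.

    #include <lame/lame.h>
    #include <vector>
    #include <cstdio>

    int main() {
        lame_t gf = lame_init();
        lame_set_in_samplerate(gf, 44100);
        lame_set_num_channels(gf, 2);
        lame_set_brate(gf, 192);                              // 192 kbps (placeholder)
        lame_init_params(gf);

        std::vector<short> pcm(2 * 1152, 0);                  // one frame of interleaved silence
        std::vector<unsigned char> mp3(16384);
        int n = lame_encode_buffer_interleaved(gf, pcm.data(), 1152,
                                               mp3.data(), (int) mp3.size());
        int flushed = lame_encode_flush(gf, mp3.data(), (int) mp3.size());
        std::printf("encoded %d + %d bytes\n", n, flushed);

        lame_close(gf);
        return 0;
    }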

LAME MP3 Encoding 3.100 - WAV To MP3 (Seconds; fewer is better)
  Clang 12.0:   8.256  (SE +/- 0.003, N = 3, 8.25 - 8.26)
  Clang 11.0:   8.250  (SE +/- 0.021, N = 3, 8.23 - 8.29)
  GCC 9.3:      7.011  (SE +/- 0.019, N = 3, 6.98 - 7.05)
  GCC 10.3:     7.231  (SE +/- 0.006, N = 3, 7.22 - 7.24)
  GCC 11.0.1:   7.473  (SE +/- 0.005, N = 3, 7.46 - 7.48)
  AMD AOCC 3.0: 8.142  (SE +/- 0.008, N = 3, 8.13 - 8.16)
  Notes: -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr reported for three of the configurations. (CC) gcc options: -O3 -pipe -march=native -lncurses -lm

TSCP

This is a performance test of TSCP, Tom Kerrigan's Simple Chess Program, which has a built-in performance benchmark. Learn more via the OpenBenchmarking.org test page.

TSCP 1.81 - AI Chess Performance (Nodes Per Second; more is better)
  Clang 12.0:   1570966  (SE +/- 1798.40, N = 5, 1569168 - 1578160)
  Clang 11.0:   1638265  (SE +/- 2852.59, N = 5, 1634356 - 1649035)
  GCC 9.3:      1446372  (SE +/- 760.80, N = 5, 1445611 - 1449415)
  GCC 10.3:     1467179  (SE +/- 956.77, N = 5, 1464835 - 1468741)
  GCC 11.0.1:   1494250  (SE +/- 1626.80, N = 5, 1492623 - 1500757)
  AMD AOCC 3.0: 1697846  (SE +/- 2098.00, N = 5, 1694701 - 1705195)
  Compiler options: (CC) gcc options: -O3 -march=native

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Enhanced (Iterations Per Minute; more is better)
  Clang 12.0:   1076  (SE +/- 1.86, N = 3, 1072 - 1078)
  Clang 11.0:   1068
  GCC 9.3:      1217  (SE +/- 1.53, N = 3, 1214 - 1219)
  GCC 10.3:     1039  (SE +/- 0.88, N = 3, 1037 - 1040)
  GCC 11.0.1:   1082  (SE +/- 0.33, N = 3, 1082 - 1083)
  AMD AOCC 3.0: 1057  (SE +/- 1.53, N = 3, 1054 - 1059)
  Compiler options: (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

Ngspice 34 - Circuit: C2670 (Seconds; fewer is better)
  Clang 12.0:   118.87  (SE +/- 0.53, N = 3, 117.93 - 119.78)
  Clang 11.0:   103.83  (SE +/- 0.06, N = 3, 103.71 - 103.9)
  GCC 9.3:      101.54  (SE +/- 1.32, N = 3, 99.51 - 104.01)
  GCC 10.3:     103.60  (SE +/- 0.48, N = 3, 102.64 - 104.15)
  GCC 11.0.1:   103.01  (SE +/- 1.53, N = 3, 100.55 - 105.82)
  AMD AOCC 3.0: 103.93  (SE +/- 0.22, N = 3, 103.61 - 104.34)
  Notes: -lstdc++ reported for four of the configurations. (CC) gcc options: -O3 -march=native -fopenmp -lm -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
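
A minimal sketch of simdjson's DOM API is shown below, roughly in the spirit of the PartialTweets test (which iterates over tweets in a Twitter JSON dump). The input file name is a placeholder; the benchmark uses its own bundled corpus.

    #include "simdjson.h"
    #include <iostream>

    int main() {
        simdjson::dom::parser parser;
        simdjson::dom::element doc = parser.load("twitter.json");   // hypothetical input file
        simdjson::dom::array tweets = doc["statuses"];

        size_t count = 0;
        for (simdjson::dom::element tweet : tweets) {
            (void) tweet;   // a real consumer would pull out fields here
            ++count;
        }
        std::cout << count << " tweets\n";
        return 0;
    }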

simdjson 0.8.2 - Throughput Test: PartialTweets (GB/s; more is better)
  Clang 12.0:   4.60  (SE +/- 0.01, N = 3, 4.59 - 4.61)
  Clang 11.0:   4.41  (SE +/- 0.01, N = 3, 4.39 - 4.42)
  GCC 9.3:      3.93  (SE +/- 0.00, N = 3, 3.92 - 3.93)
  GCC 10.3:     4.02  (SE +/- 0.00, N = 3, 4.02 - 4.03)
  AMD AOCC 3.0: 4.33  (SE +/- 0.01, N = 3, 4.32 - 4.34)
  Compiler options: (CXX) g++ options: -O3 -march=native -pthread

QuantLib

QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.

QuantLib 1.21 (MFLOPS; more is better)
  Clang 12.0:     2653.8  (SE +/- 1.92, N = 3, 2651.6 - 2657.6)
  Clang 11.0:     2640.2  (SE +/- 1.01, N = 3, 2638.4 - 2641.9)
  Clang 12.0 LTO: 2657.8  (SE +/- 1.62, N = 3, 2654.9 - 2660.5)
  GCC 9.3:        2338.9  (SE +/- 4.53, N = 3, 2330.4 - 2345.9)
  GCC 10.3:       2392.6  (SE +/- 2.06, N = 3, 2388.5 - 2395.2)
  AMD AOCC 3.0:   2725.7  (SE +/- 2.28, N = 3, 2722.1 - 2729.9)
  Compiler options: (CXX) g++ options: -O3 -march=native -rdynamic

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.

simdjson 0.8.2 - Throughput Test: DistinctUserID (GB/s; more is better)
  Clang 12.0:   4.62  (SE +/- 0.00, N = 3, 4.61 - 4.62)
  Clang 11.0:   4.41  (SE +/- 0.00, N = 3, 4.41 - 4.42)
  GCC 9.3:      3.98  (SE +/- 0.00, N = 3, 3.98 - 3.99)
  GCC 10.3:     4.13  (SE +/- 0.01, N = 3, 4.12 - 4.15)
  AMD AOCC 3.0: 4.47  (SE +/- 0.00, N = 3, 4.47 - 4.48)
  Compiler options: (CXX) g++ options: -O3 -march=native -pthread

simdjson 0.8.2 - Throughput Test: LargeRandom (GB/s; more is better)
  Clang 12.0:   0.84  (SE +/- 0.00, N = 3)
  Clang 11.0:   0.81  (SE +/- 0.00, N = 3)
  GCC 9.3:      0.94  (SE +/- 0.00, N = 3)
  GCC 10.3:     0.90  (SE +/- 0.00, N = 3)
  AMD AOCC 3.0: 0.82  (SE +/- 0.00, N = 3)
  Compiler options: (CXX) g++ options: -O3 -march=native -pthread

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: yolov4 - Device: OpenMP CPU (Inferences Per Minute; more is better)
  Clang 12.0:   333  (SE +/- 4.15, N = 4, 321.5 - 341.5)
  Clang 11.0:   346  (SE +/- 1.42, N = 3, 344 - 348.5)
  GCC 9.3:      351  (SE +/- 0.50, N = 3, 350 - 351.5)
  GCC 10.3:     351  (SE +/- 0.17, N = 3, 351 - 351.5)
  AMD AOCC 3.0: 386  (SE +/- 2.50, N = 3, 380.5 - 388)
  Notes: Clang/AOCC built with -fopenmp=libomp, GCC with -fopenmp. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 6, Lossless (Seconds; fewer is better)
  Clang 12.0:   25.22  (SE +/- 0.04, N = 3, 25.16 - 25.28)
  Clang 11.0:   26.03  (SE +/- 0.22, N = 3, 25.81 - 26.48)
  GCC 9.3:      29.08  (SE +/- 0.06, N = 3, 28.97 - 29.19)
  GCC 10.3:     26.91  (SE +/- 0.04, N = 3, 26.84 - 26.96)
  GCC 11.0.1:   27.06  (SE +/- 0.05, N = 3, 26.96 - 27.11)
  AMD AOCC 3.0: 25.78  (SE +/- 0.06, N = 3, 25.71 - 25.9)
  Compiler options: (CXX) g++ options: -O3 -fPIC -lm

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Float + SSE - Size: 1D FFT Size 4096 (Mflops; more is better)
  Clang 12.0:   45428  (SE +/- 671.66, N = 15, 40185 - 49157)
  Clang 11.0:   46676  (SE +/- 413.24, N = 15, 44248 - 49417)
  GCC 9.3:      52099  (SE +/- 228.68, N = 3, 51646 - 52380)
  GCC 10.3:     52130  (SE +/- 844.19, N = 3, 50448 - 53098)
  GCC 11.0.1:   51391  (SE +/- 227.13, N = 3, 50938 - 51643)
  AMD AOCC 3.0: 45521  (SE +/- 542.47, N = 15, 40923 - 48137)
  Compiler options: (CC) gcc options: -pthread -O3 -march=native -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms; fewer is better)
  Clang 12.0:   1.07507  (SE +/- 0.00286, N = 3, 1.07 - 1.08, MIN 0.87)
  Clang 11.0:   1.07577  (SE +/- 0.00395, N = 3, 1.07 - 1.08, MIN 0.86)
  GCC 9.3:      1.17434  (SE +/- 0.00597, N = 3, 1.16 - 1.18, MIN 0.96)
  GCC 10.3:     1.19747  (SE +/- 0.00438, N = 3, 1.19 - 1.21, MIN 0.98)
  AMD AOCC 3.0: 1.04484  (SE +/- 0.00668, N = 3, 1.04 - 1.06, MIN 0.83)
  Notes: Clang/AOCC built with -fopenmp=libomp, GCC with -fopenmp. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Stock - Size: 1D FFT Size 32 (Mflops; more is better)
  Clang 12.0:   13333  (SE +/- 24.25, N = 3, 13291 - 13375)
  Clang 11.0:   13324  (SE +/- 20.33, N = 3, 13299 - 13364)
  GCC 9.3:      14399  (SE +/- 67.28, N = 3, 14284 - 14517)
  GCC 10.3:     12576  (SE +/- 16.05, N = 3, 12556 - 12608)
  GCC 11.0.1:   12765  (SE +/- 45.16, N = 3, 12678 - 12830)
  AMD AOCC 3.0: 13192  (SE +/- 41.35, N = 3, 13109 - 13235)
  Compiler options: (CC) gcc options: -pthread -O3 -march=native -lm

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: Twofish (MiB/s; more is better)
  Clang 12.0:   315.41  (SE +/- 0.13, N = 3, 315.17 - 315.6)
  Clang 11.0:   299.21  (SE +/- 0.09, N = 3, 299.05 - 299.37)
  GCC 9.3:      337.36  (SE +/- 0.04, N = 3, 337.28 - 337.42)
  GCC 10.3:     341.85  (SE +/- 0.52, N = 3, 340.81 - 342.44)
  AMD AOCC 3.0: 305.00  (SE +/- 0.03, N = 3, 304.94 - 305.05)
  Compiler options: (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Float + SSE - Size: 1D FFT Size 32 (Mflops; more is better)
  Clang 12.0:   15649  (SE +/- 48.79, N = 3, 15564 - 15733)
  Clang 11.0:   14590  (SE +/- 129.55, N = 3, 14332 - 14742)
  GCC 9.3:      16590  (SE +/- 170.19, N = 8, 15676 - 17094)
  GCC 10.3:     16650  (SE +/- 108.41, N = 3, 16434 - 16771)
  GCC 11.0.1:   16590  (SE +/- 168.99, N = 3, 16253 - 16777)
  AMD AOCC 3.0: 16146  (SE +/- 5.33, N = 3, 16141 - 16157)
  Compiler options: (CC) gcc options: -pthread -O3 -march=native -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms; fewer is better)
  Clang 12.0:   1.07701  (SE +/- 0.00199, N = 3, 1.07 - 1.08, MIN 1.04)
  Clang 11.0:   1.08011  (SE +/- 0.00127, N = 3, 1.08 - 1.08, MIN 1.03)
  GCC 9.3:      1.17486  (SE +/- 0.00349, N = 3, 1.17 - 1.18, MIN 1.12)
  GCC 10.3:     1.17894  (SE +/- 0.00296, N = 3, 1.17 - 1.18, MIN 1.12)
  AMD AOCC 3.0: 1.03899  (SE +/- 0.00160, N = 3, 1.04 - 1.04, MIN 0.99)
  Notes: Clang/AOCC built with -fopenmp=libomp, GCC with -fopenmp. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
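
The cwebp utility is a thin wrapper over libwebp's encoding API; the sketch below shows the simplest lossy path through that API. It is a minimal sketch with a synthetic grey image and an arbitrary quality factor, not the test profile's command line.

    #include <webp/encode.h>
    #include <vector>
    #include <cstdint>
    #include <cstdio>

    int main() {
        const int width = 256, height = 256;
        std::vector<uint8_t> rgb(width * height * 3, 128);   // flat grey test image

        uint8_t *out = nullptr;
        size_t out_size = WebPEncodeRGB(rgb.data(), width, height,
                                        width * 3,            // stride in bytes
                                        90.0f,                // quality factor (placeholder)
                                        &out);
        std::printf("encoded %zu bytes\n", out_size);
        WebPFree(out);                                        // release the encoder's buffer
        return 0;
    }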

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds; fewer is better)
  Clang 12.0:   6.309  (SE +/- 0.004, N = 3, 6.3 - 6.32)
  Clang 11.0:   6.243  (SE +/- 0.018, N = 3, 6.22 - 6.28)
  GCC 9.3:      7.053  (SE +/- 0.009, N = 3, 7.04 - 7.07)
  GCC 10.3:     7.078  (SE +/- 0.006, N = 3, 7.07 - 7.09)
  GCC 11.0.1:   7.003  (SE +/- 0.021, N = 3, 6.98 - 7.05)
  AMD AOCC 3.0: 6.578  (SE +/- 0.009, N = 3, 6.56 - 6.59)
  Notes: -ltiff reported for one configuration. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.6 - Model: fcn-resnet101-11 - Device: OpenMP CPU (Inferences Per Minute; more is better)
  Clang 12.0:   112  (SE +/- 0.50, N = 3, 111.5 - 113)
  Clang 11.0:   108  (SE +/- 0.29, N = 3, 107.5 - 108.5)
  GCC 9.3:      116  (SE +/- 0.44, N = 3, 115 - 116.5)
  GCC 10.3:     115  (SE +/- 0.17, N = 3, 115 - 115.5)
  AMD AOCC 3.0: 122  (SE +/- 0.50, N = 3, 120.5 - 122)
  Notes: Clang/AOCC built with -fopenmp=libomp, GCC with -fopenmp. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Swirl (Iterations Per Minute; more is better)
  Clang 12.0:   1993  (SE +/- 6.57, N = 3, 1980 - 2002)
  Clang 11.0:   1915  (SE +/- 12.41, N = 3, 1897 - 1939)
  GCC 9.3:      2129  (SE +/- 1.20, N = 3, 2127 - 2131)
  GCC 10.3:     2112  (SE +/- 1.20, N = 3, 2110 - 2114)
  GCC 11.0.1:   2161  (SE +/- 4.81, N = 3, 2152 - 2168)
  AMD AOCC 3.0: 1929  (SE +/- 4.63, N = 3, 1921 - 1937)
  Compiler options: (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better)
  Clang 12.0:   55663000  (SE +/- 790005.27, N = 3, 54083000 - 56458000)
  Clang 11.0:   56307000  (SE +/- 40360.87, N = 3, 56229000 - 56364000)
  GCC 9.3:      61404000  (SE +/- 870702.21, N = 3, 59663000 - 62307000)
  GCC 10.3:     62467333  (SE +/- 6887.99, N = 3, 62454000 - 62477000)
  GCC 11.0.1:   60886333  (SE +/- 318169.94, N = 3, 60250000 - 61207000)
  AMD AOCC 3.0: 57411333  (SE +/- 47026.00, N = 3, 57338000 - 57499000)
  Compiler options: (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: Twofish - Decrypt (MiB/s; more is better)
  Clang 12.0:   321.19  (SE +/- 0.16, N = 3, 320.87 - 321.41)
  Clang 11.0:   302.41  (SE +/- 0.15, N = 3, 302.12 - 302.62)
  GCC 9.3:      339.07  (SE +/- 0.04, N = 3, 339.02 - 339.15)
  GCC 10.3:     325.39  (SE +/- 0.44, N = 3, 324.51 - 325.86)
  AMD AOCC 3.0: 303.81  (SE +/- 0.06, N = 3, 303.72 - 303.91)
  Compiler options: (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms; fewer is better)
  Clang 12.0:   3.28507  (SE +/- 0.01639, N = 3, 3.25 - 3.3, MIN 3.15)
  Clang 11.0:   3.52787  (SE +/- 0.04735, N = 3, 3.44 - 3.61, MIN 3.29)
  GCC 9.3:      3.67278  (SE +/- 0.03246, N = 3, 3.62 - 3.73, MIN 3.39)
  GCC 10.3:     3.61144  (SE +/- 0.02637, N = 3, 3.57 - 3.66, MIN 3.37)
  AMD AOCC 3.0: 3.41583  (SE +/- 0.02018, N = 3, 3.38 - 3.45, MIN 3.24)
  Notes: Clang/AOCC built with -fopenmp=libomp, GCC with -fopenmp. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Stock - Size: 1D FFT Size 4096 (Mflops; more is better)
  Clang 12.0:   9862.0   (SE +/- 101.36, N = 3, 9659.8 - 9975.4)
  Clang 11.0:   9438.6   (SE +/- 15.16, N = 3, 9413.4 - 9465.8)
  GCC 9.3:      10548.0  (SE +/- 20.21, N = 3, 10523 - 10588)
  GCC 10.3:     10179.0  (SE +/- 57.26, N = 3, 10071 - 10266)
  GCC 11.0.1:   10205.0  (SE +/- 48.56, N = 3, 10109 - 10267)
  AMD AOCC 3.0: 9603.2   (SE +/- 43.38, N = 3, 9536.1 - 9684.4)
  Compiler options: (CC) gcc options: -pthread -O3 -march=native -lm

FFTW 3.3.6 - Build: Stock - Size: 2D FFT Size 1024 (Mflops; more is better)
  Clang 12.0:   9088.3  (SE +/- 48.25, N = 3, 8998.3 - 9163.5)
  Clang 11.0:   8809.6  (SE +/- 45.95, N = 3, 8734.5 - 8893)
  GCC 9.3:      9798.6  (SE +/- 19.46, N = 3, 9761.5 - 9827.3)
  GCC 10.3:     9247.3  (SE +/- 41.68, N = 3, 9195.6 - 9329.8)
  GCC 11.0.1:   9238.4  (SE +/- 25.87, N = 3, 9186.8 - 9267.1)
  AMD AOCC 3.0: 8902.1  (SE +/- 14.28, N = 3, 8883.2 - 8930.1)
  Compiler options: (CC) gcc options: -pthread -O3 -march=native -lm

SecureMark

SecureMark is an objective, standardized benchmarking framework for measuring the efficiency of cryptographic processing solutions developed by EEMBC. SecureMark-TLS is benchmarking Transport Layer Security performance with a focus on IoT/edge computing. Learn more via the OpenBenchmarking.org test page.

SecureMark 1.0.4 - Benchmark: SecureMark-TLS (marks; more is better)
  Clang 12.0:   265204  (SE +/- 1778.47, N = 3, 261680.5 - 267387.75)
  Clang 11.0:   260119  (SE +/- 407.86, N = 3, 259393.3 - 260804.48)
  GCC 9.3:      238935  (SE +/- 537.86, N = 3, 238379.42 - 240010.23)
  GCC 10.3:     242700  (SE +/- 1024.96, N = 3, 240958.31 - 244507.08)
  GCC 11.0.1:   243861  (SE +/- 675.55, N = 3, 242701.45 - 245041.36)
  AMD AOCC 3.0: 264637  (SE +/- 251.99, N = 3, 264136.41 - 264936.75)
  Compiler options: (CC) gcc options: -pedantic -O3

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second; more is better)
  Clang 12.0: 103.17  (SE +/- 0.31, N = 3, 102.59 - 103.67)
  Clang 11.0: 100.55  (SE +/- 0.53, N = 3, 99.52 - 101.29)
  GCC 9.3:    106.55  (SE +/- 1.10, N = 8, 102.8 - 113.43)
  GCC 10.3:   107.46  (SE +/- 1.76, N = 3, 105.41 - 110.96)
  GCC 11.0.1: 111.27  (SE +/- 1.15, N = 8, 106.23 - 116.18)
  Compiler options: (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

WebP2 Image Encode 20210126 - Encode Settings: Quality 100, Compression Effort 5 (Seconds; fewer is better)
  Clang 12.0:   6.690  (SE +/- 0.006, N = 3, 6.68 - 6.7)
  Clang 11.0:   7.366  (SE +/- 0.022, N = 3, 7.34 - 7.41)
  GCC 9.3:      6.753  (SE +/- 0.017, N = 3, 6.73 - 6.79)
  GCC 10.3:     6.934  (SE +/- 0.017, N = 3, 6.91 - 6.97)
  AMD AOCC 3.0: 7.403  (SE +/- 0.028, N = 3, 7.35 - 7.44)
  Compiler options: (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Stock - Size: 1D FFT Size 1024 (Mflops; more is better)
  Clang 12.0:   10805  (SE +/- 27.10, N = 3, 10774 - 10859)
  Clang 11.0:   10564  (SE +/- 35.53, N = 3, 10508 - 10630)
  GCC 9.3:      11689  (SE +/- 44.20, N = 3, 11626 - 11774)
  GCC 10.3:     11319  (SE +/- 32.26, N = 3, 11270 - 11380)
  GCC 11.0.1:   11044  (SE +/- 189.35, N = 3, 10680 - 11316)
  AMD AOCC 3.0: 10669  (SE +/- 34.64, N = 3, 10609 - 10729)
  Compiler options: (CC) gcc options: -pthread -O3 -march=native -lm

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarks. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency (ms; fewer is better)
  Clang 12.0: 1.607  (SE +/- 0.004, N = 3, 1.6 - 1.61)
  Clang 11.0: 1.626  (SE +/- 0.011, N = 3, 1.61 - 1.64)
  GCC 9.3:    1.688  (SE +/- 0.028, N = 3, 1.63 - 1.73)
  GCC 10.3:   1.701  (SE +/- 0.013, N = 3, 1.68 - 1.73)
  GCC 11.0.1: 1.777  (SE +/- 0.029, N = 3, 1.73 - 1.83)
  Compiler options: (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Write (TPS; more is better)
  Clang 12.0: 62319  (SE +/- 162.92, N = 3, 62085.76 - 62632.53)
  Clang 11.0: 61616  (SE +/- 400.92, N = 3, 60930.16 - 62318.71)
  GCC 9.3:    59364  (SE +/- 994.44, N = 3, 58038.49 - 61310.91)
  GCC 10.3:   58894  (SE +/- 469.64, N = 3, 57977.98 - 59531.3)
  GCC 11.0.1: 56369  (SE +/- 899.82, N = 3, 54698.87 - 57784.64)
  Compiler options: (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Stock - Size: 1D FFT Size 2048 (Mflops; more is better)
  Clang 12.0:   10467.0  (SE +/- 7.75, N = 3, 10457 - 10482)
  Clang 11.0:   10004.2  (SE +/- 28.76, N = 3, 9958 - 10057)
  GCC 9.3:      11053.0  (SE +/- 37.69, N = 3, 11007 - 11128)
  GCC 10.3:     10711.0  (SE +/- 14.75, N = 3, 10686 - 10737)
  GCC 11.0.1:   10675.0  (SE +/- 55.19, N = 3, 10565 - 10733)
  AMD AOCC 3.0: 10227.0  (SE +/- 39.89, N = 3, 10160 - 10298)
  Compiler options: (CC) gcc options: -pthread -O3 -march=native -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.9.0 - Encoder Speed: 2 (Seconds; fewer is better)
  Clang 12.0:   25.18  (SE +/- 0.06, N = 3, 25.12 - 25.29)
  Clang 11.0:   25.47  (SE +/- 0.06, N = 3, 25.41 - 25.58)
  GCC 9.3:      27.78  (SE +/- 0.04, N = 3, 27.72 - 27.84)
  GCC 10.3:     27.39  (SE +/- 0.07, N = 3, 27.27 - 27.5)
  GCC 11.0.1:   27.10  (SE +/- 0.03, N = 3, 27.05 - 27.15)
  AMD AOCC 3.0: 25.60  (SE +/- 0.01, N = 3, 25.58 - 25.62)
  Compiler options: (CXX) g++ options: -O3 -fPIC -lm

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57 (samples/s; more is better)
  Clang 12.0:   1564833333  (SE +/- 2255610.29, N = 3, 1560600000 - 1568300000)
  Clang 11.0:   1578400000  (SE +/- 1331665.62, N = 3, 1575800000 - 1580200000)
  GCC 9.3:      1721900000  (SE +/- 4864497.23, N = 3, 1714200000 - 1730900000)
  GCC 10.3:     1718000000  (SE +/- 15763988.50, N = 3, 1701100000 - 1749500000)
  GCC 11.0.1:   1679800000  (SE +/- 17297784.06, N = 3, 1648800000 - 1708600000)
  AMD AOCC 3.0: 1609633333  (SE +/- 2130988.29, N = 3, 1606300000 - 1613600000)
  Compiler options: (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Float + SSE - Size: 2D FFT Size 4096 (Mflops; more is better)
  Clang 12.0:   22797  (SE +/- 348.10, N = 9, 21982 - 24994)
  Clang 11.0:   22913  (SE +/- 220.77, N = 3, 22605 - 23341)
  GCC 9.3:      25068  (SE +/- 106.49, N = 3, 24942 - 25280)
  GCC 10.3:     23774  (SE +/- 538.47, N = 9, 20902 - 25190)
  GCC 11.0.1:   24888  (SE +/- 160.97, N = 3, 24582 - 25128)
  AMD AOCC 3.0: 23111  (SE +/- 349.17, N = 9, 21893 - 24746)
  Compiler options: (CC) gcc options: -pthread -O3 -march=native -lm

SciMark

This test runs the ANSI C version of SciMark 2.0, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This test is made up of Fast Fourier Transform, Jacobi Successive Over-relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.

SciMark 2.0 - Computational Test: Fast Fourier Transform (Mflops; more is better)
  Clang 12.0:   363.85  (SE +/- 0.46, N = 3, 362.97 - 364.55)
  Clang 11.0:   399.16  (SE +/- 0.67, N = 3, 398.38 - 400.5)
  GCC 9.3:      384.03  (SE +/- 0.66, N = 3, 383.01 - 385.27)
  GCC 10.3:     388.98  (SE +/- 0.25, N = 3, 388.68 - 389.47)
  GCC 11.0.1:   388.88  (SE +/- 1.03, N = 3, 386.92 - 390.39)
  AMD AOCC 3.0: 398.96  (SE +/- 0.70, N = 3, 397.58 - 399.89)
  Compiler options: (CC) gcc options: -O3 -march=native -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.120406080100SE +/- 1.07, N = 3SE +/- 0.51, N = 3SE +/- 0.89, N = 3SE +/- 0.65, N = 3SE +/- 0.47, N = 388.7886.0991.9793.0594.401. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.120406080100Min: 87.54 / Avg: 88.78 / Max: 90.92Min: 85.15 / Avg: 86.09 / Max: 86.92Min: 90.19 / Avg: 91.97 / Max: 92.97Min: 92.23 / Avg: 93.05 / Max: 94.33Min: 93.46 / Avg: 94.4 / Max: 94.941. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.
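
For orientation, a hedged sketch of setting libavif's encoder speed directly through its C API; the field and function names are assumed from the libavif 0.9 headers and may differ across versions, and the test profile itself invokes the avifenc CLI on a JPEG source rather than the library calls shown here.

    /* Encode a small synthetic grey image to AVIF at a given speed setting,
     * the knob the avifenc test cases vary.  Build: gcc -O3 avif_speed.c -lavif
     */
    #include <avif/avif.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        avifImage *image = avifImageCreate(512, 512, 8, AVIF_PIXEL_FORMAT_YUV420);

        /* fill a synthetic RGB buffer and convert it into the image's YUV planes */
        avifRGBImage rgb;
        avifRGBImageSetDefaults(&rgb, image);
        avifRGBImageAllocatePixels(&rgb);
        memset(rgb.pixels, 128, (size_t)rgb.rowBytes * rgb.height);
        avifImageRGBToYUV(image, &rgb);

        avifEncoder *encoder = avifEncoderCreate();
        encoder->maxThreads = 64;   /* match the 64-core test system */
        encoder->speed = 6;         /* "Encoder Speed: 6" in the results above */

        avifRWData output = AVIF_DATA_EMPTY;
        if (avifEncoderWrite(encoder, image, &output) == AVIF_RESULT_OK)
            printf("encoded %zu bytes\n", output.size);

        avifRWDataFree(&output);
        avifRGBImageFreePixels(&rgb);
        avifEncoderDestroy(encoder);
        avifImageDestroy(image);
        return 0;
    }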

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 6Clang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.03691215SE +/- 0.014, N = 3SE +/- 0.022, N = 3SE +/- 0.031, N = 3SE +/- 0.032, N = 3SE +/- 0.052, N = 3SE +/- 0.016, N = 39.5109.53610.39910.41710.2919.7251. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 6Clang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.03691215Min: 9.49 / Avg: 9.51 / Max: 9.54Min: 9.5 / Avg: 9.54 / Max: 9.58Min: 10.34 / Avg: 10.4 / Max: 10.44Min: 10.38 / Avg: 10.42 / Max: 10.48Min: 10.2 / Avg: 10.29 / Max: 10.38Min: 9.7 / Avg: 9.72 / Max: 9.751. (CXX) g++ options: -O3 -fPIC -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.030060090012001500SE +/- 3.61, N = 3SE +/- 7.11, N = 3SE +/- 3.05, N = 3SE +/- 1.75, N = 3SE +/- 1.97, N = 31307.491277.621358.561379.511259.59-fopenmp=libomp - MIN: 1293.38-fopenmp=libomp - MIN: 1252.39-fopenmp - MIN: 1337.17-fopenmp - MIN: 1361.6-fopenmp=libomp - MIN: 1247.291. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPUClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.02004006008001000Min: 1300.95 / Avg: 1307.49 / Max: 1313.42Min: 1265.25 / Avg: 1277.62 / Max: 1289.89Min: 1352.83 / Avg: 1358.56 / Max: 1363.24Min: 1376.47 / Avg: 1379.51 / Max: 1382.53Min: 1256.08 / Avg: 1259.59 / Max: 1262.891. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.030060090012001500SE +/- 3.92, N = 3SE +/- 9.46, N = 3SE +/- 4.44, N = 3SE +/- 3.72, N = 3SE +/- 5.94, N = 31302.701276.041357.291382.411267.18-fopenmp=libomp - MIN: 1289.86-fopenmp=libomp - MIN: 1249.65-fopenmp - MIN: 1335.63-fopenmp - MIN: 1360.58-fopenmp=libomp - MIN: 1248.351. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPUClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.02004006008001000Min: 1296.9 / Avg: 1302.7 / Max: 1310.16Min: 1257.62 / Avg: 1276.04 / Max: 1289.02Min: 1348.45 / Avg: 1357.29 / Max: 1362.54Min: 1375.93 / Avg: 1382.41 / Max: 1388.82Min: 1260.92 / Avg: 1267.18 / Max: 1279.061. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 0Clang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.01224364860SE +/- 0.04, N = 3SE +/- 0.07, N = 3SE +/- 0.08, N = 3SE +/- 0.05, N = 3SE +/- 0.07, N = 3SE +/- 0.06, N = 347.8847.8952.2251.4551.0348.131. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 0Clang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.01020304050Min: 47.82 / Avg: 47.88 / Max: 47.97Min: 47.81 / Avg: 47.89 / Max: 48.04Min: 52.08 / Avg: 52.22 / Max: 52.35Min: 51.37 / Avg: 51.45 / Max: 51.55Min: 50.94 / Avg: 51.03 / Max: 51.18Min: 48 / Avg: 48.13 / Max: 48.21. (CXX) g++ options: -O3 -fPIC -lm

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 10Clang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.00.82331.64662.46993.29324.1165SE +/- 0.014, N = 3SE +/- 0.010, N = 3SE +/- 0.016, N = 3SE +/- 0.022, N = 3SE +/- 0.002, N = 3SE +/- 0.004, N = 33.3613.4293.6593.6433.6073.5431. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 10Clang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0246810Min: 3.35 / Avg: 3.36 / Max: 3.39Min: 3.41 / Avg: 3.43 / Max: 3.44Min: 3.63 / Avg: 3.66 / Max: 3.69Min: 3.61 / Avg: 3.64 / Max: 3.68Min: 3.6 / Avg: 3.61 / Max: 3.61Min: 3.54 / Avg: 3.54 / Max: 3.551. (CXX) g++ options: -O3 -fPIC -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1612182430SE +/- 0.27, N = 3SE +/- 0.13, N = 3SE +/- 0.13, N = 3SE +/- 0.25, N = 3SE +/- 0.28, N = 326.8526.6124.8426.4927.011. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1612182430Min: 26.44 / Avg: 26.85 / Max: 27.35Min: 26.37 / Avg: 26.61 / Max: 26.79Min: 24.64 / Avg: 24.84 / Max: 25.09Min: 26.11 / Avg: 26.49 / Max: 26.97Min: 26.64 / Avg: 27.01 / Max: 27.561. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 100, Lossless CompressionClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.090180270360450SE +/- 0.49, N = 3SE +/- 0.17, N = 3SE +/- 1.92, N = 3SE +/- 3.10, N = 3SE +/- 0.39, N = 3374.04392.85388.95406.03382.991. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 100, Lossless CompressionClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.070140210280350Min: 373.2 / Avg: 374.04 / Max: 374.91Min: 392.57 / Avg: 392.85 / Max: 393.17Min: 385.64 / Avg: 388.95 / Max: 392.3Min: 402.91 / Avg: 406.03 / Max: 412.22Min: 382.21 / Avg: 382.99 / Max: 383.451. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 95, Compression Effort 7Clang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.050100150200250SE +/- 0.07, N = 3SE +/- 0.66, N = 3SE +/- 1.32, N = 3SE +/- 0.46, N = 3SE +/- 0.17, N = 3207.01203.63220.94215.57205.031. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 95, Compression Effort 7Clang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.04080120160200Min: 206.92 / Avg: 207.01 / Max: 207.15Min: 202.59 / Avg: 203.63 / Max: 204.87Min: 219.29 / Avg: 220.94 / Max: 223.56Min: 214.73 / Avg: 215.56 / Max: 216.32Min: 204.72 / Avg: 205.03 / Max: 205.31. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.030060090012001500SE +/- 1.78, N = 3SE +/- 9.75, N = 3SE +/- 4.57, N = 3SE +/- 2.65, N = 3SE +/- 0.58, N = 31305.101271.911356.911375.711268.08-fopenmp=libomp - MIN: 1294.76-fopenmp=libomp - MIN: 1252.33-fopenmp - MIN: 1335.04-fopenmp - MIN: 1355.68-fopenmp=libomp - MIN: 1257.351. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl
OpenBenchmarking.orgms, Fewer Is BetteroneDNN 2.1.2Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPUClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.02004006008001000Min: 1303.14 / Avg: 1305.1 / Max: 1308.65Min: 1261.78 / Avg: 1271.91 / Max: 1291.41Min: 1347.77 / Avg: 1356.91 / Max: 1361.6Min: 1370.85 / Avg: 1375.71 / Max: 1379.98Min: 1266.97 / Avg: 1268.08 / Max: 1268.91. (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 75, Compression Effort 7Clang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.0306090120150SE +/- 0.10, N = 3SE +/- 0.10, N = 3SE +/- 0.10, N = 3SE +/- 0.16, N = 3SE +/- 0.09, N = 3109.53109.64118.45116.66109.811. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: Quality 75, Compression Effort 7Clang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.020406080100Min: 109.34 / Avg: 109.52 / Max: 109.68Min: 109.47 / Avg: 109.64 / Max: 109.82Min: 118.24 / Avg: 118.45 / Max: 118.57Min: 116.33 / Avg: 116.66 / Max: 116.84Min: 109.63 / Avg: 109.81 / Max: 109.931. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
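
For reference, a minimal round trip through LZ4's basic block API; the benchmark itself compresses an Ubuntu ISO through the lz4 command-line tool at the listed levels, so this only illustrates the underlying library calls.

    /* Compress and then decompress a short buffer with the LZ4 block API.
     * Build: gcc -O3 lz4_demo.c -o lz4_demo -llz4
     */
    #include <lz4.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *src = "Phoronix Test Suite LZ4 example - "
                          "repetitive text compresses well well well well.";
        const int src_size = (int)strlen(src) + 1;

        char compressed[256];
        const int c_size = LZ4_compress_default(src, compressed,
                                                src_size, sizeof(compressed));

        char restored[256];
        const int d_size = LZ4_decompress_safe(compressed, restored,
                                               c_size, sizeof(restored));

        printf("original %d bytes -> compressed %d bytes -> restored %d bytes\n",
               src_size, c_size, d_size);
        return 0;
    }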

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression SpeedClang 12.0Clang 11.0Clang 12.0 LTOGCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.01224364860SE +/- 0.42, N = 3SE +/- 0.46, N = 3SE +/- 0.74, N = 3SE +/- 0.65, N = 5SE +/- 0.72, N = 4SE +/- 0.65, N = 3SE +/- 0.26, N = 348.5049.0148.4751.9752.3651.1750.321. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 9 - Compression SpeedClang 12.0Clang 11.0Clang 12.0 LTOGCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.01020304050Min: 47.66 / Avg: 48.5 / Max: 48.94Min: 48.1 / Avg: 49.01 / Max: 49.56Min: 47.71 / Avg: 48.47 / Max: 49.96Min: 50.61 / Avg: 51.97 / Max: 54.44Min: 50.88 / Avg: 52.36 / Max: 54.2Min: 50.04 / Avg: 51.17 / Max: 52.29Min: 49.79 / Avg: 50.32 / Max: 50.631. (CC) gcc options: -O3

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMflops, More Is BetterFFTW 3.3.6Build: Stock - Size: 2D FFT Size 2048Clang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.02K4K6K8K10KSE +/- 65.76, N = 3SE +/- 27.38, N = 3SE +/- 50.36, N = 3SE +/- 56.49, N = 3SE +/- 36.00, N = 3SE +/- 19.99, N = 37789.97878.58408.58134.58231.17784.81. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.orgMflops, More Is BetterFFTW 3.3.6Build: Stock - Size: 2D FFT Size 2048Clang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.015003000450060007500Min: 7681.5 / Avg: 7789.87 / Max: 7908.6Min: 7840.6 / Avg: 7878.53 / Max: 7931.7Min: 8353.8 / Avg: 8408.5 / Max: 8509.1Min: 8022.5 / Avg: 8134.5 / Max: 8203.4Min: 8169.5 / Avg: 8231.13 / Max: 8294.2Min: 7745.6 / Avg: 7784.77 / Max: 7811.31. (CC) gcc options: -pthread -O3 -march=native -lm

Timed MrBayes Analysis

This test performs a bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MrBayes Analysis 3.2.7Primate Phylogeny AnalysisClang 12.0Clang 11.0Clang 12.0 LTOGCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.020406080100SE +/- 0.98, N = 3SE +/- 0.98, N = 3SE +/- 1.09, N = 3SE +/- 0.16, N = 3SE +/- 1.29, N = 4SE +/- 0.33, N = 3SE +/- 0.26, N = 389.1288.6293.6389.1693.6689.4386.74-flto-mabm-mabm-mabm1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -O3 -std=c99 -pedantic -march=native -lm
OpenBenchmarking.orgSeconds, Fewer Is BetterTimed MrBayes Analysis 3.2.7Primate Phylogeny AnalysisClang 12.0Clang 11.0Clang 12.0 LTOGCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.020406080100Min: 87.37 / Avg: 89.12 / Max: 90.78Min: 86.96 / Avg: 88.62 / Max: 90.35Min: 92.03 / Avg: 93.63 / Max: 95.71Min: 88.99 / Avg: 89.16 / Max: 89.48Min: 91.99 / Avg: 93.66 / Max: 97.46Min: 89.04 / Avg: 89.43 / Max: 90.08Min: 86.36 / Avg: 86.74 / Max: 87.231. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -O3 -std=c99 -pedantic -march=native -lm

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.33Operation: RotateClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0150300450600750SE +/- 2.60, N = 3SE +/- 1.33, N = 3SE +/- 5.21, N = 3SE +/- 6.43, N = 37126657096896946601. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.orgIterations Per Minute, More Is BetterGraphicsMagick 1.3.33Operation: RotateClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0130260390520650Min: 708 / Avg: 712.33 / Max: 717Min: 662 / Avg: 664.67 / Max: 666Min: 680 / Avg: 689.33 / Max: 698Min: 684 / Avg: 694 / Max: 7061. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 10 - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0140280420560700SE +/- 3.01, N = 3SE +/- 5.55, N = 3SE +/- 3.83, N = 3SE +/- 2.42, N = 3SE +/- 5.75, N = 3SE +/- 3.03, N = 3643.58652.74605.50615.62611.73638.101. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 10 - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0120240360480600Min: 637.62 / Avg: 643.58 / Max: 647.25Min: 641.71 / Avg: 652.74 / Max: 659.34Min: 600 / Avg: 605.5 / Max: 612.87Min: 611 / Avg: 615.62 / Max: 619.2Min: 600.6 / Avg: 611.73 / Max: 619.83Min: 632.24 / Avg: 638.1 / Max: 642.41. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

Ngspice

Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C7552Clang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.020406080100SE +/- 1.11, N = 6SE +/- 1.37, N = 3SE +/- 0.60, N = 3SE +/- 0.12, N = 3SE +/- 0.43, N = 3SE +/- 0.12, N = 395.9690.5389.0990.4390.2691.99-lstdc++-lstdc++-lstdc++-lstdc++1. (CC) gcc options: -O3 -march=native -fopenmp -lm -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE
OpenBenchmarking.orgSeconds, Fewer Is BetterNgspice 34Circuit: C7552Clang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.020406080100Min: 91.73 / Avg: 95.96 / Max: 97.88Min: 87.79 / Avg: 90.53 / Max: 92.04Min: 88.04 / Avg: 89.09 / Max: 90.13Min: 90.22 / Avg: 90.43 / Max: 90.62Min: 89.48 / Avg: 90.26 / Max: 90.95Min: 91.85 / Avg: 91.99 / Max: 92.221. (CC) gcc options: -O3 -march=native -fopenmp -lm -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1246810SE +/- 0.04, N = 3SE +/- 0.01, N = 3SE +/- 0.03, N = 3SE +/- 0.00, N = 3SE +/- 0.01, N = 37.107.206.696.876.951. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.13691215Min: 7.04 / Avg: 7.1 / Max: 7.18Min: 7.18 / Avg: 7.2 / Max: 7.21Min: 6.64 / Avg: 6.69 / Max: 6.72Min: 6.86 / Avg: 6.87 / Max: 6.87Min: 6.94 / Avg: 6.95 / Max: 6.961. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 7 - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.080160240320400SE +/- 1.56, N = 3SE +/- 3.43, N = 3SE +/- 1.20, N = 3SE +/- 1.54, N = 3SE +/- 1.51, N = 3SE +/- 1.09, N = 3345.30346.89322.42330.53329.32343.851. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 7 - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.060120180240300Min: 342.27 / Avg: 345.3 / Max: 347.42Min: 340.14 / Avg: 346.89 / Max: 351.29Min: 320.34 / Avg: 322.42 / Max: 324.5Min: 327.69 / Avg: 330.53 / Max: 332.96Min: 327.15 / Avg: 329.32 / Max: 332.23Min: 342.66 / Avg: 343.85 / Max: 346.021. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3 - Test: KASUMI (MiB/s, More Is Better)
Clang 12.0:   82.64 (SE +/- 0.01, N = 3; Min: 82.63 / Max: 82.65)
Clang 11.0:   79.15 (SE +/- 0.06, N = 3; Min: 79.08 / Max: 79.26)
GCC 9.3:      84.86 (SE +/- 0.02, N = 3; Min: 84.84 / Max: 84.89)
GCC 10.3:     79.12 (SE +/- 0.01, N = 3; Min: 79.1 / Max: 79.13)
AMD AOCC 3.0: 82.83 (SE +/- 0.02, N = 3; Min: 82.79 / Max: 82.86)
1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

POV-Ray

This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterPOV-Ray 3.7.0.7Trace TimeClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.03691215SE +/- 0.041, N = 3SE +/- 0.032, N = 3SE +/- 0.053, N = 3SE +/- 0.049, N = 3SE +/- 0.026, N = 39.2969.4089.9689.5709.4941. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lXpm -lSM -lICE -lX11 -lIlmImf -lImath -lHalf -lIex -lIexMath -lIlmThread -lpthread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system
OpenBenchmarking.orgSeconds, Fewer Is BetterPOV-Ray 3.7.0.7Trace TimeClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.03691215Min: 9.22 / Avg: 9.3 / Max: 9.36Min: 9.36 / Avg: 9.41 / Max: 9.47Min: 9.86 / Avg: 9.97 / Max: 10.04Min: 9.52 / Avg: 9.57 / Max: 9.67Min: 9.44 / Avg: 9.49 / Max: 9.521. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lXpm -lSM -lICE -lX11 -lIlmImf -lImath -lHalf -lIex -lIexMath -lIlmThread -lpthread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMflops, More Is BetterFFTW 3.3.6Build: Float + SSE - Size: 1D FFT Size 1024Clang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.011K22K33K44K55KSE +/- 952.64, N = 12SE +/- 585.78, N = 3SE +/- 788.42, N = 3SE +/- 439.64, N = 15SE +/- 568.96, N = 3SE +/- 621.84, N = 155035050740532755205451706496851. (CC) gcc options: -pthread -O3 -march=native -lm
OpenBenchmarking.orgMflops, More Is BetterFFTW 3.3.6Build: Float + SSE - Size: 1D FFT Size 1024Clang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.09K18K27K36K45KMin: 40245 / Avg: 50350.08 / Max: 52560Min: 49569 / Avg: 50740 / Max: 51357Min: 52406 / Avg: 53275 / Max: 54849Min: 47241 / Avg: 52053.87 / Max: 53839Min: 50631 / Avg: 51705.67 / Max: 52567Min: 43915 / Avg: 49684.67 / Max: 518041. (CC) gcc options: -pthread -O3 -march=native -lm

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 10, LosslessClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0246810SE +/- 0.013, N = 3SE +/- 0.011, N = 3SE +/- 0.022, N = 3SE +/- 0.007, N = 3SE +/- 0.017, N = 3SE +/- 0.022, N = 35.7465.8796.1316.1076.1495.9481. (CXX) g++ options: -O3 -fPIC -lm
OpenBenchmarking.orgSeconds, Fewer Is Betterlibavif avifenc 0.9.0Encoder Speed: 10, LosslessClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0246810Min: 5.72 / Avg: 5.75 / Max: 5.76Min: 5.87 / Avg: 5.88 / Max: 5.9Min: 6.11 / Avg: 6.13 / Max: 6.18Min: 6.09 / Avg: 6.11 / Max: 6.12Min: 6.13 / Avg: 6.15 / Max: 6.18Min: 5.92 / Avg: 5.95 / Max: 5.991. (CXX) g++ options: -O3 -fPIC -lm

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0918273645SE +/- 0.17, N = 3SE +/- 0.09, N = 3SE +/- 0.18, N = 3SE +/- 0.05, N = 3SE +/- 0.17, N = 3SE +/- 0.08, N = 341.0941.0138.4139.0338.8640.951. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-HEVC 1.5.0Tuning: 1 - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0918273645Min: 40.79 / Avg: 41.09 / Max: 41.37Min: 40.9 / Avg: 41.01 / Max: 41.2Min: 38.07 / Avg: 38.41 / Max: 38.68Min: 38.92 / Avg: 39.03 / Max: 39.09Min: 38.54 / Avg: 38.86 / Max: 39.12Min: 40.8 / Avg: 40.95 / Max: 41.051. (CC) gcc options: -O3 -march=native -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read WriteClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.112K24K36K48K60KSE +/- 702.52, N = 15SE +/- 883.12, N = 3SE +/- 396.40, N = 3SE +/- 591.89, N = 7SE +/- 211.73, N = 356684544885382553019531021. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read WriteClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.110K20K30K40K50KMin: 53398.67 / Avg: 56683.93 / Max: 61036.19Min: 53558.74 / Avg: 54487.76 / Max: 56253.19Min: 53283.11 / Avg: 53825.4 / Max: 54597.39Min: 51146.43 / Avg: 53018.61 / Max: 55995.16Min: 52712.48 / Avg: 53101.62 / Max: 53440.831. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 7 (MP/s, More Is Better)
Clang 12.0:   12.15 (SE +/- 0.05, N = 3; Min: 12.08 / Max: 12.25)
Clang 11.0:   12.01 (SE +/- 0.02, N = 3; Min: 11.97 / Max: 12.05)
AMD AOCC 3.0: 11.37 (SE +/- 0.08, N = 3; Min: 11.28 / Max: 11.52)
1. (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average LatencyClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.11.06452.1293.19354.2585.3225SE +/- 0.054, N = 15SE +/- 0.074, N = 3SE +/- 0.034, N = 3SE +/- 0.052, N = 7SE +/- 0.021, N = 34.4314.6034.6574.7314.7221. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average LatencyClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1246810Min: 4.11 / Avg: 4.43 / Max: 4.69Min: 4.46 / Avg: 4.6 / Max: 4.68Min: 4.59 / Avg: 4.66 / Max: 4.7Min: 4.48 / Avg: 4.73 / Max: 4.9Min: 4.69 / Avg: 4.72 / Max: 4.761. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 5 (MP/s, More Is Better)
Clang 12.0:   74.27 (SE +/- 0.17, N = 3; Min: 73.95 / Max: 74.55)
Clang 11.0:   78.41 (SE +/- 0.24, N = 3; Min: 77.95 / Max: 78.74)
AMD AOCC 3.0: 79.23 (SE +/- 0.41, N = 3; Min: 78.49 / Max: 79.92)
1. (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl

SciMark

This test runs the ANSI C version of SciMark 2.0, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This test is made up of Fast Fourier Transform, Jacobi Successive Over-relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMflops, More Is BetterSciMark 2.0Computational Test: Monte CarloClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0150300450600750SE +/- 0.40, N = 3SE +/- 0.40, N = 3SE +/- 0.14, N = 3SE +/- 1.71, N = 3SE +/- 0.29, N = 3SE +/- 0.18, N = 3675.13674.86668.10682.87647.82690.941. (CC) gcc options: -O3 -march=native -lm
OpenBenchmarking.orgMflops, More Is BetterSciMark 2.0Computational Test: Monte CarloClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0120240360480600Min: 674.49 / Avg: 675.13 / Max: 675.88Min: 674.22 / Avg: 674.86 / Max: 675.59Min: 667.84 / Avg: 668.1 / Max: 668.3Min: 680.64 / Avg: 682.87 / Max: 686.23Min: 647.39 / Avg: 647.82 / Max: 648.37Min: 690.58 / Avg: 690.94 / Max: 691.191. (CC) gcc options: -O3 -march=native -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4KClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.148121620SE +/- 0.11, N = 3SE +/- 0.11, N = 3SE +/- 0.05, N = 3SE +/- 0.05, N = 3SE +/- 0.08, N = 317.2217.1316.2917.0317.371. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4KClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.148121620Min: 17.08 / Avg: 17.22 / Max: 17.45Min: 16.91 / Avg: 17.13 / Max: 17.24Min: 16.21 / Avg: 16.29 / Max: 16.38Min: 16.94 / Avg: 17.03 / Max: 17.09Min: 17.27 / Avg: 17.37 / Max: 17.521. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as the eventual successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: DefaultClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.00.65661.31321.96982.62643.283SE +/- 0.027, N = 3SE +/- 0.031, N = 3SE +/- 0.038, N = 3SE +/- 0.032, N = 7SE +/- 0.010, N = 32.7392.7432.7782.9182.8161. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
OpenBenchmarking.orgSeconds, Fewer Is BetterWebP2 Image Encode 20210126Encode Settings: DefaultClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.0246810Min: 2.69 / Avg: 2.74 / Max: 2.78Min: 2.69 / Avg: 2.74 / Max: 2.8Min: 2.73 / Avg: 2.78 / Max: 2.85Min: 2.76 / Avg: 2.92 / Max: 3.01Min: 2.8 / Avg: 2.82 / Max: 2.841. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4KClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1918273645SE +/- 0.43, N = 3SE +/- 0.31, N = 3SE +/- 0.38, N = 3SE +/- 0.19, N = 3SE +/- 0.29, N = 338.1137.2839.1239.3239.711. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4KClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1816243240Min: 37.26 / Avg: 38.11 / Max: 38.55Min: 36.76 / Avg: 37.28 / Max: 37.83Min: 38.41 / Avg: 39.12 / Max: 39.73Min: 38.95 / Avg: 39.32 / Max: 39.58Min: 39.22 / Avg: 39.71 / Max: 40.221. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4KClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.13691215SE +/- 0.10, N = 3SE +/- 0.03, N = 3SE +/- 0.11, N = 6SE +/- 0.04, N = 3SE +/- 0.01, N = 38.999.149.579.109.411. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4KClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.13691215Min: 8.85 / Avg: 8.99 / Max: 9.19Min: 9.08 / Avg: 9.14 / Max: 9.2Min: 9.09 / Avg: 9.57 / Max: 9.8Min: 9.03 / Avg: 9.1 / Max: 9.17Min: 9.4 / Avg: 9.41 / Max: 9.421. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

x265

This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4KClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0714212835SE +/- 0.23, N = 3SE +/- 0.25, N = 3SE +/- 0.09, N = 3SE +/- 0.09, N = 3SE +/- 0.07, N = 3SE +/- 0.13, N = 330.3229.9428.9128.6028.7930.441. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.orgFrames Per Second, More Is Betterx265 3.4Video Input: Bosphorus 4KClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0714212835Min: 30.04 / Avg: 30.32 / Max: 30.78Min: 29.64 / Avg: 29.94 / Max: 30.45Min: 28.75 / Avg: 28.91 / Max: 29.06Min: 28.43 / Avg: 28.6 / Max: 28.75Min: 28.67 / Avg: 28.79 / Max: 28.92Min: 30.26 / Avg: 30.44 / Max: 30.681. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4KClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1816243240SE +/- 0.48, N = 3SE +/- 0.22, N = 3SE +/- 0.12, N = 3SE +/- 0.19, N = 3SE +/- 0.47, N = 333.3933.1434.5635.2635.261. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
OpenBenchmarking.orgFrames Per Second, More Is BetterAOM AV1 3.0Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4KClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1816243240Min: 32.44 / Avg: 33.39 / Max: 33.95Min: 32.7 / Avg: 33.14 / Max: 33.36Min: 34.34 / Avg: 34.56 / Max: 34.73Min: 34.98 / Avg: 35.26 / Max: 35.63Min: 34.57 / Avg: 35.26 / Max: 36.161. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, More Is Better)
Clang 12.0:  0.53 (SE +/- 0.00, N = 3; Min: 0.53 / Max: 0.53)
Clang 11.0:  0.53 (SE +/- 0.00, N = 3; Min: 0.53 / Max: 0.53)
GCC 9.3:     0.50 (SE +/- 0.00, N = 3; Min: 0.5 / Max: 0.5)
GCC 10.3:    0.52 (SE +/- 0.00, N = 3; Min: 0.52 / Max: 0.52)
GCC 11.0.1:  0.52 (SE +/- 0.00, N = 3; Min: 0.52 / Max: 0.52)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Tachyon

This is a test of the threaded Tachyon, a parallel ray-tracing system, measuring the time to ray-trace a sample scene. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99b6Total TimeClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.048121620SE +/- 0.06, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.01, N = 316.0516.4115.6816.1515.5016.061. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread
OpenBenchmarking.orgSeconds, Fewer Is BetterTachyon 0.99b6Total TimeClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.048121620Min: 15.95 / Avg: 16.05 / Max: 16.15Min: 16.38 / Avg: 16.41 / Max: 16.45Min: 15.65 / Avg: 15.68 / Max: 15.72Min: 16.12 / Avg: 16.15 / Max: 16.17Min: 15.47 / Avg: 15.5 / Max: 15.52Min: 16.04 / Avg: 16.06 / Max: 16.071. (CC) gcc options: -m64 -O3 -fomit-frame-pointer -ffast-math -ltachyon -lm -lpthread

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression SpeedClang 12.0Clang 11.0Clang 12.0 LTOGCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.01224364860SE +/- 0.80, N = 3SE +/- 0.33, N = 3SE +/- 0.02, N = 3SE +/- 0.77, N = 4SE +/- 0.01, N = 3SE +/- 0.73, N = 4SE +/- 0.48, N = 352.0752.3550.9353.8352.8751.3253.771. (CC) gcc options: -O3
OpenBenchmarking.orgMB/s, More Is BetterLZ4 Compression 1.9.3Compression Level: 3 - Compression SpeedClang 12.0Clang 11.0Clang 12.0 LTOGCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.01122334455Min: 50.59 / Avg: 52.07 / Max: 53.36Min: 51.96 / Avg: 52.35 / Max: 53Min: 50.91 / Avg: 50.93 / Max: 50.96Min: 52.22 / Avg: 53.83 / Max: 55.58Min: 52.85 / Avg: 52.87 / Max: 52.89Min: 49.16 / Avg: 51.32 / Max: 52.33Min: 53.05 / Avg: 53.77 / Max: 54.671. (CC) gcc options: -O3

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: Visual Quality Optimized - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.080160240320400SE +/- 1.11, N = 3SE +/- 1.91, N = 3SE +/- 3.83, N = 3SE +/- 0.47, N = 3SE +/- 0.70, N = 3SE +/- 2.72, N = 3372.49373.99354.21364.12366.39373.891. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: Visual Quality Optimized - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.070140210280350Min: 370.44 / Avg: 372.49 / Max: 374.27Min: 370.89 / Avg: 373.99 / Max: 377.47Min: 346.58 / Avg: 354.21 / Max: 358.59Min: 363.18 / Avg: 364.12 / Max: 364.65Min: 365.55 / Avg: 366.39 / Max: 367.79Min: 368.57 / Avg: 373.89 / Max: 377.511. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 64 - Buffer Length: 256 - Filter Length: 57Clang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0700M1400M2100M2800M3500MSE +/- 6045475.81, N = 3SE +/- 2452436.43, N = 3SE +/- 2961043.36, N = 3SE +/- 4643753.27, N = 3SE +/- 1154700.54, N = 3SE +/- 1234233.91, N = 33070633333305136666729404666672942866667298940000031004000001. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid
OpenBenchmarking.orgsamples/s, More Is BetterLiquid-DSP 2021.01.31Threads: 64 - Buffer Length: 256 - Filter Length: 57Clang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0500M1000M1500M2000M2500MMin: 3058800000 / Avg: 3070633333.33 / Max: 3078700000Min: 3046800000 / Avg: 3051366666.67 / Max: 3055200000Min: 2934900000 / Avg: 2940466666.67 / Max: 2945000000Min: 2936200000 / Avg: 2942866666.67 / Max: 2951800000Min: 2987400000 / Avg: 2989400000 / Max: 2991400000Min: 3098700000 / Avg: 3100400000 / Max: 31028000001. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 1 - Mode: Read OnlyClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.15K10K15K20K25KSE +/- 303.43, N = 3SE +/- 289.16, N = 3SE +/- 41.57, N = 3SE +/- 118.05, N = 3SE +/- 281.76, N = 1524310249432389524845236611. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgTPS, More Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 1 - Mode: Read OnlyClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.14K8K12K16K20KMin: 23860.82 / Avg: 24309.92 / Max: 24887.93Min: 24402.51 / Avg: 24942.89 / Max: 25391.5Min: 23820.07 / Avg: 23894.52 / Max: 23963.8Min: 24613.68 / Avg: 24844.55 / Max: 25002.74Min: 21416.76 / Avg: 23660.68 / Max: 25589.241. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
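
For reference, a minimal sketch of libwebp's simple encoding API covering a lossy and a lossless encode; the test profile drives the cwebp utility on a 6000x4000 JPEG, whereas this example feeds a small synthetic RGB buffer.

    /* Encode the same RGB buffer once lossy and once lossless with libwebp.
     * Build: gcc -O3 webp_demo.c -o webp_demo -lwebp
     */
    #include <webp/encode.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        enum { W = 256, H = 256 };
        static uint8_t rgb[W * H * 3];
        memset(rgb, 200, sizeof(rgb));            /* flat grey test image */

        uint8_t *out = NULL;
        size_t lossy_size = WebPEncodeRGB(rgb, W, H, W * 3, 90.0f, &out);
        printf("lossy (q=90): %zu bytes\n", lossy_size);
        WebPFree(out);

        size_t lossless_size = WebPEncodeLosslessRGB(rgb, W, H, W * 3, &out);
        printf("lossless: %zu bytes\n", lossless_size);
        WebPFree(out);
        return 0;
    }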

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, LosslessClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0510152025SE +/- 0.02, N = 3SE +/- 0.13, N = 3SE +/- 0.13, N = 3SE +/- 0.05, N = 3SE +/- 0.04, N = 3SE +/- 0.03, N = 319.0218.5719.3018.8818.3119.13-ltiff1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: Quality 100, LosslessClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0510152025Min: 18.99 / Avg: 19.02 / Max: 19.06Min: 18.41 / Avg: 18.57 / Max: 18.83Min: 19.07 / Avg: 19.3 / Max: 19.51Min: 18.82 / Avg: 18.88 / Max: 18.98Min: 18.23 / Avg: 18.31 / Max: 18.36Min: 19.09 / Avg: 19.13 / Max: 19.191. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: VMAF Optimized - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0110220330440550SE +/- 1.37, N = 3SE +/- 0.23, N = 3SE +/- 0.82, N = 3SE +/- 0.24, N = 3SE +/- 1.15, N = 3SE +/- 2.67, N = 3487.43481.05463.12472.61472.32476.951. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: VMAF Optimized - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.090180270360450Min: 485.57 / Avg: 487.43 / Max: 490.1Min: 480.74 / Avg: 481.05 / Max: 481.51Min: 461.82 / Avg: 463.12 / Max: 464.64Min: 472.26 / Avg: 472.61 / Max: 473.08Min: 470.02 / Avg: 472.32 / Max: 473.64Min: 473.05 / Avg: 476.95 / Max: 482.071. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0110220330440550SE +/- 0.73, N = 3SE +/- 1.76, N = 3SE +/- 0.32, N = 3SE +/- 2.08, N = 3SE +/- 1.13, N = 3SE +/- 1.94, N = 3488.23482.02464.57477.67478.16478.621. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
OpenBenchmarking.orgFrames Per Second, More Is BetterSVT-VP9 0.3Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.090180270360450Min: 486.79 / Avg: 488.23 / Max: 489.18Min: 479.86 / Avg: 482.02 / Max: 485.5Min: 463.99 / Avg: 464.57 / Max: 465.08Min: 474.25 / Avg: 477.67 / Max: 481.44Min: 476.25 / Avg: 478.16 / Max: 480.16Min: 475.41 / Avg: 478.62 / Max: 482.121. (CC) gcc options: -O3 -fcommon -march=native -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench for facilitating the database benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average LatencyClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.10.00950.0190.02850.0380.0475SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.000, N = 3SE +/- 0.000, N = 3SE +/- 0.001, N = 150.0410.0400.0420.0400.0421. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.orgms, Fewer Is BetterPostgreSQL pgbench 13.0Scaling Factor: 100 - Clients: 1 - Mode: Read Only - Average LatencyClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.112345Min: 0.04 / Avg: 0.04 / Max: 0.04Min: 0.04 / Avg: 0.04 / Max: 0.04Min: 0.04 / Avg: 0.04 / Max: 0.04Min: 0.04 / Avg: 0.04 / Max: 0.04Min: 0.04 / Avg: 0.04 / Max: 0.051. (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, More Is Better)
Clang 12.0:  0.21 (SE +/- 0.00, N = 3; Min: 0.21 / Max: 0.21)
Clang 11.0:  0.21 (SE +/- 0.00, N = 3; Min: 0.21 / Max: 0.21)
GCC 9.3:     0.20 (SE +/- 0.00, N = 3; Min: 0.2 / Max: 0.2)
GCC 10.3:    0.21 (SE +/- 0.00, N = 3; Min: 0.21 / Max: 0.21)
GCC 11.0.1:  0.21 (SE +/- 0.00, N = 3; Min: 0.21 / Max: 0.21)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: KASUMI - DecryptClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.020406080100SE +/- 0.06, N = 3SE +/- 0.04, N = 3SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.02, N = 384.2380.2284.1381.4582.951. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: KASUMI - DecryptClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.01632486480Min: 84.12 / Avg: 84.23 / Max: 84.29Min: 80.17 / Avg: 80.22 / Max: 80.31Min: 84.11 / Avg: 84.13 / Max: 84.14Min: 81.45 / Avg: 81.45 / Max: 81.47Min: 82.92 / Avg: 82.95 / Max: 82.971. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: DefaultClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.00.31430.62860.94291.25721.5715SE +/- 0.001, N = 3SE +/- 0.001, N = 3SE +/- 0.000, N = 3SE +/- 0.002, N = 3SE +/- 0.000, N = 3SE +/- 0.002, N = 31.3311.3361.3971.3721.3861.351-ltiff1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg
OpenBenchmarking.orgEncode Time - Seconds, Fewer Is BetterWebP Image Encode 1.1Encode Settings: DefaultClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.0246810Min: 1.33 / Avg: 1.33 / Max: 1.33Min: 1.33 / Avg: 1.34 / Max: 1.34Min: 1.4 / Avg: 1.4 / Max: 1.4Min: 1.37 / Avg: 1.37 / Max: 1.37Min: 1.39 / Avg: 1.39 / Max: 1.39Min: 1.35 / Avg: 1.35 / Max: 1.351. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg

SciMark

This test runs the ANSI C version of SciMark 2.0, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This test is made up of Fast Fourier Transform, Jacobi Successive Over-relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMflops, More Is BetterSciMark 2.0Computational Test: Dense LU Matrix FactorizationClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.02K4K6K8K10KSE +/- 7.16, N = 3SE +/- 77.81, N = 3SE +/- 28.39, N = 3SE +/- 33.93, N = 3SE +/- 25.06, N = 3SE +/- 0.22, N = 38848.409146.889178.979248.899263.559021.831. (CC) gcc options: -O3 -march=native -lm
OpenBenchmarking.orgMflops, More Is BetterSciMark 2.0Computational Test: Dense LU Matrix FactorizationClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.016003200480064008000Min: 8837.66 / Avg: 8848.4 / Max: 8861.98Min: 8991.26 / Avg: 9146.88 / Max: 9225.1Min: 9149.58 / Avg: 9178.97 / Max: 9235.74Min: 9181.03 / Avg: 9248.89 / Max: 9282.97Min: 9236.12 / Avg: 9263.55 / Max: 9313.6Min: 9021.6 / Avg: 9021.83 / Max: 9022.281. (CC) gcc options: -O3 -march=native -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.2Video Input: Chimera 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.030060090012001500SE +/- 2.95, N = 3SE +/- 6.69, N = 3SE +/- 5.12, N = 3SE +/- 3.74, N = 3SE +/- 1.75, N = 3SE +/- 0.97, N = 31198.221190.411145.501171.041180.441188.43MIN: 700.24 / MAX: 1494.16-lm - MIN: 685.16 / MAX: 1496.36-lm - MIN: 664.19 / MAX: 1441.54-lm - MIN: 683.28 / MAX: 1473.51-lm - MIN: 680.31 / MAX: 1485.74-lm - MIN: 703.73 / MAX: 1484.941. (CC) gcc options: -O3 -march=native -pthread
OpenBenchmarking.orgFPS, More Is Betterdav1d 0.8.2Video Input: Chimera 1080pClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.02004006008001000Min: 1192.93 / Avg: 1198.22 / Max: 1203.12Min: 1178.82 / Avg: 1190.41 / Max: 1201.98Min: 1136.57 / Avg: 1145.5 / Max: 1154.32Min: 1163.79 / Avg: 1171.04 / Max: 1176.25Min: 1177.11 / Avg: 1180.44 / Max: 1183.05Min: 1186.48 / Avg: 1188.43 / Max: 1189.461. (CC) gcc options: -O3 -march=native -pthread

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: CAST-256 - DecryptClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.0306090120150SE +/- 0.01, N = 3SE +/- 0.01, N = 3SE +/- 0.08, N = 3SE +/- 0.32, N = 3SE +/- 0.08, N = 3133.05127.74127.34127.78128.011. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: CAST-256 - DecryptClang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.020406080100Min: 133.03 / Avg: 133.05 / Max: 133.07Min: 127.72 / Avg: 127.74 / Max: 127.75Min: 127.19 / Avg: 127.34 / Max: 127.44Min: 127.15 / Avg: 127.78 / Max: 128.14Min: 127.84 / Avg: 128.01 / Max: 128.11. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: CAST-256Clang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.0306090120150SE +/- 0.02, N = 3SE +/- 0.02, N = 3SE +/- 0.09, N = 3SE +/- 0.33, N = 3SE +/- 0.06, N = 3132.82128.59127.30127.74127.771. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt
OpenBenchmarking.orgMiB/s, More Is BetterBotan 2.17.3Test: CAST-256Clang 12.0Clang 11.0GCC 9.3GCC 10.3AMD AOCC 3.020406080100Min: 132.79 / Avg: 132.82 / Max: 132.85Min: 128.56 / Avg: 128.59 / Max: 128.61Min: 127.13 / Avg: 127.3 / Max: 127.39Min: 127.09 / Avg: 127.74 / Max: 128.12Min: 127.64 / Avg: 127.77 / Max: 127.831. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

SciMark

This test runs the ANSI C version of SciMark 2.0, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This test is made up of Fast Foruier Transform, Jacobi Successive Over-relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.orgMflops, More Is BetterSciMark 2.0Computational Test: CompositeClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.07001400210028003500SE +/- 1.11, N = 3SE +/- 15.12, N = 3SE +/- 5.86, N = 3SE +/- 6.50, N = 3SE +/- 5.19, N = 3SE +/- 1.29, N = 33190.623319.343229.223235.943182.353298.291. (CC) gcc options: -O3 -march=native -lm
OpenBenchmarking.orgMflops, More Is BetterSciMark 2.0Computational Test: CompositeClang 12.0Clang 11.0GCC 9.3GCC 10.3GCC 11.0.1AMD AOCC 3.06001200180024003000Min: 3188.61 / Avg: 3190.62 / Max: 3192.43Min: 3289.1 / Avg: 3319.34 / Max: 3335.12Min: 3222.66 / Avg: 3229.22 / Max: 3240.91Min: 3222.94 / Avg: 3235.94 / Max: 3242.66Min: 3176.72 / Avg: 3182.35 / Max: 3192.72Min: 3296.75 / Avg: 3298.29 / Max: 3300.851. (CC) gcc options: -O3 -march=native -lm

Gcrypt Library

Libgcrypt is a general purpose cryptographic library developed as part of the GnuPG project. This is a benchmark of libgcrypt's integrated benchmark, measuring the time to run the benchmark command with a cipher/mac/hash repetition count of 50 as a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.
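For context only, a minimal libgcrypt call sequence (one AES-256 block in CBC mode, with placeholder key and IV) might look like the sketch below; this is a generic illustration, not the code the libgcrypt benchmark command runs:

    #include <gcrypt.h>
    #include <cstdio>

    int main() {
        gcry_check_version(nullptr);          // initialize the library
        gcry_cipher_hd_t hd;
        gcry_cipher_open(&hd, GCRY_CIPHER_AES256, GCRY_CIPHER_MODE_CBC, 0);

        unsigned char key[32] = {0};          // placeholder key
        unsigned char iv[16]  = {0};          // placeholder IV
        gcry_cipher_setkey(hd, key, sizeof key);
        gcry_cipher_setiv(hd, iv, sizeof iv);

        unsigned char buf[16] = {0};          // one block, encrypted in place
        gcry_cipher_encrypt(hd, buf, sizeof buf, nullptr, 0);
        std::printf("first ciphertext byte: %02x\n", buf[0]);

        gcry_cipher_close(hd);
        return 0;
    }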

Gcrypt Library 1.9 (Seconds, fewer is better):
  Clang 12.0: 236.92, Clang 11.0: 240.21, GCC 9.3: 232.57, GCC 10.3: 231.24, GCC 11.0.1: 233.51, AMD AOCC 3.0: 240.41
  (CC) gcc options: -O3 -march=native -fvisibility=hidden

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.
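As a small, self-contained sketch of the kind of transform FFTW computes (assumptions: a 1-D complex-to-complex DFT of length 4096 and the FFTW_ESTIMATE planner flag; this is not the test profile's own driver):

    #include <fftw3.h>

    int main() {
        const int N = 4096;
        // Allocate FFTW-aligned buffers and plan a 1-D complex DFT.
        fftw_complex *in  = static_cast<fftw_complex*>(fftw_malloc(sizeof(fftw_complex) * N));
        fftw_complex *out = static_cast<fftw_complex*>(fftw_malloc(sizeof(fftw_complex) * N));
        fftw_plan plan = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

        for (int i = 0; i < N; ++i) { in[i][0] = i; in[i][1] = 0.0; }   // fill the input signal
        fftw_execute(plan);                                             // compute the transform

        fftw_destroy_plan(plan);
        fftw_free(in);
        fftw_free(out);
        return 0;
    }

Link with -lfftw3; the single-precision "Float + SSE" builds use the fftwf_ variants and -lfftw3f instead.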

FFTW 3.3.6 - Build: Stock - Size: 2D FFT Size 4096 (Mflops, more is better):
  Clang 12.0: 6744.1, Clang 11.0: 6823.8, GCC 9.3: 7007.3, GCC 10.3: 6974.0, GCC 11.0.1: 6948.2, AMD AOCC 3.0: 6875.3
  (CC) gcc options: -pthread -O3 -march=native -lm

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile exercises both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Exhaustive (Seconds, fewer is better):
  Clang 12.0: 18.99, Clang 11.0: 19.03, GCC 9.3: 19.48, GCC 10.3: 19.46, GCC 11.0.1: 19.62, AMD AOCC 3.0: 18.91
  (CXX) g++ options: -O3 -march=native -flto -pthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
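The cwebp utility sits on top of libwebp's encoder; a minimal sketch of the library's one-shot encode API, using a small synthetic gray image in place of the 6000x4000 sample photo, might look like:

    #include <webp/encode.h>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main() {
        const int width = 64, height = 64;
        std::vector<uint8_t> rgb(width * height * 3, 0x80);   // flat gray RGB image

        // One-shot lossy encode at quality 100; cwebp exposes the same encoder with more options.
        uint8_t *output = nullptr;
        size_t size = WebPEncodeRGB(rgb.data(), width, height,
                                    width * 3 /* stride */, 100.0f /* quality */, &output);
        std::printf("encoded %zu bytes\n", size);
        WebPFree(output);
        return 0;
    }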

WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better):
  Clang 12.0: 38.45, Clang 11.0: 37.73, GCC 9.3: 39.07, GCC 10.3: 38.55, GCC 11.0.1: 37.95, AMD AOCC 3.0: 38.34
  (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg (with -ltiff additionally noted for one configuration)

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
  Clang 12.0: 4.87, Clang 11.0: 4.95, GCC 9.3: 4.78, GCC 10.3: 4.84, GCC 11.0.1: 4.84
  (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1 - Encode Settings: Quality 100 (Encode Time - Seconds, fewer is better):
  Clang 12.0: 2.199, Clang 11.0: 2.240, GCC 9.3: 2.273, GCC 10.3: 2.225, GCC 11.0.1: 2.274, AMD AOCC 3.0: 2.262
  (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -lpng16 -ljpeg (with -ltiff additionally noted for one configuration)

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Float + SSE - Size: 2D FFT Size 2048 (Mflops, more is better):
  Clang 12.0: 31935, Clang 11.0: 31741, GCC 9.3: 31341, GCC 10.3: 32061, GCC 11.0.1: 31662, AMD AOCC 3.0: 31013
  (CC) gcc options: -pthread -O3 -march=native -lm

simdjson

This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
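A minimal sketch of simdjson's DOM API follows; the tiny inline document and the field lookup are illustrative only, whereas the benchmark parses much larger JSON files such as the Kostya dataset:

    #include "simdjson.h"
    #include <cstdint>
    #include <iostream>
    #include <string_view>

    int main() {
        // Parse a small padded JSON document and pull out one integer field.
        simdjson::dom::parser parser;
        simdjson::padded_string json(std::string_view(R"({"name":"epyc","cores":64})"));
        simdjson::dom::element doc = parser.parse(json);

        int64_t cores = doc["cores"];          // throws on error in exception mode
        std::cout << "cores: " << cores << "\n";
        return 0;
    }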

simdjson 0.8.2 - Throughput Test: Kostya (GB/s, more is better):
  Clang 12.0: 2.75, Clang 11.0: 2.68, GCC 9.3: 2.75, GCC 10.3: 2.77, AMD AOCC 3.0: 2.73
  (CXX) g++ options: -O3 -march=native -pthread

AOM AV1

This is a test of the AOMedia AV1 encoder (libaom) developed by AOMedia and Google. Learn more via the OpenBenchmarking.org test page.

AOM AV1 3.0 - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better):
  Clang 12.0: 22.13, Clang 11.0: 22.00, GCC 9.3: 21.42, GCC 10.3: 21.64, GCC 11.0.1: 22.11
  (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 8 (MP/s, more is better):
  Clang 12.0: 28.13, Clang 11.0: 27.24, AMD AOCC 3.0: 27.29
  (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarking. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency (ms, fewer is better):
  Clang 12.0: 0.094, Clang 11.0: 0.094, GCC 9.3: 0.095, GCC 10.3: 0.093, GCC 11.0.1: 0.092
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 100 - Mode: Read Only (TPS, more is better):
  Clang 12.0: 1069022, Clang 11.0: 1069367, GCC 9.3: 1057125, GCC 10.3: 1076357, GCC 11.0.1: 1090824
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write (TPS, more is better):
  Clang 12.0: 3281, Clang 11.0: 3312, GCC 9.3: 3298, GCC 10.3: 3369, GCC 11.0.1: 3383
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

x265

This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K options, measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.
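For orientation, a minimal sketch of the x265 library API (not the benchmark harness itself) is shown below; the 1080p/60 parameters are placeholders and a real encoder loop would also feed frames through x265_encoder_encode():

    #include <x265.h>
    #include <cstdio>

    int main() {
        // Configure a 1080p encode starting from the "medium" preset defaults.
        x265_param *param = x265_param_alloc();
        x265_param_default_preset(param, "medium", nullptr);
        param->sourceWidth  = 1920;
        param->sourceHeight = 1080;
        param->fpsNum       = 60;
        param->fpsDenom     = 1;
        param->internalCsp  = X265_CSP_I420;

        x265_encoder *enc = x265_encoder_open(param);
        if (!enc) { std::puts("encoder open failed"); return 1; }

        // Only open and tear down the encoder in this sketch.
        x265_encoder_close(enc);
        x265_param_free(param);
        x265_cleanup();
        return 0;
    }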

x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, more is better):
  Clang 12.0: 74.00, Clang 11.0: 73.36, GCC 9.3: 72.14, GCC 10.3: 72.60, GCC 11.0.1: 71.79, AMD AOCC 3.0: 73.51
  (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarking. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 1 - Mode: Read Write - Average Latency (ms, fewer is better):
  Clang 12.0: 0.305, Clang 11.0: 0.302, GCC 9.3: 0.303, GCC 10.3: 0.297, GCC 11.0.1: 0.296
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

LZ4 Compression

This test measures the time needed to compress/decompress a sample file (an Ubuntu ISO) using LZ4 compression. Learn more via the OpenBenchmarking.org test page.
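As a small illustration of the compress/decompress calls involved (a synthetic buffer stands in for the Ubuntu ISO used by the actual test):

    #include <lz4.h>
    #include <lz4hc.h>
    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
        // Round-trip a buffer through the LZ4 HC (level 9) compressor and the safe decompressor.
        std::string src(64 * 1024, 'A');
        std::vector<char> compressed(LZ4_compressBound(static_cast<int>(src.size())));
        int csize = LZ4_compress_HC(src.data(), compressed.data(),
                                    static_cast<int>(src.size()),
                                    static_cast<int>(compressed.size()), 9);

        std::vector<char> restored(src.size());
        int dsize = LZ4_decompress_safe(compressed.data(), restored.data(),
                                        csize, static_cast<int>(restored.size()));
        std::printf("compressed %zu -> %d bytes, restored %d bytes\n", src.size(), csize, dsize);
        return 0;
    }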

LZ4 Compression 1.9.3 - Compression Level: 9 - Decompression Speed (MB/s, more is better):
  Clang 12.0: 13926.5, Clang 11.0: 13927.9, Clang 12.0 LTO: 13698.7, GCC 9.3: 13895.3, GCC 10.3: 13806.6, GCC 11.0.1: 13857.4, AMD AOCC 3.0: 13561.5
  (CC) gcc options: -O3

LZ4 Compression 1.9.3 - Compression Level: 3 - Decompression Speed (MB/s, more is better):
  Clang 12.0: 13911.5, Clang 11.0: 13840.3, Clang 12.0 LTO: 13715.0, GCC 9.3: 13793.4, GCC 10.3: 13906.1, GCC 11.0.1: 13882.2, AMD AOCC 3.0: 13562.5
  (CC) gcc options: -O3

Opus Codec Encoding

Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
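The opusenc tool from Opus-Tools builds on libopus; a minimal libopus encoder sketch for one 20 ms stereo frame of silence (placeholder audio) could be:

    #include <opus/opus.h>
    #include <cstdio>
    #include <vector>

    int main() {
        // Create a 48 kHz stereo encoder and encode one frame.
        int err = 0;
        OpusEncoder *enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
        if (err != OPUS_OK) return 1;

        const int frame_size = 960;                       // 20 ms at 48 kHz
        std::vector<opus_int16> pcm(frame_size * 2, 0);   // stereo silence
        unsigned char packet[4000];
        opus_int32 bytes = opus_encode(enc, pcm.data(), frame_size, packet, sizeof packet);
        std::printf("encoded packet: %d bytes\n", bytes);

        opus_encoder_destroy(enc);
        return 0;
    }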

Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, fewer is better):
  Clang 12.0: 7.567, Clang 11.0: 7.392, GCC 9.3: 7.504, GCC 10.3: 7.469, GCC 11.0.1: 7.381
  (CXX) g++ options: -O3 -march=native -logg -lm (with -fvisibility=hidden additionally noted for three of the five configurations)

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: PNG - Encode Speed: 8 (MP/s, more is better):
  Clang 12.0: 0.82, Clang 11.0: 0.80, AMD AOCC 3.0: 0.81
  (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2 - Video Input: Summer Nature 4K (FPS, more is better):
  Clang 12.0: 541.56, Clang 11.0: 543.43, GCC 9.3: 530.82, GCC 10.3: 536.71, GCC 11.0.1: 538.28, AMD AOCC 3.0: 541.58
  (CC) gcc options: -O3 -march=native -pthread (with -lm additionally noted for all but the Clang 12.0 build)

PostgreSQL pgbench

This is a benchmark of PostgreSQL using pgbench to facilitate the database benchmarking. Learn more via the OpenBenchmarking.org test page.

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only (TPS, more is better):
  Clang 12.0: 1071209, Clang 11.0: 1065506, GCC 9.3: 1067486, GCC 10.3: 1089731, GCC 11.0.1: 1090160
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

PostgreSQL pgbench 13.0 - Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency (ms, fewer is better):
  Clang 12.0: 0.234, Clang 11.0: 0.235, GCC 9.3: 0.235, GCC 10.3: 0.230, GCC 11.0.1: 0.230
  (CC) gcc options: -fno-strict-aliasing -fwrapv -O3 -march=native -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2 - Video Input: Summer Nature 1080p (FPS, more is better):
  Clang 12.0: 1244.11, Clang 11.0: 1251.25, GCC 9.3: 1228.63, GCC 10.3: 1245.11, GCC 11.0.1: 1249.74, AMD AOCC 3.0: 1251.91
  (CC) gcc options: -O3 -march=native -pthread (with -lm additionally noted for all but the Clang 12.0 build)

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  Clang 12.0: 0.779776, Clang 11.0: 0.779101, GCC 9.3: 0.786762, GCC 10.3: 0.782476, AMD AOCC 3.0: 0.773233
  (CXX) g++ options: -O3 -march=native -std=c++11 -msse4.1 -fPIC -pie -lpthread -ldl (Clang/AOCC builds use -fopenmp=libomp, GCC builds -fopenmp)

FFTW

FFTW is a C subroutine library for computing the discrete Fourier transform (DFT) in one or more dimensions. Learn more via the OpenBenchmarking.org test page.

FFTW 3.3.6 - Build: Float + SSE - Size: 2D FFT Size 1024 (Mflops, more is better):
  Clang 12.0: 36239, Clang 11.0: 36181, GCC 9.3: 36321, GCC 10.3: 35973, GCC 11.0.1: 35718, AMD AOCC 3.0: 36100
  (CC) gcc options: -pthread -O3 -march=native -lm

JPEG XL

The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities, with JPEG XL offering better image quality and compression than legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.

JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 5 (MP/s, more is better):
  Clang 12.0: 66.66, Clang 11.0: 65.58, AMD AOCC 3.0: 65.57
  (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl

JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 7 (MP/s, more is better):
  Clang 12.0: 66.38, Clang 11.0: 65.43, AMD AOCC 3.0: 65.68
  (CXX) g++ options: -O3 -march=native -funwind-tables -Xclang -mrelax-all -O2 -fPIE -pie -pthread -ldl

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
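A minimal sketch of the ONNX Runtime C++ API is shown below; the model path is a placeholder, while the benchmark's models (e.g. super-resolution-10 and bertsquad-10) come from the ONNX Zoo:

    #include <onnxruntime_cxx_api.h>
    #include <iostream>

    int main() {
        // Create an environment and a session for a local ONNX model.
        Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
        Ort::SessionOptions options;
        options.SetIntraOpNumThreads(64);      // e.g. match the physical core count

        Ort::Session session(env, "super-resolution-10.onnx", options);   // placeholder path
        std::cout << "model inputs: " << session.GetInputCount() << "\n";
        return 0;
    }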

ONNX Runtime 1.6 - Model: super-resolution-10 - Device: OpenMP CPU (Inferences Per Minute, more is better):
  Clang 12.0: 4456, Clang 11.0: 4523, GCC 9.3: 5183, GCC 10.3: 5559, AMD AOCC 3.0: 4383
  (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt (Clang/AOCC builds use -fopenmp=libomp, GCC builds -fopenmp)

ONNX Runtime 1.6 - Model: bertsquad-10 - Device: OpenMP CPU (Inferences Per Minute, more is better):
  Clang 12.0: 498, Clang 11.0: 471, GCC 9.3: 495, GCC 10.3: 505, AMD AOCC 3.0: 459
  (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -ldl -lrt (Clang/AOCC builds use -fopenmp=libomp, GCC builds -fopenmp)

ViennaCL

ViennaCL is an open-source linear algebra library written in C++ and with support for OpenCL and OpenMP. This test profile makes use of ViennaCL's built-in benchmarks. Learn more via the OpenBenchmarking.org test page.
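As an illustrative sketch of the BLAS level-1 style operations behind results such as sDOT below (host vectors are copied into ViennaCL vectors and then reduced); this is not the built-in benchmark itself:

    #include <viennacl/vector.hpp>
    #include <viennacl/linalg/inner_prod.hpp>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        // A single-precision dot product on ViennaCL vectors.
        const std::size_t n = 1000000;
        std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

        viennacl::vector<float> x(n), y(n);
        viennacl::copy(hx.begin(), hx.end(), x.begin());   // host -> ViennaCL transfer
        viennacl::copy(hy.begin(), hy.end(), y.begin());

        float result = viennacl::linalg::inner_prod(x, y); // BLAS level-1 dot
        std::cout << "dot = " << result << "\n";
        return 0;
    }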

ViennaCL 1.7.1 - Test: CPU BLAS - dGEMV-N (GB/s, more is better):
  Clang 12.0: 69.1, Clang 11.0: 51.2, GCC 9.3: 65.0, GCC 10.3: 56.2, GCC 11.0.1: 63.9, AMD AOCC 3.0: 55.2
  (CXX) g++ options: -O3 -march=native -rdynamic -lOpenCL (Clang/AOCC builds use -fopenmp=libomp, GCC builds -fopenmp)

ViennaCL 1.7.1 - Test: CPU BLAS - sDOT (GB/s, more is better):
  Clang 12.0: 434.00, Clang 11.0: 462.00, GCC 9.3: 636.00, GCC 10.3: 592.97, GCC 11.0.1: 649.00, AMD AOCC 3.0: 477.00
  (CXX) g++ options: -O3 -march=native -rdynamic -lOpenCL (Clang/AOCC builds use -fopenmp=libomp, GCC builds -fopenmp)

ViennaCL 1.7.1 - Test: CPU BLAS - sAXPY (GB/s, more is better):
  Clang 12.0: 357.0, Clang 11.0: 412.0, GCC 9.3: 813.0, GCC 10.3: 1350.0, GCC 11.0.1: 1496.0, AMD AOCC 3.0: 326.0
  (CXX) g++ options: -O3 -march=native -rdynamic -lOpenCL (Clang/AOCC builds use -fopenmp=libomp, GCC builds -fopenmp)

ViennaCL 1.7.1 - Test: CPU BLAS - sCOPY (GB/s, more is better):
  Clang 12.0: 471.00, Clang 11.0: 495.00, GCC 9.3: 1217.00, GCC 10.3: 1065.60, GCC 11.0.1: 1210.00, AMD AOCC 3.0: 531.00
  (CXX) g++ options: -O3 -march=native -rdynamic -lOpenCL (Clang/AOCC builds use -fopenmp=libomp, GCC builds -fopenmp)

174 Results Shown

oneDNN
ViennaCL:
  CPU BLAS - dCOPY
  CPU BLAS - dAXPY
Etcpak
ViennaCL:
  CPU BLAS - dGEMM-NN
  CPU BLAS - dGEMM-TN
dav1d
GraphicsMagick
Botan
C-Ray
Botan
oneDNN
LibRaw
FinanceBench
oneDNN
ViennaCL:
  CPU BLAS - dGEMM-NT
  CPU BLAS - dDOT
SVT-AV1
toyBrot Fractal Generator
Etcpak
toyBrot Fractal Generator:
  TBB
  OpenMP
  C++ Tasks
ViennaCL
SciMark
Botan
GraphicsMagick
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
GraphicsMagick
oneDNN
FinanceBench
SVT-AV1
oneDNN
ViennaCL
SVT-AV1
Coremark
ASTC Encoder
oneDNN
FFTW
Liquid-DSP
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
SciMark
GraphicsMagick
ONNX Runtime
Botan
Etcpak
Botan
ASTC Encoder
FLAC Audio Encoding
Botan
LAME MP3 Encoding
TSCP
GraphicsMagick
Ngspice
simdjson
QuantLib
simdjson:
  DistinctUserID
  LargeRand
ONNX Runtime
libavif avifenc
FFTW
oneDNN
FFTW
Botan
FFTW
oneDNN
WebP Image Encode
ONNX Runtime
GraphicsMagick
Liquid-DSP
Botan
oneDNN
FFTW:
  Stock - 1D FFT Size 4096
  Stock - 2D FFT Size 1024
SecureMark
AOM AV1
WebP2 Image Encode
FFTW
PostgreSQL pgbench:
  100 - 100 - Read Write - Average Latency
  100 - 100 - Read Write
FFTW
libavif avifenc
Liquid-DSP
FFTW
SciMark
AOM AV1
libavif avifenc
oneDNN:
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
libavif avifenc:
  0
  10
AOM AV1
WebP2 Image Encode:
  Quality 100, Lossless Compression
  Quality 95, Compression Effort 7
oneDNN
WebP2 Image Encode
LZ4 Compression
FFTW
Timed MrBayes Analysis
GraphicsMagick
SVT-HEVC
Ngspice
AOM AV1
SVT-HEVC
Botan
POV-Ray
FFTW
libavif avifenc
SVT-HEVC
PostgreSQL pgbench
JPEG XL
PostgreSQL pgbench
JPEG XL
SciMark
AOM AV1
WebP2 Image Encode
AOM AV1:
  Speed 9 Realtime - Bosphorus 4K
  Speed 6 Two-Pass - Bosphorus 4K
x265
AOM AV1:
  Speed 8 Realtime - Bosphorus 4K
  Speed 0 Two-Pass - Bosphorus 1080p
Tachyon
LZ4 Compression
SVT-VP9
Liquid-DSP
PostgreSQL pgbench
WebP Image Encode
SVT-VP9:
  VMAF Optimized - Bosphorus 1080p
  PSNR/SSIM Optimized - Bosphorus 1080p
PostgreSQL pgbench
AOM AV1
Botan
WebP Image Encode
SciMark
dav1d
Botan:
  CAST-256 - Decrypt
  CAST-256
SciMark
Gcrypt Library
FFTW
ASTC Encoder
WebP Image Encode
AOM AV1
WebP Image Encode
FFTW
simdjson
AOM AV1
JPEG XL
PostgreSQL pgbench:
  100 - 100 - Read Only - Average Latency
  100 - 100 - Read Only
  100 - 1 - Read Write
x265
PostgreSQL pgbench
LZ4 Compression:
  9 - Decompression Speed
  3 - Decompression Speed
Opus Codec Encoding
JPEG XL
dav1d
PostgreSQL pgbench:
  100 - 250 - Read Only
  100 - 250 - Read Only - Average Latency
dav1d
oneDNN
FFTW
JPEG XL:
  JPEG - 5
  JPEG - 7
ONNX Runtime:
  super-resolution-10 - OpenMP CPU
  bertsquad-10 - OpenMP CPU
ViennaCL:
  CPU BLAS - dGEMV-N
  CPU BLAS - sDOT
  CPU BLAS - sAXPY
  CPU BLAS - sCOPY