11900K Compiler

Intel Core i9-11900K testing with an ASUS ROG MAXIMUS XIII HERO (0707 BIOS) motherboard and AMD Radeon VII 16GB graphics card on Fedora 34 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2105179-IB-11900KCOM00
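For reference, a minimal sketch of how one of these flag configurations could be set up before a run, assuming the Phoronix Test Suite is installed. The flag strings come from this result file's system notes; the benchmark command is the one shown above.

```shell
# Sketch: export the flag set used by one of the three runs compared here.
# The actual suite invocation is left commented out.
export CFLAGS="-O3 -march=native -flto"
export CXXFLAGS="$CFLAGS"
echo "Building tests with: CFLAGS=$CFLAGS"
# phoronix-test-suite benchmark 2105179-IB-11900KCOM00
```

The other two runs differ only in the exported flag strings ("-O3 -march=native" and "-O2").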
This comparison covers tests from the following categories:

- Audio Encoding (4 tests)
- Bioinformatics (3 tests)
- C/C++ Compiler Tests (16 tests)
- CPU Massive (18 tests)
- Creator Workloads (17 tests)
- Database Test Suite (2 tests)
- Encoding (8 tests)
- Fortran Tests (2 tests)
- HPC - High Performance Computing (8 tests)
- Imaging (3 tests)
- Machine Learning (3 tests)
- MPI Benchmarks (2 tests)
- Multi-Core (14 tests)
- OpenMPI Tests (4 tests)
- Programmer / Developer System Benchmarks (2 tests)
- Renderers (3 tests)
- Scientific Computing (5 tests)
- Server (2 tests)
- Server CPU Tests (12 tests)
- Single-Threaded (6 tests)
- Telephony (2 tests)
- Video Encoding (4 tests)
- Common Workstation Benchmarks (3 tests)


Run Details

- GCC 11.1: -O3 -march=native (run May 16 2021; test duration: 4 Hours, 59 Minutes)
- GCC 11.1: -O3 -march=native -flto (run May 16 2021; test duration: 4 Hours, 52 Minutes)
- GCC 11.1: -O2 (run May 17 2021; test duration: 7 Hours, 23 Minutes)

Average test duration: 5 Hours, 45 Minutes.


System Details

- Processor: Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads)
- Motherboard: ASUS ROG MAXIMUS XIII HERO (0707 BIOS)
- Chipset: Intel Tiger Lake-H
- Memory: 32GB
- Disk: 500GB Western Digital WDS500G3X0C-00SJG0 + 15GB Ultra USB 3.0
- Graphics: AMD Radeon VII 16GB (1801/1000MHz)
- Audio: Intel Tiger Lake-H HD Audio
- Monitor: ASUS MG28U
- Network: 2 x Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
- OS: Fedora 34
- Kernel: 5.11.20-300.fc34.x86_64 (x86_64)
- Desktop: GNOME Shell 40.1
- Display Server: X Server + Wayland
- OpenGL: 4.6 Mesa 21.0.3 (LLVM 12.0.0)
- Compiler: GCC 11.1.1 20210428
- File-System: btrfs
- Screen Resolution: 3840x2160

System Notes:

- Transparent Huge Pages: madvise
- GCC 11.1: -O3 -march=native: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
- GCC 11.1: -O3 -march=native -flto: CXXFLAGS="-O3 -march=native -flto" CFLAGS="-O3 -march=native -flto"
- GCC 11.1: -O2: CXXFLAGS=-O2 CFLAGS=-O2
- GCC configure options: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
- Scaling Governor: intel_pstate powersave
- CPU Microcode: 0x3c
- Thermald 2.4.1
- Security: SELinux + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected

(Condensed results overview: per-test values for every benchmark in this comparison across the three configurations, -O3 -march=native, -O3 -march=native -flto, and -O2. The individual results are broken out test by test below.)

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1, Total Time - 4K, 16 Rays Per Pixel (Seconds, fewer is better):

- -O3 -march=native: 47.35 (SE +/- 0.15, N = 3; Min: 47.06 / Max: 47.54)
- -O3 -march=native -flto: 47.61 (SE +/- 0.16, N = 3; Min: 47.31 / Max: 47.82)
- -O2: 106.52 (SE +/- 0.05, N = 3; Min: 106.43 / Max: 106.59)

1. (CC) gcc options: -lm -lpthread -O3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, fewer is better):

- -O3 -march=native: 3.24 (SE +/- 0.02, N = 3; Min: 3.21 / Max: 3.28; runtime MIN: 3.17 / MAX: 6.75)
- -O3 -march=native -flto: 5.60 (SE +/- 0.06, N = 3; Min: 5.48 / Max: 5.69; runtime MIN: 5.41 / MAX: 9.21)
- -O2: 3.48 (SE +/- 0.00, N = 15; Min: 3.44 / Max: 3.51; runtime MIN: 3.4 / MAX: 7.05)

1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2, Video Input: Chimera 1080p 10-bit (FPS, more is better):

- -O3 -march=native: 223.02 (SE +/- 0.03, N = 3; Min: 222.96 / Max: 223.08; runtime MIN: 153.51 / MAX: 490.73)
- -O2 (-lm): 148.40 (SE +/- 0.09, N = 3; Min: 148.27 / Max: 148.58; runtime MIN: 95.23 / MAX: 345.29)

1. (CC) gcc options: -pthread

NCNN


NCNN 20201218, Target: CPU - Model: mnasnet (ms, fewer is better):

- -O3 -march=native: 2.30 (SE +/- 0.06, N = 3; Min: 2.19 / Max: 2.4; runtime MIN: 2.18 / MAX: 3.19)
- -O3 -march=native -flto: 2.27 (SE +/- 0.01, N = 3; Min: 2.25 / Max: 2.3; runtime MIN: 2.21 / MAX: 5.8)
- -O2: 3.11 (SE +/- 0.01, N = 14; Min: 3.07 / Max: 3.14; runtime MIN: 3.05 / MAX: 9.87)

1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100, WAV To MP3 (Seconds, fewer is better):

- -O3 -march=native: 5.479 (SE +/- 0.010, N = 3; Min: 5.47 / Max: 5.5)
- -O3 -march=native -flto: 5.376 (SE +/- 0.003, N = 3; Min: 5.37 / Max: 5.38)
- -O2: 7.304 (SE +/- 0.048, N = 3; Min: 7.23 / Max: 7.39)

1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lm

NCNN


NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better):

- -O3 -march=native: 3.24 (SE +/- 0.04, N = 3; Min: 3.19 / Max: 3.31; runtime MIN: 3.1 / MAX: 6.67)
- -O3 -march=native -flto: 3.25 (SE +/- 0.01, N = 3; Min: 3.23 / Max: 3.26; runtime MIN: 3.14 / MAX: 6.68)
- -O2: 4.20 (SE +/- 0.02, N = 15; Min: 4.14 / Max: 4.45; runtime MIN: 4.04 / MAX: 7.77)

1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: mobilenet (ms, fewer is better):

- -O3 -march=native: 11.83 (SE +/- 0.06, N = 3; Min: 11.72 / Max: 11.89; runtime MIN: 11.62 / MAX: 15.38)
- -O3 -march=native -flto: 13.34 (SE +/- 0.01, N = 3; Min: 13.33 / Max: 13.36; runtime MIN: 13.01 / MAX: 16.82)
- -O2: 15.15 (SE +/- 0.14, N = 15; Min: 14.84 / Max: 17.12; runtime MIN: 14.73 / MAX: 342.42)

1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better):

- -O3 -march=native: 2.55 (SE +/- 0.05, N = 3; Min: 2.47 / Max: 2.64; runtime MIN: 2.43 / MAX: 6.16)
- -O3 -march=native -flto: 2.52 (SE +/- 0.01, N = 3; Min: 2.51 / Max: 2.53; runtime MIN: 2.47 / MAX: 6.06)
- -O2: 3.18 (SE +/- 0.01, N = 15; Min: 3.14 / Max: 3.3; runtime MIN: 3.1 / MAX: 6.78)

1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Enhanced (Iterations Per Minute, more is better):

- -O3 -march=native: 270 (SE +/- 0.33, N = 3; Min: 270 / Max: 271)
- -O3 -march=native -flto: 269
- -O2: 219 (SE +/- 0.33, N = 3; Min: 219 / Max: 220)

1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread

NCNN


NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, fewer is better):

- -O3 -march=native: 4.38 (SE +/- 0.08, N = 3; Min: 4.23 / Max: 4.51; runtime MIN: 4.18 / MAX: 7.9)
- -O3 -march=native -flto: 4.32 (SE +/- 0.01, N = 3; Min: 4.3 / Max: 4.34; runtime MIN: 4.25 / MAX: 8.71)
- -O2: 5.23 (SE +/- 0.02, N = 15; Min: 5.18 / Max: 5.46; runtime MIN: 5.12 / MAX: 8.96)

1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: resnet50 (ms, fewer is better):

- -O3 -march=native: 18.23 (SE +/- 0.16, N = 3; Min: 17.91 / Max: 18.4; runtime MIN: 17.76 / MAX: 22.08)
- -O3 -march=native -flto: 18.43 (SE +/- 0.06, N = 3; Min: 18.36 / Max: 18.55; runtime MIN: 18.19 / MAX: 22.12)
- -O2: 22.07 (SE +/- 0.08, N = 15; Min: 21.55 / Max: 22.34; runtime MIN: 21.33 / MAX: 27.94)

1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

GraphicsMagick


GraphicsMagick 1.3.33, Operation: Sharpen (Iterations Per Minute, more is better):

- -O3 -march=native: 195 (SE +/- 0.88, N = 3; Min: 194 / Max: 197)
- -O3 -march=native -flto: 195 (SE +/- 0.67, N = 3; Min: 194 / Max: 196)
- -O2: 164 (SE +/- 0.33, N = 3; Min: 164 / Max: 165)

1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread

Opus Codec Encoding

Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1, WAV To Opus Encode (Seconds, fewer is better):

- -O3 -march=native: 5.587 (SE +/- 0.007, N = 5; Min: 5.57 / Max: 5.61)
- -O3 -march=native -flto: 5.575 (SE +/- 0.033, N = 5; Min: 5.54 / Max: 5.71)
- -O2: 6.467 (SE +/- 0.030, N = 5; Min: 6.43 / Max: 6.59)

1. (CXX) g++ options: -fvisibility=hidden -logg -lm

NCNN


NCNN 20201218, Target: CPU - Model: yolov4-tiny (ms, fewer is better):

- -O3 -march=native: 20.23 (SE +/- 0.04, N = 3; Min: 20.16 / Max: 20.27; runtime MIN: 20.02 / MAX: 23.8)
- -O3 -march=native -flto: 23.45 (SE +/- 0.08, N = 3; Min: 23.37 / Max: 23.61; runtime MIN: 23.14 / MAX: 26.98)
- -O2: 21.07 (SE +/- 0.09, N = 15; Min: 20.43 / Max: 21.5; runtime MIN: 20.27 / MAX: 26.62)

1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

AOBench

AOBench is a lightweight ambient occlusion renderer, written in C. The test profile is using a size of 2048 x 2048. Learn more via the OpenBenchmarking.org test page.

AOBench, Size: 2048 x 2048 - Total Time (Seconds, fewer is better):

- -O3 -march=native: 21.54 (SE +/- 0.01, N = 3; Min: 21.52 / Max: 21.56)
- -O3 -march=native -flto: 21.58 (SE +/- 0.05, N = 3; Min: 21.49 / Max: 21.64)
- -O2: 24.46 (SE +/- 0.03, N = 3; Min: 24.42 / Max: 24.51)

1. (CC) gcc options: -lm -O3

GraphicsMagick


GraphicsMagick 1.3.33, Operation: Resizing (Iterations Per Minute, more is better):

- -O3 -march=native: 1198 (SE +/- 6.89, N = 3; Min: 1185 / Max: 1208)
- -O3 -march=native -flto: 1229 (SE +/- 1.20, N = 3; Min: 1227 / Max: 1231)
- -O2: 1091

1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread

Himeno Benchmark

The Himeno benchmark is a linear solver for the pressure Poisson equation using a point-Jacobi method. Learn more via the OpenBenchmarking.org test page.

Himeno Benchmark 3.0, Poisson Pressure Solver (MFLOPS, more is better):

- -O3 -march=native: 6878.51 (SE +/- 6.62, N = 3; Min: 6867.78 / Max: 6890.6)
- -O3 -march=native -flto: 7079.88 (SE +/- 3.24, N = 3; Min: 7073.91 / Max: 7085.05)
- -O2: 6305.48 (SE +/- 0.74, N = 3; Min: 6304.2 / Max: 6306.77)

1. (CC) gcc options: -O3 -mavx2

NCNN


NCNN 20201218, Target: CPU - Model: regnety_400m (ms, fewer is better):

- -O3 -march=native: 8.62 (SE +/- 0.03, N = 3; Min: 8.57 / Max: 8.67; runtime MIN: 8.51 / MAX: 12.11)
- -O3 -march=native -flto: 8.91 (SE +/- 0.06, N = 3; Min: 8.82 / Max: 9.02; runtime MIN: 8.72 / MAX: 12.48)
- -O2: 9.61 (SE +/- 0.02, N = 12; Min: 9.54 / Max: 9.79; runtime MIN: 9.44 / MAX: 13.78)

1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: googlenet (ms, fewer is better):

- -O3 -march=native: 10.20 (SE +/- 0.21, N = 3; Min: 9.78 / Max: 10.43; runtime MIN: 9.72 / MAX: 13.93)
- -O3 -march=native -flto: 10.27 (SE +/- 0.13, N = 3; Min: 10.01 / Max: 10.42; runtime MIN: 9.93 / MAX: 13.87)
- -O2: 11.11 (SE +/- 0.08, N = 15; Min: 10.82 / Max: 11.6; runtime MIN: 10.75 / MAX: 16.77)

1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.

WebP Image Encode 1.1, Encode Settings: Quality 100, Lossless (Encode Time in Seconds, fewer is better):

- -O3 -march=native: 12.90 (SE +/- 0.01, N = 3; Min: 12.88 / Max: 12.92)
- -O3 -march=native -flto: 12.71 (SE +/- 0.02, N = 3; Min: 12.68 / Max: 12.74)
- -O2: 13.76 (SE +/- 0.01, N = 3; Min: 13.76 / Max: 13.77)

1. (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31, Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):

- -O3 -march=native: 686530000 (SE +/- 2160717.47, N = 3; Min: 683130000 / Max: 690540000)
- -O3 -march=native -flto: 684356667 (SE +/- 2050604.25, N = 3; Min: 682090000 / Max: 688450000)
- -O2: 635506667 (SE +/- 766753.62, N = 3; Min: 634720000 / Max: 637040000)

1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

TNN

TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: MobileNet v2 (ms, fewer is better):

- -O3 -march=native: 230.02 (SE +/- 0.15, N = 3; Min: 229.82 / Max: 230.3; runtime MIN: 229.3 / MAX: 233.4)
- -O3 -march=native -flto: 247.89 (SE +/- 0.12, N = 3; Min: 247.65 / Max: 248.06; runtime MIN: 247.03 / MAX: 249.92)
- -O2: 243.42 (SE +/- 0.21, N = 3; Min: 243.02 / Max: 243.75; runtime MIN: 241.9 / MAX: 246.46)

1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O2 -rdynamic -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2, Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):

- -O3 -march=native: 5.28726 (SE +/- 0.02936, N = 3; Min: 5.23 / Max: 5.32; runtime MIN: 4.8)
- -O3 -march=native -flto: 5.40080 (SE +/- 0.01659, N = 3; Min: 5.38 / Max: 5.43; runtime MIN: 4.78)
- -O2: 5.01199 (SE +/- 0.03422, N = 3; Min: 4.95 / Max: 5.06; runtime MIN: 4.47)

1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

GraphicsMagick


GraphicsMagick 1.3.33, Operation: Rotate (Iterations Per Minute, more is better):

- -O3 -march=native: 1141
- -O3 -march=native -flto: 1072 (SE +/- 1.53, N = 3; Min: 1069 / Max: 1074)
- -O2: 1066 (SE +/- 0.67, N = 3; Min: 1065 / Max: 1067)

1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test of both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4, Preset: Exhaustive (Seconds, fewer is better):

- -O3 -march=native: 85.42 (SE +/- 0.00, N = 3; Min: 85.41 / Max: 85.42)
- -O3 -march=native -flto: 85.42 (SE +/- 0.01, N = 3; Min: 85.41 / Max: 85.44)
- -O2: 91.38 (SE +/- 0.01, N = 3; Min: 91.35 / Max: 91.4)

1. (CXX) g++ options: -O2 -flto -pthread

TNN


TNN 0.2.3, Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better):

- -O3 -march=native: 227.66 (SE +/- 0.17, N = 3; Min: 227.33 / Max: 227.86; runtime MIN: 226.71 / MAX: 229.36)
- -O3 -march=native -flto: 242.55 (SE +/- 0.12, N = 3; Min: 242.31 / Max: 242.72; runtime MIN: 241.93 / MAX: 243.45)
- -O2: 236.05 (SE +/- 0.09, N = 3; Min: 235.89 / Max: 236.2; runtime MIN: 234.65 / MAX: 236.77)

1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O2 -rdynamic -ldl

ASTC Encoder


ASTC Encoder 2.4, Preset: Thorough (Seconds, fewer is better):

- -O3 -march=native: 11.38 (SE +/- 0.02, N = 3; Min: 11.34 / Max: 11.41)
- -O3 -march=native -flto: 11.40 (SE +/- 0.01, N = 3; Min: 11.37 / Max: 11.41)
- -O2: 12.09 (SE +/- 0.01, N = 3; Min: 12.07 / Max: 12.11)

1. (CXX) g++ options: -O2 -flto -pthread

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907, Text-To-Speech Synthesis (Seconds, fewer is better):

- -O3 -march=native: 21.71 (SE +/- 0.06, N = 4; Min: 21.52 / Max: 21.79)
- -O3 -march=native -flto: 22.60 (SE +/- 0.05, N = 4; Min: 22.45 / Max: 22.68)
- -O2: 21.33 (SE +/- 0.05, N = 4; Min: 21.19 / Max: 21.44)

1. (CC) gcc options: -std=c99 -lpthread -lm

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total perf time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:        16.50 (SE +/- 0.01, N = 3; Min 16.49 / Max 16.52; internal MIN: 16.39)
  -O3 -march=native -flto:  17.42 (SE +/- 0.00, N = 3; Min 17.42 / Max 17.42; internal MIN: 17.27)
  -O2:                      16.69 (SE +/- 0.17, N = 5; Min 16.48 / Max 17.38; internal MIN: 16.38)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
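The Zstd results below sweep compression levels, trading encode speed for ratio. As a minimal illustration of that level/speed/ratio tradeoff using only the Python standard library (zstd itself is a separate tool and not in the stdlib; zlib is used here as a stand-in, and the sample payload is made up):

```python
# Illustrates the compression-level tradeoff the Zstd test sweeps over,
# using zlib from the standard library. Higher levels spend more CPU time
# searching for matches in exchange for smaller output.
import zlib

data = b"FreeBSD disk image stand-in: " * 10_000  # repetitive sample payload

fast = zlib.compress(data, level=1)   # favors speed
small = zlib.compress(data, level=9)  # favors ratio

# On compressible input, a higher level should never produce a larger result.
assert len(small) <= len(fast) < len(data)
print(len(data), len(fast), len(small))
```

The same principle is why the level-19 compression speeds below are roughly an order of magnitude lower than the level-8 speeds.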

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Compression Speed (MB/s, more is better)
  -O3 -march=native:        285.3 (SE +/- 2.26, N = 3; Min 281.2 / Max 289)
  -O3 -march=native -flto:  281.1 (SE +/- 3.12, N = 4; Min 272.6 / Max 287.4)
  -O2:                      296.0 (SE +/- 1.68, N = 3; Min 292.9 / Max 298.7)
  Compiler: (CC) gcc options: -pthread -lz

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, more is better)
  -O3 -march=native:        5546.0 (SE +/- 15.18, N = 3; Min 5524.2 / Max 5575.2)
  -O3 -march=native -flto:  5477.9 (SE +/- 6.81, N = 4; Min 5460 / Max 5493.1)
  -O2:                      5760.9 (SE +/- 5.74, N = 3; Min 5749.5 / Max 5767.5)
  Compiler: (CC) gcc options: -pthread -lz

WebP Image Encode

This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
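The results below report total encode time for the single 6000x4000 input; converting to a throughput figure makes runs comparable across image sizes. A quick check of that arithmetic, using the 5.127 s -O3 -march=native result from the first graph below:

```python
# Converts the WebP encode time for one 6000x4000 image into a
# megapixels-per-second throughput figure.
width, height = 6000, 4000
megapixels = width * height / 1e6          # 24.0 MP per image
encode_seconds = 5.127                     # -O3 -march=native result below

throughput = megapixels / encode_seconds   # MP/s
assert round(throughput, 2) == 4.68
print(f"{throughput:.2f} MP/s")
```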

WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression (Encode Time - Seconds, fewer is better)
  -O3 -march=native:        5.127 (SE +/- 0.014, N = 3; Min 5.11 / Max 5.15)
  -O3 -march=native -flto:  5.103 (SE +/- 0.008, N = 3; Min 5.09 / Max 5.11)
  -O2:                      5.360 (SE +/- 0.005, N = 3; Min 5.35 / Max 5.37)
  Compiler: (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed (MB/s, more is better)
  -O3 -march=native:        4514.8 (SE +/- 8.15, N = 3; Min 4500.4 / Max 4528.6)
  -O3 -march=native -flto:  4503.1 (SE +/- 17.62, N = 3; Min 4468.4 / Max 4525.9)
  -O2:                      4718.1 (SE +/- 5.61, N = 3; Min 4707.1 / Max 4725.5)
  Compiler: (CC) gcc options: -pthread -lz

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.1.0 - Test: Decompression Throughput (Megapixels/sec, more is better)
  -O3 -march=native:        273.10 (SE +/- 0.20, N = 3; Min 272.73 / Max 273.42)
  -O3 -march=native -flto:  272.60 (SE +/- 0.41, N = 3; Min 271.9 / Max 273.3)
  -O2:                      261.03 (SE +/- 0.16, N = 3; Min 260.74 / Max 261.28)
  Compiler: (CC) gcc options: -O3 -rdynamic; per-build extras: -march=native -lm / -march=native -flto -lm / -O2

oneDNN


oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:        11.24 (SE +/- 0.00, N = 3; Min 11.24 / Max 11.24; internal MIN: 11.15)
  -O3 -march=native -flto:  11.23 (SE +/- 0.01, N = 3; Min 11.22 / Max 11.24; internal MIN: 11.14)
  -O2:                      10.75 (SE +/- 0.01, N = 3; Min 10.74 / Max 10.76; internal MIN: 10.65)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

Smallpt

Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
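Smallpt's rendering is unbiased Monte Carlo integration: average enough random samples and the estimate converges to the true value. The same principle in miniature, estimating pi by sampling points in the unit square (an illustration of the technique, not smallpt's actual code):

```python
# Unbiased Monte Carlo estimation: the fraction of random points in the
# unit square that fall inside the quarter circle approaches pi/4.
import random

random.seed(0)  # fixed seed so the run is reproducible
n = 100_000
inside = sum(
    1 for _ in range(n)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)
estimate = 4 * inside / n
assert 3.10 < estimate < 3.18  # within a few standard errors of pi
print(estimate)
```

Just as here, smallpt's "128 Samples" setting controls how many random samples are averaged per pixel, trading runtime for noise.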

Smallpt 1.0 - Global Illumination Renderer; 128 Samples (Seconds, fewer is better)
  -O3 -march=native:        8.405 (SE +/- 0.012, N = 3; Min 8.39 / Max 8.43)
  -O3 -march=native -flto:  8.454 (SE +/- 0.020, N = 3; Min 8.42 / Max 8.49)
  -O2:                      8.771 (SE +/- 0.014, N = 3; Min 8.75 / Max 8.79)
  Compiler: (CXX) g++ options: -fopenmp -O3

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better)
  -O3 -march=native:        4582.3 (SE +/- 11.11, N = 3; Min 4567.7 / Max 4604.1)
  -O3 -march=native -flto:  4579.8 (SE +/- 14.15, N = 3; Min 4560.9 / Max 4607.5)
  -O2:                      4777.3 (SE +/- 4.91, N = 3; Min 4768.9 / Max 4785.9)
  Compiler: (CC) gcc options: -pthread -lz

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  -O3 -march=native:        139.13 (SE +/- 1.58, N = 4; Min 134.44 / Max 141.28)
  -O3 -march=native -flto:  141.83 (SE +/- 1.44, N = 5; Min 136.09 / Max 143.58)
  -O2:                      136.31 (SE +/- 1.53, N = 4; Min 131.75 / Max 138.12)
  Compiler: (CC) gcc options: -O3 -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: squeezenet_ssd (ms, fewer is better)
  -O3 -march=native:        15.53 (SE +/- 0.12, N = 3; Min 15.32 / Max 15.75; per-run MIN: 15.19 / MAX: 20.95)
  -O3 -march=native -flto:  15.92 (SE +/- 0.28, N = 3; Min 15.64 / Max 16.48; per-run MIN: 15.55 / MAX: 21.06)
  -O2:                      16.15 (SE +/- 0.01, N = 15; Min 16.07 / Max 16.3; per-run MIN: 15.95 / MAX: 21.54)
  Compiler: (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, more is better)
  -O3 -march=native:        8.067 (SE +/- 0.063, N = 15; Min 7.6 / Max 8.44)
  -O3 -march=native -flto:  8.328 (SE +/- 0.055, N = 15; Min 7.88 / Max 8.62)
  -O2:                      8.023 (SE +/- 0.106, N = 3; Min 7.83 / Max 8.2)
  Compiler: (CXX) g++ options: -O2 -pthread -lm

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  -O3 -march=native:        164.77 (SE +/- 0.01, N = 3; Min 164.75 / Max 164.79)
  -O3 -march=native -flto:  166.05 (SE +/- 0.31, N = 3; Min 165.69 / Max 166.66)
  -O2:                      160.65 (SE +/- 0.13, N = 3; Min 160.5 / Max 160.9)
  Compiler: (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.
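At the core of an HMM search is computing the likelihood of a sequence under a model. A toy forward algorithm on a two-state HMM gives the flavor of the machinery hmmsearch scales up to Pfam-sized profile models; the states, symbols, and probabilities here are made up for illustration and this is not HMMER's implementation:

```python
# Forward algorithm: sums the probability of an observation sequence over
# all hidden-state paths in O(len(obs) * states^2) time.

def forward(obs, states, start_p, trans_p, emit_p):
    """Total likelihood of an observation sequence under the HMM."""
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for symbol in obs[1:]:
        alpha = {
            s: sum(alpha[p] * trans_p[p][s] for p in states) * emit_p[s][symbol]
            for s in states
        }
    return sum(alpha.values())

states = ("match", "background")
start_p = {"match": 0.5, "background": 0.5}
trans_p = {"match": {"match": 0.8, "background": 0.2},
           "background": {"match": 0.3, "background": 0.7}}
emit_p = {"match": {"A": 0.7, "C": 0.3},
          "background": {"A": 0.4, "C": 0.6}}

likelihood = forward("ACA", states, start_p, trans_p, emit_p)
assert 0.0 < likelihood < 1.0
print(likelihood)
```

The dynamic-programming recurrence avoids enumerating every state path explicitly, which is what makes database-scale searches feasible.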

Timed HMMer Search 3.3.2 - Pfam Database Search (Seconds, fewer is better)
  -O3 -march=native:        100.74 (SE +/- 0.04, N = 3; Min 100.69 / Max 100.81)
  -O3 -march=native -flto:  99.97 (SE +/- 0.08, N = 3; Min 99.85 / Max 100.11)
  -O2:                      103.29 (SE +/- 0.08, N = 3; Min 103.19 / Max 103.44)
  Compiler: (CC) gcc options: -pthread -lhmmer -leasel -lm -lmpi

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

Stockfish 13 - Total Time (Nodes Per Second, more is better)
  -O3 -march=native:        29932441 (SE +/- 279559.22, N = 3; Min 29637169 / Max 30491259)
  -O3 -march=native -flto:  29086394 (SE +/- 94171.94, N = 3; Min 28901173 / Max 29208584)
  -O2:                      29094819 (SE +/- 96950.30, N = 3; Min 28925390 / Max 29261194)
  Compiler: (CXX) g++ options: -lgcov -m64 -lpthread -O3 -fno-exceptions -std=c++17 -pedantic -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -fprofile-use -fno-peel-loops -fno-tracer -flto=jobserver

WebP Image Encode


WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression (Encode Time - Seconds, fewer is better)
  -O3 -march=native:        27.26 (SE +/- 0.05, N = 3; Min 27.18 / Max 27.34)
  -O3 -march=native -flto:  27.07 (SE +/- 0.04, N = 3; Min 26.99 / Max 27.15)
  -O2:                      27.84 (SE +/- 0.02, N = 3; Min 27.81 / Max 27.88)
  Compiler: (CC) gcc options: -fvisibility=hidden -pthread -lm -ljpeg

NCNN


NCNN 20201218 - Target: CPU - Model: resnet18 (ms, fewer is better)
  -O3 -march=native:        11.08 (SE +/- 0.17, N = 3; Min 10.74 / Max 11.26; per-run MIN: 10.66 / MAX: 16.66)
  -O3 -march=native -flto:  11.39 (SE +/- 0.02, N = 3; Min 11.37 / Max 11.42; per-run MIN: 11.27 / MAX: 15.15)
  -O2:                      11.30 (SE +/- 0.06, N = 14; Min 10.94 / Max 11.51; per-run MIN: 10.84 / MAX: 14.99)
  Compiler: (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

Timed MrBayes Analysis

This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.
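MrBayes estimates a posterior distribution over phylogenetic trees via MCMC. The Bayesian updating it performs can be shown exactly on a far simpler conjugate model; the beta-binomial sketch below (with made-up counts) illustrates the prior-to-posterior step, not anything specific to MrBayes:

```python
# Conjugate Bayesian update: a Beta prior over a coin's bias combined
# with binomial observations yields a Beta posterior in closed form.
from math import isclose

# Beta(1, 1) prior (uniform), then observe 7 successes and 3 failures.
alpha, beta = 1.0, 1.0
successes, failures = 7, 3

# Posterior is Beta(alpha + successes, beta + failures).
post_alpha, post_beta = alpha + successes, beta + failures
post_mean = post_alpha / (post_alpha + post_beta)

assert isclose(post_mean, 8 / 12)
print(post_mean)
```

Tree space has no such closed form, which is why MrBayes must resort to MCMC sampling and why the benchmark is so long-running.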

Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis (Seconds, fewer is better)
  -O3 -march=native:        86.70 (SE +/- 0.06, N = 3; Min 86.57 / Max 86.79)
  -O3 -march=native -flto:  84.93 (SE +/- 0.32, N = 3; Min 84.39 / Max 85.49)
  -O2:                      87.30 (SE +/- 0.53, N = 3; Min 86.28 / Max 88.04)
  Compiler: (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mmpx -mabm -O3 -std=c99 -pedantic -lm

oneDNN


oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:        4.86522 (SE +/- 0.02065, N = 3; Min 4.83 / Max 4.9; internal MIN: 3.82)
  -O3 -march=native -flto:  4.74611 (SE +/- 0.02041, N = 3; Min 4.71 / Max 4.78; internal MIN: 3.7)
  -O2:                      4.87601 (SE +/- 0.02350, N = 3; Min 4.83 / Max 4.91; internal MIN: 3.82)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:        1891.71 (SE +/- 1.66, N = 3; Min 1889.56 / Max 1894.97; internal MIN: 1880.74)
  -O3 -march=native -flto:  1877.51 (SE +/- 1.29, N = 3; Min 1875.02 / Max 1879.33; internal MIN: 1866.09)
  -O2:                      1841.63 (SE +/- 0.68, N = 3; Min 1840.35 / Max 1842.68; internal MIN: 1831.74)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

x265

This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K options, measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, more is better)
  -O3 -march=native:        15.81 (SE +/- 0.13, N = 15; Min 14.32 / Max 16.41)
  -O3 -march=native -flto:  15.40 (SE +/- 0.15, N = 6; Min 14.82 / Max 15.89)
  -O2:                      15.64 (SE +/- 0.21, N = 3; Min 15.37 / Max 16.05)
  Compiler: (CXX) g++ options: -O2 -rdynamic -lpthread -lrt -ldl

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2 - WAV To FLAC (Seconds, fewer is better)
  -O3 -march=native:        5.931 (SE +/- 0.004, N = 5; Min 5.92 / Max 5.94)
  -O3 -march=native -flto:  5.936 (SE +/- 0.003, N = 5; Min 5.93 / Max 5.94)
  -O2:                      6.086 (SE +/- 0.003, N = 5; Min 6.07 / Max 6.09)
  Compiler: (CXX) g++ options: -fvisibility=hidden -logg -lm

Zstd Compression


Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, more is better)
  -O3 -march=native:        35.4 (SE +/- 0.44, N = 3; Min 34.7 / Max 36.2)
  -O3 -march=native -flto:  34.8 (SE +/- 0.03, N = 3; Min 34.7 / Max 34.8)
  -O2:                      34.5 (SE +/- 0.15, N = 3; Min 34.3 / Max 34.8)
  Compiler: (CC) gcc options: -pthread -lz

oneDNN


oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:        1887.61 (SE +/- 0.80, N = 3; Min 1886.19 / Max 1888.94; internal MIN: 1877.87)
  -O3 -march=native -flto:  1876.25 (SE +/- 1.27, N = 3; Min 1874.54 / Max 1878.73; internal MIN: 1866.41)
  -O2:                      1842.14 (SE +/- 1.67, N = 3; Min 1840.24 / Max 1845.47; internal MIN: 1831.93)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:        1890.59 (SE +/- 2.13, N = 3; Min 1888.13 / Max 1894.84; internal MIN: 1879.82)
  -O3 -march=native -flto:  1874.70 (SE +/- 1.23, N = 3; Min 1873.15 / Max 1877.12; internal MIN: 1865.22)
  -O2:                      1845.74 (SE +/- 1.34, N = 3; Min 1843.96 / Max 1848.36; internal MIN: 1834.84)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

SVT-VP9


SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  -O3 -march=native:        195.87 (SE +/- 1.48, N = 10; Min 182.72 / Max 198.69)
  -O3 -march=native -flto:  195.07 (SE +/- 1.49, N = 10; Min 181.73 / Max 197.3)
  -O2:                      191.83 (SE +/- 1.51, N = 10; Min 178.35 / Max 194.19)
  Compiler: (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm

PJSIP

PJSIP is a free and open-source multimedia communication library written in C, implementing standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the SIP signaling protocol with a rich multimedia framework and NAT traversal functionality in a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets. This test profile makes use of pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.
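The INVITE test below hammers SIP session setup. A SIP request is plain text: a request line plus headers. The sketch below assembles a minimal INVITE to show roughly what each of those thousands of responses per second is answering; the addresses, branch, and tag values are made-up placeholders, and a real PJSIP exchange carries more headers:

```python
# Builds a minimal SIP INVITE request (RFC 3261 framing: request line,
# headers, blank line). Placeholder addresses/tags, for illustration only.

def build_invite(target: str, call_id: str) -> str:
    lines = [
        f"INVITE sip:{target} SIP/2.0",
        "Via: SIP/2.0/UDP client.invalid;branch=z9hG4bK776asdhds",
        "From: <sip:tester@client.invalid>;tag=1928301774",
        f"To: <sip:{target}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

msg = build_invite("bench@127.0.0.1:5060", "test-call-1")
assert msg.startswith("INVITE sip:bench@127.0.0.1:5060 SIP/2.0\r\n")
print(msg)
```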

PJSIP 2.11 - Method: INVITE (Responses Per Second, more is better)
  -O3 -march=native:        4959 (SE +/- 41.25, N = 3; Min 4904 / Max 5040)
  -O3 -march=native -flto:  5058 (SE +/- 3.18, N = 3; Min 5053 / Max 5064)
  -O2:                      5001 (SE +/- 32.83, N = 3; Min 4938 / Max 5049)
  Compiler: (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2 - Video Input: Summer Nature 4K (FPS, more is better)
  -O3 -march=native:  190.31 (SE +/- 0.09, N = 3; Min 190.17 / Max 190.48; per-run MIN: 174.59 / MAX: 201.24)
  -O2:                186.75 (SE +/- 0.05, N = 3; Min 186.67 / Max 186.84; per-run MIN: 170.98 / MAX: 196.55)
  Compiler: (CC) gcc options: -pthread (-lm for the -O2 build)

SVT-HEVC


SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  -O3 -march=native:        278.72 (SE +/- 0.09, N = 3; Min 278.55 / Max 278.81)
  -O3 -march=native -flto:  278.59 (SE +/- 0.22, N = 3; Min 278.16 / Max 278.81)
  -O2:                      273.60 (SE +/- 0.52, N = 3; Min 272.85 / Max 274.6)
  Compiler: (CC) gcc options: -O3 -fPIE -fPIC -O2 -pie -rdynamic -lpthread -lrt

SVT-VP9


SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  -O3 -march=native:        201.70 (SE +/- 0.28, N = 3; Min 201.15 / Max 202.07)
  -O3 -march=native -flto:  201.10 (SE +/- 0.29, N = 3; Min 200.52 / Max 201.41)
  -O2:                      198.01 (SE +/- 0.06, N = 3; Min 197.9 / Max 198.09)
  Compiler: (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
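The SET test below measures how many write commands per second the server can answer. On the wire, each command is a RESP (REdis Serialization Protocol) frame: an array of bulk strings. A small sketch of that encoding (the key and value here are made up; this builds the frame, it does not talk to a server):

```python
# Encodes a Redis command as a RESP array of bulk strings:
#   *<count>\r\n then, per argument, $<len>\r\n<bytes>\r\n

def encode_resp(*parts: bytes) -> bytes:
    """Encode a command as a RESP array of bulk strings."""
    out = b"*%d\r\n" % len(parts)
    for part in parts:
        out += b"$%d\r\n%s\r\n" % (len(part), part)
    return out

frame = encode_resp(b"SET", b"key:1", b"hello")
assert frame == b"*3\r\n$3\r\nSET\r\n$5\r\nkey:1\r\n$5\r\nhello\r\n"
print(frame)
```

The protocol's simple length-prefixed framing is part of why a single Redis instance can sustain the request rates shown below.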

Redis 6.0.9 - Test: SET (Requests Per Second, more is better)
  -O3 -march=native:        2980192.00 (SE +/- 15075.35, N = 3; Min 2962963 / Max 3010234.75)
  -O3 -march=native -flto:  2990164.92 (SE +/- 3890.24, N = 3; Min 2983293.5 / Max 2996761.25)
  -O2:                      2936296.08 (SE +/- 20903.58, N = 3; Min 2900241.25 / Max 2972651.5)
  Compiler: (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
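The benchmark's workload is sweeping a finite impulse response (FIR) filter of the given length over sample buffers. This pure-Python stand-in shows the core operation, a dot product of the filter taps against a sliding window of samples; liquid-dsp's own C API and SIMD-tuned kernels are what the benchmark actually measures, and the 3-tap filter and sample values here are made up for illustration:

```python
# Naive FIR filtering: each output sample is the dot product of the taps
# with the most recent len(taps) input samples (valid region only).

def fir_filter(taps, samples):
    """Convolve samples with filter taps, emitting only fully-overlapped outputs."""
    n = len(taps)
    return [
        sum(taps[k] * samples[i - k] for k in range(n))
        for i in range(n - 1, len(samples))
    ]

# A small 3-tap filter over a short buffer (the benchmark uses length 57
# over 256-sample buffers across 16 threads).
taps = [1.0, 2.0, 1.0]
out = fir_filter(taps, [3, 6, 9, 12])
assert out == [24.0, 36.0]
print(out)
```

Per output sample this costs len(taps) multiply-accumulates, which is why the throughput figures below are reported in samples per second for a fixed filter length.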

Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
  -O3 -march=native:        722893333 (SE +/- 209549.78, N = 3; Min 722480000 / Max 723160000)
  -O3 -march=native -flto:  722393333 (SE +/- 322714.18, N = 3; Min 721800000 / Max 722910000)
  -O2:                      711343333 (SE +/- 189414.30, N = 3; Min 711120000 / Max 711720000)
  Compiler: (CC) gcc options: -O3 -pthread -lm -lc -lliquid

oneDNN


oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:        3173.47 (SE +/- 2.46, N = 3; Min 3169.24 / Max 3177.75; internal MIN: 3161.04)
  -O3 -march=native -flto:  3152.89 (SE +/- 3.52, N = 3; Min 3146 / Max 3157.58; internal MIN: 3137.49)
  -O2:                      3124.56 (SE +/- 0.76, N = 3; Min 3123.17 / Max 3125.78; internal MIN: 3112.25)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:        3172.19 (SE +/- 1.30, N = 3; Min 3170.04 / Max 3174.52; internal MIN: 3159.8)
  -O3 -march=native -flto:  3148.67 (SE +/- 0.32, N = 3; Min 3148.14 / Max 3149.23; internal MIN: 3137.59)
  -O2:                      3123.64 (SE +/- 5.44, N = 3; Min 3115.28 / Max 3133.86; internal MIN: 3105.42)
  Compiler: (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 6.7 - Input: AUSURF112 (Seconds, fewer is better)
  -O3 -march=native:       2576.97 (SE +/- 21.65, N = 3; Min 2549.22 / Avg 2576.97 / Max 2619.62)
  -O3 -march=native -flto: 2540.19 (SE +/- 24.60, N = 3; Min 2512.24 / Avg 2540.19 / Max 2589.24)
  -O2:                     2538.25 (SE +/- 18.09, N = 3; Min 2518.02 / Avg 2538.25 / Max 2574.34)
  (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

oneDNN


oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       3171.46 (SE +/- 0.26, N = 3; Min 3170.96 / Avg 3171.46 / Max 3171.85; MIN: 3160.11)
  -O3 -march=native -flto: 3154.69 (SE +/- 3.24, N = 3; Min 3148.26 / Avg 3154.69 / Max 3158.66; MIN: 3138.34)
  -O2:                     3123.95 (SE +/- 2.61, N = 3; Min 3120.05 / Avg 3123.95 / Max 3128.9; MIN: 3109.77)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with the OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 2.4 - Preset: Medium (Seconds, fewer is better)
  -O3 -march=native:       5.1820 (SE +/- 0.0013, N = 3; Min 5.18 / Avg 5.18 / Max 5.18)
  -O3 -march=native -flto: 5.1705 (SE +/- 0.0065, N = 3; Min 5.16 / Avg 5.17 / Max 5.18)
  -O2:                     5.2481 (SE +/- 0.0027, N = 3; Min 5.24 / Avg 5.25 / Max 5.25)
  (CXX) g++ options: -O2 -flto -pthread

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.8.2 - Video Input: Summer Nature 1080p (FPS, more is better)
  -O3 -march=native: 717.31 (SE +/- 1.03, N = 3; Min 715.44 / Avg 717.31 / Max 719; MIN: 641.13 / MAX: 782.17)
  -O2:               727.60 (SE +/- 2.55, N = 3; Min 722.62 / Avg 727.6 / Max 731.02; MIN: 643.78 / MAX: 798.32; -O2 build adds -lm)
  (CC) gcc options: -pthread

dav1d 0.8.2 - Video Input: Chimera 1080p (FPS, more is better)
  -O3 -march=native: 763.05 (SE +/- 0.33, N = 3; Min 762.59 / Avg 763.05 / Max 763.69; MIN: 584.4 / MAX: 1127.78)
  -O2:               773.93 (SE +/- 1.36, N = 3; Min 771.21 / Avg 773.93 / Max 775.42; MIN: 589.24 / MAX: 1160.82; -O2 build adds -lm)
  (CC) gcc options: -pthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0 - CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better)
  -O3 -march=native:       432583.96 (SE +/- 1364.82, N = 3; Min 429857.16 / Avg 432583.96 / Max 434055.25)
  -O3 -march=native -flto: 435901.44 (SE +/- 166.46, N = 3; Min 435571.69 / Avg 435901.44 / Max 436105.94)
  -O2:                     430127.50 (SE +/- 1236.61, N = 3; Min 427750.3 / Avg 430127.5 / Max 431907.14)
  (CC) gcc options: -O2 -lrt
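
The gaps between these builds are small in absolute terms; expressing them as a percentage of the -O2 baseline makes them easier to compare across tests. A sketch using the reported CoreMark averages:

```python
# Reported CoreMark scores (iterations/sec, more is better):
flto = 435901.44   # -O3 -march=native -flto
o2   = 430127.50   # -O2

# Relative advantage of the LTO build over the -O2 baseline.
pct = (flto - o2) / o2 * 100
print(f"{pct:.2f}%")  # 1.34%
```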

oneDNN


oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       4.30798 (SE +/- 0.01947, N = 3; Min 4.29 / Avg 4.31 / Max 4.35; MIN: 4.19)
  -O3 -march=native -flto: 4.25176 (SE +/- 0.00501, N = 3; Min 4.24 / Avg 4.25 / Max 4.26; MIN: 4.15)
  -O2:                     4.27077 (SE +/- 0.01250, N = 3; Min 4.26 / Avg 4.27 / Max 4.3; MIN: 4.16)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       12.51 (SE +/- 0.01, N = 3; Min 12.5 / Avg 12.51 / Max 12.53; MIN: 12.43)
  -O3 -march=native -flto: 12.52 (SE +/- 0.00, N = 3; Min 12.52 / Avg 12.52 / Max 12.53; MIN: 12.41)
  -O2:                     12.37 (SE +/- 0.00, N = 3; Min 12.36 / Avg 12.37 / Max 12.37; MIN: 12.28)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: vgg16 (ms, fewer is better)
  -O3 -march=native:       54.50 (SE +/- 0.14, N = 3; Min 54.3 / Avg 54.5 / Max 54.76; MIN: 53.96 / MAX: 58.57)
  -O3 -march=native -flto: 54.13 (SE +/- 0.13, N = 3; Min 53.89 / Avg 54.13 / Max 54.33; MIN: 53.54 / MAX: 59.11)
  -O2:                     54.80 (SE +/- 0.05, N = 15; Min 54.52 / Avg 54.8 / Max 55.05; MIN: 54.15 / MAX: 64)
  (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

oneDNN


oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       1.45788 (SE +/- 0.00602, N = 3; Min 1.45 / Avg 1.46 / Max 1.47; MIN: 1.36)
  -O3 -march=native -flto: 1.47524 (SE +/- 0.00575, N = 3; Min 1.47 / Avg 1.48 / Max 1.49; MIN: 1.37)
  -O2:                     1.46726 (SE +/- 0.01597, N = 3; Min 1.44 / Avg 1.47 / Max 1.49; MIN: 1.37)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       3.17026 (SE +/- 0.00399, N = 3; Min 3.16 / Avg 3.17 / Max 3.18; MIN: 3.1)
  -O3 -march=native -flto: 3.13941 (SE +/- 0.00623, N = 3; Min 3.13 / Avg 3.14 / Max 3.15; MIN: 3.07)
  -O2:                     3.13532 (SE +/- 0.00129, N = 3; Min 3.13 / Avg 3.14 / Max 3.14; MIN: 3.07)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

SQLite Speedtest

This benchmark runs SQLite's speedtest1 program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30 - Timed Time - Size 1,000 (Seconds, fewer is better)
  -O3 -march=native:       44.09 (SE +/- 0.30, N = 3; Min 43.68 / Avg 44.08 / Max 44.67)
  -O3 -march=native -flto: 43.78 (SE +/- 0.13, N = 3; Min 43.63 / Avg 43.78 / Max 44.04)
  -O2:                     43.62 (SE +/- 0.15, N = 3; Min 43.37 / Avg 43.62 / Max 43.89)
  (CC) gcc options: -ldl -lz -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better)
  -O3 -march=native:       33.0 (SE +/- 0.23, N = 3; Min 32.6 / Avg 32.97 / Max 33.4)
  -O3 -march=native -flto: 32.8 (SE +/- 0.12, N = 3; Min 32.6 / Avg 32.83 / Max 33)
  -O2:                     32.7 (SE +/- 0.22, N = 3; Min 32.3 / Avg 32.73 / Max 33)
  (CC) gcc options: -pthread -lz
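
Composite scores like the "Overall Geometric Mean" view option are typically computed by normalizing each test to a baseline (inverting fewer-is-better metrics so larger is always better) and taking the geometric mean of the ratios. A sketch comparing -O3 -march=native against -O2 over two of the results above:

```python
import math

# -O3 -march=native relative to -O2, from the results above:
ratios = [
    33.0 / 32.7,    # Zstd 19 long-mode compression (MB/s, more is better)
    43.62 / 44.09,  # SQLite Speedtest (seconds, fewer is better -> inverted)
]

# Geometric mean of the normalized ratios; > 1.0 would favor -march=native.
geomean = math.prod(ratios) ** (1 / len(ratios))
print(round(geomean, 4))  # ~0.999, essentially a tie
```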

oneDNN


oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       14.29 (SE +/- 0.01, N = 3; Min 14.27 / Avg 14.29 / Max 14.3; MIN: 14.18)
  -O3 -march=native -flto: 14.25 (SE +/- 0.01, N = 3; Min 14.23 / Avg 14.25 / Max 14.27; MIN: 14.14)
  -O2:                     14.17 (SE +/- 0.01, N = 3; Min 14.15 / Avg 14.17 / Max 14.18; MIN: 14.04)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

NCNN


NCNN 20201218 - Target: CPU - Model: alexnet (ms, fewer is better)
  -O3 -march=native:       9.63 (SE +/- 0.01, N = 3; Min 9.62 / Avg 9.63 / Max 9.64; MIN: 9.56 / MAX: 13.14)
  -O3 -march=native -flto: 9.70 (SE +/- 0.02, N = 3; Min 9.67 / Avg 9.7 / Max 9.74; MIN: 9.56 / MAX: 13.19)
  -O2:                     9.63 (SE +/- 0.01, N = 15; Min 9.57 / Avg 9.63 / Max 9.7; MIN: 9.47 / MAX: 14.51)
  (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

PJSIP

PJSIP is a free and open source multimedia communication library written in C implementing standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the SIP signaling protocol with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, from desktops and embedded systems to mobile handsets. This test profile makes use of pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.

PJSIP 2.11 - Method: OPTIONS, Stateless (Responses Per Second, more is better)
  -O3 -march=native:       241439 (SE +/- 1015.58, N = 3; Min 239674 / Avg 241439 / Max 243192)
  -O3 -march=native -flto: 239892 (SE +/- 101.47, N = 3; Min 239735 / Avg 239892.33 / Max 240082)
  -O2:                     239792 (SE +/- 504.43, N = 3; Min 238831 / Avg 239792.33 / Max 240538)
  (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread

oneDNN


oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       0.722430 (SE +/- 0.002639, N = 3; Min 0.72 / Avg 0.72 / Max 0.73; MIN: 0.67)
  -O3 -march=native -flto: 0.720482 (SE +/- 0.001704, N = 3; Min 0.72 / Avg 0.72 / Max 0.72; MIN: 0.67)
  -O2:                     0.717882 (SE +/- 0.001308, N = 3; Min 0.72 / Avg 0.72 / Max 0.72; MIN: 0.66)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2 - Test: Unkeyed Algorithms (MiB/second, more is better)
  -O3 -march=native:       489.76 (SE +/- 0.14, N = 3; Min 489.53 / Avg 489.76 / Max 490.02)
  -O3 -march=native -flto: 488.63 (SE +/- 0.29, N = 3; Min 488.19 / Avg 488.63 / Max 489.18)
  -O2:                     491.64 (SE +/- 0.06, N = 3; Min 491.53 / Avg 491.64 / Max 491.72)
  (CXX) g++ options: -fPIC -pthread -pipe

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

Redis 6.0.9 - Test: GET (Requests Per Second, more is better)
  -O3 -march=native:       4036791.92 (SE +/- 16885.42, N = 3; Min 4003228.25 / Avg 4036791.92 / Max 4056808.25)
  -O3 -march=native -flto: 4060369.08 (SE +/- 23615.46, N = 3; Min 4016064.25 / Avg 4060369.08 / Max 4096694.75)
  -O2:                     4051463.17 (SE +/- 8839.00, N = 3; Min 4038836.75 / Avg 4051463.17 / Max 4068491.5)
  (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
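
Throughput figures this large are sometimes easier to reason about as average time per operation; the conversion is just the reciprocal. A sketch using the LTO build's GET result:

```python
# Redis GET throughput for the -O3 -march=native -flto build:
rps = 4060369.08  # requests per second

# Average wall time per request, in microseconds.
us_per_request = 1e6 / rps
print(f"{us_per_request:.3f} us")  # ~0.246 us per GET on average
```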

oneDNN


oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       4.06617 (SE +/- 0.00741, N = 3; Min 4.05 / Avg 4.07 / Max 4.07; MIN: 3.93)
  -O3 -march=native -flto: 4.04481 (SE +/- 0.00379, N = 3; Min 4.04 / Avg 4.04 / Max 4.05; MIN: 3.91)
  -O2:                     4.04477 (SE +/- 0.00867, N = 3; Min 4.03 / Avg 4.04 / Max 4.06; MIN: 3.93)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       17.06 (SE +/- 0.00, N = 3; Min 17.05 / Avg 17.06 / Max 17.06; MIN: 16.72)
  -O3 -march=native -flto: 17.13 (SE +/- 0.01, N = 3; Min 17.11 / Avg 17.13 / Max 17.14; MIN: 16.73)
  -O2:                     17.06 (SE +/- 0.00, N = 3; Min 17.06 / Avg 17.06 / Max 17.06; MIN: 16.67)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       3.52791 (SE +/- 0.00143, N = 3; Min 3.53 / Avg 3.53 / Max 3.53; MIN: 3.45)
  -O3 -march=native -flto: 3.52381 (SE +/- 0.00147, N = 3; Min 3.52 / Avg 3.52 / Max 3.53; MIN: 3.46)
  -O2:                     3.53315 (SE +/- 0.00099, N = 3; Min 3.53 / Avg 3.53 / Max 3.53; MIN: 3.47)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       0.829637 (SE +/- 0.003135, N = 3; Min 0.82 / Avg 0.83 / Max 0.83; MIN: 0.81)
  -O3 -march=native -flto: 0.831699 (SE +/- 0.003541, N = 3; Min 0.82 / Avg 0.83 / Max 0.84; MIN: 0.81)
  -O2:                     0.829564 (SE +/- 0.003232, N = 3; Min 0.82 / Avg 0.83 / Max 0.83; MIN: 0.81)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, fewer is better)
  -O3 -march=native:       11.08 (SE +/- 0.00, N = 5; Min 11.08 / Avg 11.08 / Max 11.1)
  -O3 -march=native -flto: 11.10 (SE +/- 0.00, N = 5; Min 11.09 / Avg 11.1 / Max 11.11)
  -O2:                     11.08 (SE +/- 0.00, N = 5; Min 11.07 / Avg 11.08 / Max 11.09)
  (CXX) g++ options: -rdynamic

oneDNN


oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       1.32271 (SE +/- 0.00166, N = 3; Min 1.32 / Avg 1.32 / Max 1.33; MIN: 1.26)
  -O3 -march=native -flto: 1.32311 (SE +/- 0.00175, N = 3; Min 1.32 / Avg 1.32 / Max 1.33; MIN: 1.26)
  -O2:                     1.32100 (SE +/- 0.00212, N = 3; Min 1.32 / Avg 1.32 / Max 1.32; MIN: 1.25)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

PJSIP


PJSIP 2.11 - Method: OPTIONS, Stateful (Responses Per Second, more is better)
  -O3 -march=native:       9389 (SE +/- 6.96, N = 3; Min 9378 / Avg 9389.33 / Max 9402)
  -O3 -march=native -flto: 9395 (SE +/- 4.58, N = 3; Min 9389 / Avg 9395 / Max 9404)
  -O2:                     9381 (SE +/- 1.67, N = 3; Min 9379 / Avg 9380.67 / Max 9384)
  (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread

oneDNN


oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       3.53708 (SE +/- 0.00232, N = 3; Min 3.53 / Avg 3.54 / Max 3.54; MIN: 3.41)
  -O3 -march=native -flto: 3.53500 (SE +/- 0.00020, N = 3; Min 3.53 / Avg 3.53 / Max 3.54; MIN: 3.44)
  -O2:                     3.54019 (SE +/- 0.00185, N = 3; Min 3.54 / Avg 3.54 / Max 3.54; MIN: 3.46)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.

Sysbench 1.0.20 - Test: CPU (Events Per Second, more is better)
  -O3 -march=native:       34776.08 (SE +/- 0.97, N = 3; Min 34774.61 / Avg 34776.08 / Max 34777.9)
  -O3 -march=native -flto: 34751.01 (SE +/- 1.11, N = 3; Min 34749.3 / Avg 34751.01 / Max 34753.09)
  -O2:                     34799.70 (SE +/- 0.65, N = 3; Min 34798.51 / Avg 34799.7 / Max 34800.74)
  (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm

oneDNN


oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       16.18 (SE +/- 0.00, N = 3; Min 16.18 / Avg 16.18 / Max 16.19; MIN: 16.09)
  -O3 -march=native -flto: 16.19 (SE +/- 0.00, N = 3; Min 16.18 / Avg 16.19 / Max 16.19; MIN: 16.09)
  -O2:                     16.17 (SE +/- 0.00, N = 3; Min 16.17 / Avg 16.17 / Max 16.18; MIN: 16.09)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better)
  -O3 -march=native:       8.57548 (SE +/- 0.00184, N = 3; Min 8.57 / Avg 8.58 / Max 8.58; MIN: 8.41)
  -O3 -march=native -flto: 8.57248 (SE +/- 0.00390, N = 3; Min 8.57 / Avg 8.57 / Max 8.58; MIN: 8.44)
  -O2:                     8.57623 (SE +/- 0.00352, N = 3; Min 8.57 / Avg 8.58 / Max 8.58; MIN: 8.42)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

GNU GMP GMPbench

GMPbench is a test of the GNU Multiple Precision Arithmetic (GMP) Library. GMPbench is a single-threaded integer benchmark that leverages the GMP library to stress the CPU with widening integer multiplication. Learn more via the OpenBenchmarking.org test page.

GNU GMP GMPbench 6.2.1 - Total Time (GMPbench Score, more is better)
  -O3 -march=native:       6172.9
  -O3 -march=native -flto: 6171.6
  (CC) gcc options: -O3 -march=native -lm
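
The "widening integer multiplication" GMPbench stresses is the multiplication of two N-bit operands into a 2N-bit product, which GMP chains together to build arbitrary-precision arithmetic. Python's built-in arbitrary-precision integers give a quick illustration of the operation itself (GMP implements it in hand-optimized C and assembly):

```python
# Two full 64-bit operands.
a = 2**64 - 1
b = 2**64 - 1

# A widening multiply: 64-bit x 64-bit -> 128-bit product.
product = a * b
print(product.bit_length())  # 128: the product needs twice the operand width
```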

NCNN


NCNN 20201218 - Target: CPU - Model: blazeface (ms, fewer is better)
  -O3 -march=native:       1.19 (SE +/- 0.06, N = 3; Min 1.1 / Avg 1.19 / Max 1.29; MIN: 1.09 / MAX: 2.02)
  -O3 -march=native -flto: 1.69 (SE +/- 0.01, N = 3; Min 1.68 / Avg 1.69 / Max 1.71; MIN: 1.64 / MAX: 2.46)
  -O2:                     1.19 (SE +/- 0.01, N = 15; Min 1.15 / Avg 1.19 / Max 1.35; MIN: 1.14 / MAX: 5.67)
  (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

94 Results Shown

C-Ray
NCNN
dav1d
NCNN
LAME MP3 Encoding
NCNN:
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
  CPU-v3-v3 - mobilenet-v3
GraphicsMagick
NCNN:
  CPU - efficientnet-b0
  CPU - resnet50
GraphicsMagick
Opus Codec Encoding
NCNN
AOBench
GraphicsMagick
Himeno Benchmark
NCNN:
  CPU - regnety_400m
  CPU - googlenet
WebP Image Encode
Liquid-DSP
TNN
oneDNN
GraphicsMagick
ASTC Encoder
TNN
ASTC Encoder
eSpeak-NG Speech Engine
oneDNN
Zstd Compression:
  8, Long Mode - Compression Speed
  8, Long Mode - Decompression Speed
WebP Image Encode
Zstd Compression
libjpeg-turbo tjbench
oneDNN
Smallpt
Zstd Compression
SVT-HEVC
NCNN
LAMMPS Molecular Dynamics Simulator
SVT-VP9
Timed HMMer Search
Stockfish
WebP Image Encode
NCNN
Timed MrBayes Analysis
oneDNN:
  Deconvolution Batch shapes_1d - f32 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
x265
FLAC Audio Encoding
Zstd Compression
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
SVT-VP9
PJSIP
dav1d
SVT-HEVC
SVT-VP9
Redis
Liquid-DSP
oneDNN:
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
Quantum ESPRESSO
oneDNN
ASTC Encoder
dav1d:
  Summer Nature 1080p
  Chimera 1080p
Coremark
oneDNN:
  Deconvolution Batch shapes_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
NCNN
oneDNN:
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
SQLite Speedtest
Zstd Compression
oneDNN
NCNN
PJSIP
oneDNN
Crypto++
Redis
oneDNN:
  IP Shapes 1D - f32 - CPU
  Deconvolution Batch shapes_3d - bf16bf16bf16 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
WavPack Audio Encoding
oneDNN
PJSIP
oneDNN
Sysbench
oneDNN:
  Convolution Batch Shapes Auto - bf16bf16bf16 - CPU
  IP Shapes 1D - bf16bf16bf16 - CPU
GNU GMP GMPbench
NCNN