Compiler Optimization Levels

Intel Core i9-11900K testing with an ASUS ROG MAXIMUS XIII HERO (0707 BIOS) motherboard and an AMD Radeon VII 16GB graphics card on Fedora 34 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2106106-IB-COMPILERO67
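To reproduce both configurations locally, the two flag sets can be supplied as environment overrides around the same benchmark command. A minimal sketch, assuming the Phoronix Test Suite is installed; the flag sets are the ones recorded in this result file's notes:

```shell
# Run 1 of this comparison: everything compiled with -O3 -march=native
CFLAGS="-O3 -march=native" CXXFLAGS="-O3 -march=native" \
  phoronix-test-suite benchmark 2106106-IB-COMPILERO67

# Run 2: everything compiled with -O1
CFLAGS="-O1" CXXFLAGS="-O1" \
  phoronix-test-suite benchmark 2106106-IB-COMPILERO67
```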
Runs in this comparison:
  -O3 -march=native: June 09 2021 (test duration: 8 Hours, 5 Minutes)
  -O1: June 10 2021 (test duration: 9 Hours, 19 Minutes)


System Details
  Processor: Intel Core i9-11900K @ 5.10GHz (8 Cores / 16 Threads)
  Motherboard: ASUS ROG MAXIMUS XIII HERO (0707 BIOS)
  Chipset: Intel Tiger Lake-H
  Memory: 32GB
  Disk: 2000GB Corsair Force MP600 + 257GB Flash Drive
  Graphics: AMD Radeon VII 16GB (1801/1000MHz)
  Audio: Intel Tiger Lake-H HD Audio
  Monitor: ASUS MG28U
  Network: 2 x Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
  OS: Fedora 34
  Kernel: 5.12.9-300.fc34.x86_64 (x86_64)
  Desktop: GNOME Shell 40.1
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 21.1.1 (LLVM 12.0.0)
  Compiler: GCC 11.1.1 20210531
  File-System: btrfs
  Screen Resolution: 3840x2160

System Notes
  - Transparent Huge Pages: madvise
  - -O3 -march=native run: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
  - -O1 run: CXXFLAGS=-O1 CFLAGS=-O1
  - GCC configured with: --build=x86_64-redhat-linux --disable-libunwind-exceptions --enable-__cxa_atexit --enable-bootstrap --enable-cet --enable-checking=release --enable-gnu-indirect-function --enable-gnu-unique-object --enable-initfini-array --enable-languages=c,c++,fortran,objc,obj-c++,ada,go,d,lto --enable-multilib --enable-offload-targets=nvptx-none --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-arch_32=i686 --with-gcc-major-version-only --with-linker-hash-style=gnu --with-tune=generic --without-cuda-driver
  - Disk mount options: compress=zstd:1,relatime,rw,seclabel,space_cache,ssd,subvol=/home,subvolid=256; Block Size: 4096
  - Scaling Governor: intel_pstate powersave; CPU Microcode: 0x3c; Thermald 2.4.4; Python 3.9.5
  - Security: SELinux enabled; itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Enhanced IBRS, IBPB: conditional, RSB filling; srbds: Not affected; tsx_async_abort: Not affected

[-O3 -march=native vs. -O1 comparison chart: per-test percentage deltas across all results. The largest swings favoring -O3 -march=native were C-Ray Total Time (172.3%), NCNN CPU mnasnet (42.8%), LAME WAV To MP3 (40.2%), NCNN mobilenet-v2 (30.5%), mobilenet-v3 (28.1%), and mobilenet (27.7%), tapering to roughly 2% deltas for tests such as Zstd decompression and Mobile Neural Network. A handful of results (e.g. Botan AES-256, CLOMP, ACES DGEMM) instead favored -O1.]

[Flattened overview table: per-test results for the -O3 -march=native and -O1 configurations across the full benchmark set, including C-Ray, NCNN, LAME, GraphicsMagick, Crypto++, Opus, CoreMark, Liquid-DSP, AOBench, Botan, FLAC, eSpeak-NG, ACES DGEMM, Smallpt, Zstd, SQLite, CLOMP, MrBayes, TNN, dav1d, LAMMPS, ASTC Encoder, tjbench, HMMer, SVT-VP9, SVT-HEVC, Mobile Neural Network, Quantum ESPRESSO, PJSIP, PostMark, x265, Redis, oneDNN, Caffe, chia-vdf, Kripke, Sysbench, WavPack, Basis Universal, Stockfish, and GMPbench. The per-test graphs that follow and the OpenBenchmarking.org result file carry the same data in readable form.]

C-Ray

This is a test of C-Ray, a simple raytracer designed to test floating-point CPU performance. The test is multi-threaded (16 threads per core); this run renders a 4K image using 16 rays per pixel for anti-aliasing. Learn more via the OpenBenchmarking.org test page.

C-Ray 1.1, Total Time - 4K, 16 Rays Per Pixel (Seconds, fewer is better):
  -O1:               128.91 (SE +/- 0.06, N = 3)
  -O3 -march=native:  47.34 (SE +/- 0.15, N = 3)
  1. (CC) gcc options: -lm -lpthread -O3

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218, Target: CPU - Model: mnasnet (ms, fewer is better):
  -O1:               3.17 (SE +/- 0.01, N = 3; MIN: 3.14 / MAX: 6.8)
  -O3 -march=native: 2.22 (SE +/- 0.02, N = 3; MIN: 2.17 / MAX: 2.35)
  1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

LAME MP3 Encoding

LAME is an MP3 encoder licensed under the LGPL. This test measures the time required to encode a WAV file to MP3 format. Learn more via the OpenBenchmarking.org test page.

LAME MP3 Encoding 3.100, WAV To MP3 (Seconds, fewer is better):
  -O1:               7.675 (SE +/- 0.092, N = 4)
  -O3 -march=native: 5.473 (SE +/- 0.008, N = 3)
  1. (CC) gcc options: -O3 -ffast-math -funroll-loops -fschedule-insns2 -fbranch-count-reg -fforce-addr -pipe -lm

NCNN


NCNN 20201218, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, fewer is better):
  -O1:               4.19 (SE +/- 0.01, N = 3; MIN: 4.06 / MAX: 7.81)
  -O3 -march=native: 3.21 (SE +/- 0.01, N = 3; MIN: 3.08 / MAX: 4.11)
  1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, fewer is better):
  -O1:               3.19 (SE +/- 0.01, N = 3; MIN: 3.16 / MAX: 4.05)
  -O3 -march=native: 2.49 (SE +/- 0.00, N = 3; MIN: 2.44 / MAX: 6.14)
  1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: mobilenet (ms, fewer is better):
  -O1:               15.02 (SE +/- 0.00, N = 3; MIN: 14.88 / MAX: 18.66)
  -O3 -march=native: 11.76 (SE +/- 0.06, N = 3; MIN: 11.54 / MAX: 15.41)
  1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33, Operation: Enhanced (Iterations Per Minute, more is better):
  -O1:               218 (SE +/- 0.33, N = 3)
  -O3 -march=native: 270
  1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread

NCNN


NCNN 20201218, Target: CPU - Model: efficientnet-b0 (ms, fewer is better):
  -O1:               5.24 (SE +/- 0.01, N = 3; MIN: 5.17 / MAX: 8.84)
  -O3 -march=native: 4.24 (SE +/- 0.01, N = 3; MIN: 4.19 / MAX: 7.9)
  1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2, Test: Keyed Algorithms (MiB/second, more is better):
  -O1:               751.48 (SE +/- 0.51, N = 3)
  -O3 -march=native: 924.21 (SE +/- 0.64, N = 3)
  1. (CXX) g++ options: -fPIC -pthread -pipe

NCNN


NCNN 20201218, Target: CPU - Model: resnet50 (ms, fewer is better):
  -O1:               22.29 (SE +/- 0.03, N = 3; MIN: 22.02 / MAX: 27)
  -O3 -march=native: 18.23 (SE +/- 0.15, N = 3; MIN: 17.79 / MAX: 22.11)
  1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

Opus Codec Encoding

Opus is an open, lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.

Opus Codec Encoding 1.3.1, WAV To Opus Encode (Seconds, fewer is better):
  -O1:               6.828 (SE +/- 0.004, N = 5)
  -O3 -march=native: 5.595 (SE +/- 0.010, N = 5)
  1. (CXX) g++ options: -fvisibility=hidden -logg -lm

GraphicsMagick


GraphicsMagick 1.3.33, Operation: Sharpen (Iterations Per Minute, more is better):
  -O1:               162 (SE +/- 0.58, N = 3)
  -O3 -march=native: 195 (SE +/- 0.58, N = 3)
  1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread

GraphicsMagick 1.3.33, Operation: Resizing (Iterations Per Minute, more is better):
  -O1:               1021 (SE +/- 1.00, N = 3)
  -O3 -march=native: 1222 (SE +/- 2.33, N = 3)
  1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread

Coremark

This is a test of the EEMBC CoreMark processor benchmark. Learn more via the OpenBenchmarking.org test page.

Coremark 1.0, CoreMark Size 666 - Iterations Per Second (Iterations/Sec, more is better):
  -O1:               366951.48 (SE +/- 661.73, N = 3)
  -O3 -march=native: 434724.85 (SE +/- 533.17, N = 3)
  1. (CC) gcc options: -O2 -lrt

GraphicsMagick


GraphicsMagick 1.3.33, Operation: Swirl (Iterations Per Minute, more is better):
  -O1:               592 (SE +/- 1.00, N = 3)
  -O3 -march=native: 689 (SE +/- 2.67, N = 3)
  1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread

Liquid-DSP

LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.

Liquid-DSP 2021.01.31, Threads: 2 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  -O1:               162046667 (SE +/- 601728.99, N = 3)
  -O3 -march=native: 188003333 (SE +/- 66416.20, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 2021.01.31, Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  -O1:               595816667 (SE +/- 736168.76, N = 3)
  -O3 -march=native: 687846667 (SE +/- 689597.31, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Liquid-DSP 2021.01.31, Threads: 4 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  -O1:               316710000 (SE +/- 132035.35, N = 3)
  -O3 -march=native: 363760000 (SE +/- 1410968.93, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

AOBench

AOBench is a lightweight ambient occlusion renderer, written in C. The test profile is using a size of 2048 x 2048. Learn more via the OpenBenchmarking.org test page.

AOBench, Size: 2048 x 2048 - Total Time (Seconds, fewer is better):
  -O1:               24.61 (SE +/- 0.04, N = 3)
  -O3 -march=native: 21.56 (SE +/- 0.01, N = 3)
  1. (CC) gcc options: -lm -O3

NCNN


NCNN 20201218, Target: CPU - Model: regnety_400m (ms, fewer is better):
  -O1:               9.73 (SE +/- 0.05, N = 3; MIN: 9.55 / MAX: 14.41)
  -O3 -march=native: 8.57 (SE +/- 0.02, N = 3; MIN: 8.47 / MAX: 12.35)
  1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: googlenet (ms, fewer is better):
  -O1:               11.40 (SE +/- 0.02, N = 3; MIN: 11.29 / MAX: 14.99)
  -O3 -march=native: 10.09 (SE +/- 0.17, N = 3; MIN: 9.67 / MAX: 13.94)
  1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

Liquid-DSP


Liquid-DSP 2021.01.31, Threads: 1 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  -O1:               88411000 (SE +/- 6806.86, N = 3)
  -O3 -march=native: 99844333 (SE +/- 14836.14, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Botan

Botan is a BSD-licensed cross-platform open-source C++ crypto library "cryptography toolkit" that supports most publicly known cryptographic algorithms. The project's stated goal is to be "the best option for cryptography in C++ by offering the tools necessary to implement a range of practical systems, such as TLS protocol, X.509 certificates, modern AEAD ciphers, PKCS#11 and TPM hardware support, password hashing, and post quantum crypto schemes." Learn more via the OpenBenchmarking.org test page.

Botan 2.17.3, Test: CAST-256 (MiB/s, more is better):
  -O1:               149.44 (SE +/- 1.37, N = 15)
  -O3 -march=native: 168.76 (SE +/- 0.06, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Botan 2.17.3, Test: CAST-256 - Decrypt (MiB/s, more is better):
  -O1:               149.81 (SE +/- 1.14, N = 15)
  -O3 -march=native: 168.85 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

FLAC Audio Encoding

This test times how long it takes to encode a sample WAV file to FLAC format five times. Learn more via the OpenBenchmarking.org test page.

FLAC Audio Encoding 1.3.2, WAV To FLAC (Seconds, fewer is better):
  -O1:               6.590 (SE +/- 0.004, N = 5)
  -O3 -march=native: 5.937 (SE +/- 0.002, N = 5)
  1. (CXX) g++ options: -fvisibility=hidden -logg -lm

Crypto++


Crypto++ 8.2, Test: All Algorithms (MiB/second, more is better):
  -O1:               2114.62 (SE +/- 0.38, N = 3)
  -O3 -march=native: 2346.36 (SE +/- 1.51, N = 3)
  1. (CXX) g++ options: -fPIC -pthread -pipe

eSpeak-NG Speech Engine

This test times how long it takes the eSpeak speech synthesizer to read Project Gutenberg's The Outline of Science and output to a WAV file. This test profile is now tracking the eSpeak-NG version of eSpeak. Learn more via the OpenBenchmarking.org test page.

eSpeak-NG Speech Engine 20200907, Text-To-Speech Synthesis (Seconds, fewer is better):
  -O1:               24.00 (SE +/- 0.07, N = 4)
  -O3 -march=native: 21.77 (SE +/- 0.06, N = 4)
  1. (CC) gcc options: -std=c99 -lpthread -lm

ACES DGEMM

This is a multi-threaded DGEMM benchmark. Learn more via the OpenBenchmarking.org test page.

ACES DGEMM 1.0, Sustained Floating-Point Rate (GFLOP/s, more is better):
  -O1:               3.922224 (SE +/- 0.023378, N = 3)
  -O3 -march=native: 3.604641 (SE +/- 0.018800, N = 3)
  1. (CC) gcc options: -O3 -march=native -fopenmp

Smallpt

Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.

Smallpt 1.0, Global Illumination Renderer; 128 Samples (Seconds, fewer is better):
  -O1:               9.133 (SE +/- 0.002, N = 3)
  -O3 -march=native: 8.401 (SE +/- 0.009, N = 3)
  1. (CXX) g++ options: -fopenmp -O3

NCNN


NCNN 20201218, Target: CPU - Model: blazeface (ms, fewer is better):
  -O1:               1.24 (SE +/- 0.01, N = 3; MIN: 1.21 / MAX: 5.59)
  -O3 -march=native: 1.15 (SE +/- 0.03, N = 3; MIN: 1.08 / MAX: 2)
  1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

Botan


Botan 2.17.3, Test: Twofish (MiB/s, more is better):
  -O1:               430.95 (SE +/- 0.19, N = 3)
  -O3 -march=native: 464.47 (SE +/- 0.31, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Liquid-DSP


Liquid-DSP 2021.01.31, Threads: 16 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better):
  -O1:               672296667 (SE +/- 328295.26, N = 3)
  -O3 -march=native: 722756667 (SE +/- 134824.99, N = 3)
  1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Botan


Botan 2.17.3, Test: KASUMI (MiB/s, more is better):
  -O1:               108.28 (SE +/- 0.03, N = 3)
  -O3 -march=native: 115.82 (SE +/- 0.01, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

GraphicsMagick


GraphicsMagick 1.3.33, Operation: HWB Color Space (Iterations Per Minute, more is better):
  -O1:               1207 (SE +/- 1.33, N = 3)
  -O3 -march=native: 1285 (SE +/- 1.20, N = 3)
  1. (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.

Zstd Compression 1.5.0, Compression Level: 3 - Compression Speed (MB/s, more is better):
  -O1:               2568.0 (SE +/- 8.18, N = 3)
  -O3 -march=native: 2731.5 (SE +/- 14.92, N = 3)
  1. (CC) gcc options: -pthread -lz

SQLite Speedtest

This is a benchmark of SQLite's speedtest1 benchmark program with an increased problem size of 1,000. Learn more via the OpenBenchmarking.org test page.

SQLite Speedtest 3.30, Timed Time - Size 1,000 (Seconds, fewer is better):
  -O1:               49.01 (SE +/- 0.26, N = 3)
  -O3 -march=native: 46.09 (SE +/- 0.15, N = 3)
  1. (CC) gcc options: -ldl -lz -lpthread

CLOMP

CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.

CLOMP 1.2, Static OMP Speedup (Speedup, more is better):
  -O1:               5.1 (SE +/- 0.06, N = 3)
  -O3 -march=native: 4.8 (SE +/- 0.07, N = 3)
  1. (CC) gcc options: -fopenmp -O3 -lm

Timed MrBayes Analysis

This test performs a bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.

Timed MrBayes Analysis 3.2.7, Primate Phylogeny Analysis (Seconds, fewer is better):
  -O1:               88.53 (SE +/- 0.17, N = 3)
  -O3 -march=native: 83.43 (SE +/- 0.09, N = 3)
  1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msha -maes -mavx -mfma -mavx2 -mavx512f -mavx512cd -mavx512vl -mavx512bw -mavx512dq -mavx512ifma -mavx512vbmi -mrdrnd -mbmi -mbmi2 -madx -mmpx -mabm -O3 -std=c99 -pedantic -lm

NCNN


NCNN 20201218, Target: CPU - Model: shufflenet-v2 (ms, fewer is better):
  -O1:               3.45 (SE +/- 0.01, N = 3; MIN: 3.39 / MAX: 7.07)
  -O3 -march=native: 3.26 (SE +/- 0.01, N = 3; MIN: 3.18 / MAX: 6.94)
  1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

NCNN 20201218, Target: CPU - Model: squeezenet_ssd (ms, fewer is better):
  -O1:               16.18 (SE +/- 0.03, N = 3; MIN: 16.02 / MAX: 19.89)
  -O3 -march=native: 15.29 (SE +/- 0.01, N = 3; MIN: 15.14 / MAX: 19)
  1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

Botan


Botan 2.17.3, Test: Twofish - Decrypt (MiB/s, more is better):
  -O1:               427.26 (SE +/- 0.13, N = 3)
  -O3 -march=native: 451.66 (SE +/- 0.62, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Botan 2.17.3, Test: AES-256 (MiB/s, more is better):
  -O1:               8879.33 (SE +/- 0.64, N = 3)
  -O3 -march=native: 8401.85 (SE +/- 5.18, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3, Target: CPU - Model: MobileNet v2 (ms, fewer is better):
  -O1:               243.16 (SE +/- 0.20, N = 3; MIN: 241.63 / MAX: 246.21)
  -O3 -march=native: 230.11 (SE +/- 0.06, N = 3; MIN: 229.52 / MAX: 232.81)
  1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O2 -rdynamic -ldl

Botan


Botan 2.17.3, Test: AES-256 - Decrypt (MiB/s, more is better):
  -O1:               8885.13 (SE +/- 2.06, N = 3)
  -O3 -march=native: 8412.96 (SE +/- 5.34, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 0.9.0, Video Input: Summer Nature 4K (FPS, more is better):
  -O1:               185.95 (SE +/- 0.05, N = 3; MIN: 169.98 / MAX: 195.75)
  -O3 -march=native: 195.94 (SE +/- 0.19, N = 3; MIN: 181.35 / MAX: 208.71)
  1. (CC) gcc options: -pthread -lm

Botan


Botan 2.17.3, Test: KASUMI - Decrypt (MiB/s, more is better):
  -O1:               106.48 (SE +/- 0.06, N = 3)
  -O3 -march=native: 112.03 (SE +/- 0.05, N = 3)
  1. (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20201218 - Target: CPU - Model: yolov4-tiny (ms, fewer is better)
  -O1: 21.26 (SE +/- 0.05, N = 3; MIN 20.97 / MAX 27.08)
  -O3 -march=native: 20.21 (SE +/- 0.03, N = 3; MIN 20.03 / MAX 23.86)
  (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

Crypto++

Crypto++ is a C++ class library of cryptographic algorithms. Learn more via the OpenBenchmarking.org test page.

Crypto++ 8.2 - Test: Integer + Elliptic Curve Public Key Algorithms (MiB/second, more is better)
  -O1: 6862.79 (SE +/- 4.50, N = 3)
  -O3 -march=native: 7194.86 (SE +/- 1.75, N = 3)
  (CXX) g++ options: -fPIC -pthread -pipe

LAMMPS Molecular Dynamics Simulator

LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. Learn more via the OpenBenchmarking.org test page.

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: 20k Atoms (ns/day, more is better)
  -O1: 8.345 (SE +/- 0.035, N = 3)
  -O3 -march=native: 8.737 (SE +/- 0.020, N = 3)
  (CXX) g++ options: -O2 -pthread -lm

ASTC Encoder

ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.

ASTC Encoder 3.0 - Preset: Thorough (Seconds, fewer is better)
  -O1: 9.7734 (SE +/- 0.0228, N = 3)
  -O3 -march=native: 9.3601 (SE +/- 0.0151, N = 3)
  (CXX) g++ options: -O2 -flto -pthread

libjpeg-turbo tjbench

tjbench is a JPEG decompression/compression benchmark that is part of libjpeg-turbo, a JPEG image codec library optimized for SIMD instructions on modern CPU architectures. Learn more via the OpenBenchmarking.org test page.

libjpeg-turbo tjbench 2.1.0 - Test: Decompression Throughput (Megapixels/sec, more is better)
  -O1: 260.26 (SE +/- 0.31, N = 3)
  -O3 -march=native: 271.68 (SE +/- 0.45, N = 3)
  (CC) gcc options: -O3 -rdynamic

Timed HMMer Search

This test searches through the Pfam database of profile hidden Markov models. The search finds the domain structure of the Drosophila Sevenless protein. Learn more via the OpenBenchmarking.org test page.

Timed HMMer Search 3.3.2 - Pfam Database Search (Seconds, fewer is better)
  -O1: 103.74 (SE +/- 0.08, N = 3)
  -O3 -march=native: 99.48 (SE +/- 0.04, N = 3)
  (CC) gcc options: -pthread -lhmmer -leasel -lm -lmpi

LAMMPS Molecular Dynamics Simulator

LAMMPS Molecular Dynamics Simulator 29Oct2020 - Model: Rhodopsin Protein (ns/day, more is better)
  -O1: 8.184 (SE +/- 0.028, N = 3)
  -O3 -march=native: 8.513 (SE +/- 0.026, N = 3)
  (CXX) g++ options: -O2 -pthread -lm

Botan

Botan 2.17.3 - Test: Blowfish - Decrypt (MiB/s, more is better)
  -O1: 532.56 (SE +/- 1.04, N = 3)
  -O3 -march=native: 553.52 (SE +/- 0.26, N = 3)
  (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Crypto++

Crypto++ 8.2 - Test: Unkeyed Algorithms (MiB/second, more is better)
  -O1: 472.95 (SE +/- 0.06, N = 3)
  -O3 -march=native: 491.45 (SE +/- 0.05, N = 3)
  (CXX) g++ options: -fPIC -pthread -pipe

SVT-VP9

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample YUV input video file. Learn more via the OpenBenchmarking.org test page.

SVT-VP9 0.3 - Tuning: VMAF Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  -O1: 191.41 (SE +/- 1.54, N = 9)
  -O3 -march=native: 198.73 (SE +/- 1.49, N = 10)
  (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm

TNN

TNN is an open-source deep learning reasoning framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.

TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1 (ms, fewer is better)
  -O1: 235.96 (SE +/- 0.15, N = 3; MIN 234.76 / MAX 237.84)
  -O3 -march=native: 227.46 (SE +/- 0.04, N = 3; MIN 226.88 / MAX 228.23)
  (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O2 -rdynamic -ldl

SVT-VP9

SVT-VP9 0.3 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  -O1: 160.73 (SE +/- 0.29, N = 3)
  -O3 -march=native: 166.43 (SE +/- 0.27, N = 3)
  (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm

NCNN

NCNN 20201218 - Target: CPU - Model: resnet18 (ms, fewer is better)
  -O1: 11.47 (SE +/- 0.01, N = 3; MIN 11.34 / MAX 15.37)
  -O3 -march=native: 11.08 (SE +/- 0.14, N = 3; MIN 10.69 / MAX 16.91)
  (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

Botan

Botan 2.17.3 - Test: Blowfish (MiB/s, more is better)
  -O1: 533.96 (SE +/- 0.93, N = 3)
  -O3 -march=native: 552.46 (SE +/- 0.20, N = 3)
  (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

ASTC Encoder

ASTC Encoder 3.0 - Preset: Medium (Seconds, fewer is better)
  -O1: 4.3606 (SE +/- 0.0112, N = 3)
  -O3 -march=native: 4.2153 (SE +/- 0.0026, N = 3)
  (CXX) g++ options: -O2 -flto -pthread

Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.

Mobile Neural Network 1.1.3 - Model: MobileNetV2_224 (ms, fewer is better)
  -O1: 1.982 (SE +/- 0.011, N = 3; MIN 1.93 / MAX 7.73)
  -O3 -march=native: 1.916 (SE +/- 0.008, N = 3; MIN 1.87 / MAX 6.22)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -O2 -rdynamic -pthread -ldl

ASTC Encoder

ASTC Encoder 3.0 - Preset: Exhaustive (Seconds, fewer is better)
  -O1: 53.25 (SE +/- 0.02, N = 3)
  -O3 -march=native: 51.49 (SE +/- 0.04, N = 3)
  (CXX) g++ options: -O2 -flto -pthread

SVT-VP9

SVT-VP9 0.3 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p (Frames Per Second, more is better)
  -O1: 198.18 (SE +/- 0.07, N = 3)
  -O3 -march=native: 204.96 (SE +/- 0.17, N = 3)
  (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -O2 -pie -rdynamic -lpthread -lrt -lm

Quantum ESPRESSO

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Learn more via the OpenBenchmarking.org test page.

Quantum ESPRESSO 6.7 - Input: AUSURF112 (Seconds, fewer is better)
  -O1: 2525.86 (SE +/- 27.81, N = 5)
  -O3 -march=native: 2609.02 (SE +/- 5.73, N = 3)
  (F9X) gfortran options: -lopenblas -lFoX_dom -lFoX_sax -lFoX_wxml -lFoX_common -lFoX_utils -lFoX_fsys -lfftw3 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi

Zstd Compression

This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
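The compression-level results below reflect the usual tradeoff: higher levels spend more encode time for (usually) smaller output, while decompression stays cheap regardless of the level used. zstd itself is not in the Python standard library, so this sketch uses zlib purely to illustrate that same level/speed tradeoff; the input data is an invented stand-in, not the FreeBSD image:

```python
import zlib

# zlib stands in for zstd here: compare a fast, low-effort level against
# the slowest, highest-effort level on the same (repetitive) input.
data = b"FreeBSD disk image stand-in " * 4096

fast = zlib.compress(data, level=1)   # cheap, typically larger output
slow = zlib.compress(data, level=9)   # expensive, typically smaller output

# Decompression recovers the same bytes no matter which level produced them.
assert zlib.decompress(fast) == data
assert zlib.decompress(slow) == data
assert len(slow) <= len(fast)
```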

Zstd Compression 1.5.0 - Compression Level: 3 - Decompression Speed (MB/s, more is better)
  -O1: 4847.5 (SE +/- 8.75, N = 3)
  -O3 -march=native: 4997.8 (SE +/- 19.31, N = 3)
  (CC) gcc options: -pthread -lz

SVT-HEVC

This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-HEVC CPU-based multi-threaded video encoder for the HEVC / H.265 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.

SVT-HEVC 1.5.0 - Tuning: 1 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  -O1: 9.20 (SE +/- 0.01, N = 3)
  -O3 -march=native: 9.48 (SE +/- 0.01, N = 3)
  (CC) gcc options: -fPIE -fPIC -O2 -O3 -pie -rdynamic -lpthread -lrt

PJSIP

PJSIP is a free and open source multimedia communication library written in the C language, implementing standards-based protocols such as SIP, SDP, RTP, STUN, TURN, and ICE. It combines the SIP signaling protocol with a rich multimedia framework and NAT traversal functionality into a high-level API that is portable and suitable for almost any type of system, ranging from desktops and embedded systems to mobile handsets. This test profile makes use of pjsip-perf with both the client and server on the same system. More details on the PJSIP benchmark at https://www.pjsip.org/high-performance-sip.htm Learn more via the OpenBenchmarking.org test page.
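The stateless test below measures how many SIP OPTIONS requests per second can be answered. For orientation, this is roughly what a minimal OPTIONS request looks like on the wire; the addresses, tag, and branch values here are invented for the sketch (pjsip-perf generates its own):

```python
# Build a minimal SIP OPTIONS request as plain text. SIP headers are
# CRLF-terminated and the message ends with an empty line.
CRLF = "\r\n"
options = CRLF.join([
    "OPTIONS sip:server@127.0.0.1:5060 SIP/2.0",
    "Via: SIP/2.0/UDP 127.0.0.1:5062;branch=z9hG4bK-demo-1",
    "From: <sip:client@127.0.0.1>;tag=demo",
    "To: <sip:server@127.0.0.1>",
    "Call-ID: demo-call-id-1",
    "CSeq: 1 OPTIONS",
    "Max-Forwards: 70",
    "Content-Length: 0",
    "", "",
])
assert options.startswith("OPTIONS sip:") and options.endswith(CRLF * 2)
```

"Stateless" in the result name means the server answers each such request without creating a transaction, which is why its throughput is far higher than the stateful and INVITE tests later in this file.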

PJSIP 2.11 - Method: OPTIONS, Stateless (Responses Per Second, more is better)
  -O1: 247106 (SE +/- 520.47, N = 3)
  -O3 -march=native: 254610 (SE +/- 711.03, N = 3)
  (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread

Zstd Compression

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Decompression Speed (MB/s, more is better)
  -O1: 5385.7 (SE +/- 9.52, N = 3)
  -O3 -march=native: 5542.9 (SE +/- 6.10, N = 15)
  (CC) gcc options: -pthread -lz

Mobile Neural Network

Mobile Neural Network 1.1.3 - Model: SqueezeNetV1.0 (ms, fewer is better)
  -O1: 3.848 (SE +/- 0.019, N = 3; MIN 3.75 / MAX 8.08)
  -O3 -march=native: 3.748 (SE +/- 0.024, N = 3; MIN 3.64 / MAX 10.5)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -O2 -rdynamic -pthread -ldl

SVT-HEVC

SVT-HEVC 1.5.0 - Tuning: 10 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  -O1: 271.99 (SE +/- 0.19, N = 3)
  -O3 -march=native: 279.12 (SE +/- 0.60, N = 3)
  (CC) gcc options: -fPIE -fPIC -O2 -O3 -pie -rdynamic -lpthread -lrt

PostMark

This is a test of NetApp's PostMark benchmark designed to simulate small-file testing similar to the tasks endured by web and mail servers. This test profile will set PostMark to perform 25,000 transactions with 500 files simultaneously with the file sizes ranging between 5 and 512 kilobytes. Learn more via the OpenBenchmarking.org test page.
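The workload described above is easy to picture as a loop of small-file reads and appends over a pool of files. A toy Python stand-in for that transaction loop, with far smaller counts than PostMark's 25,000 transactions over 500 files (the file names and payload sizes here are invented for illustration):

```python
import os
import random
import tempfile

def run_transactions(n_files: int = 20, n_txn: int = 200) -> int:
    """Create a pool of small files, then run read/append transactions."""
    rng = random.Random(0)
    done = 0
    with tempfile.TemporaryDirectory() as d:
        paths = [os.path.join(d, f"f{i}") for i in range(n_files)]
        for p in paths:                       # initial pool of small files
            with open(p, "wb") as f:
                f.write(rng.randbytes(rng.randint(5, 512)))
        for _ in range(n_txn):                # each transaction reads or appends
            p = rng.choice(paths)
            if rng.random() < 0.5:
                with open(p, "rb") as f:
                    f.read()
            else:
                with open(p, "ab") as f:
                    f.write(rng.randbytes(64))
            done += 1
    return done

print(run_transactions())
```

PostMark reports the completed-transaction rate (TPS), so the metric is essentially `n_txn` divided by elapsed time.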

PostMark 1.51 - Disk Transaction Performance (TPS, more is better)
  -O1: 9259
  -O3 -march=native: 9496
  (SE +/- 118.67, N = 3; reported for one configuration only)
  (CC) gcc options: -O3

Zstd Compression

Zstd Compression 1.5.0 - Compression Level: 3, Long Mode - Decompression Speed (MB/s, more is better)
  -O1: 5215.3 (SE +/- 8.30, N = 3)
  -O3 -march=native: 5346.0 (SE +/- 2.50, N = 15)
  (CC) gcc options: -pthread -lz

SVT-HEVC

SVT-HEVC 1.5.0 - Tuning: 7 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  -O1: 137.23 (SE +/- 0.28, N = 3)
  -O3 -march=native: 140.40 (SE +/- 0.11, N = 3)
  (CC) gcc options: -fPIE -fPIC -O2 -O3 -pie -rdynamic -lpthread -lrt

Zstd Compression

Zstd Compression 1.5.0 - Compression Level: 19 - Decompression Speed (MB/s, more is better)
  -O1: 4406.4 (SE +/- 6.02, N = 4)
  -O3 -march=native: 4506.5 (SE +/- 18.10, N = 3)
  (CC) gcc options: -pthread -lz

Zstd Compression 1.5.0 - Compression Level: 8 - Decompression Speed (MB/s, more is better)
  -O1: 5075.8 (SE +/- 13.17, N = 3)
  -O3 -march=native: 5189.9 (SE +/- 15.26, N = 3)
  (CC) gcc options: -pthread -lz

Mobile Neural Network

Mobile Neural Network 1.1.3 - Model: mobilenet-v1-1.0 (ms, fewer is better)
  -O1: 1.921 (SE +/- 0.004, N = 3; MIN 1.89 / MAX 9.19)
  -O3 -march=native: 1.883 (SE +/- 0.001, N = 3; MIN 1.85 / MAX 7.81)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -O2 -rdynamic -pthread -ldl

x265

This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K input options, measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

x265 3.4 - Video Input: Bosphorus 4K (Frames Per Second, more is better)
  -O1: 15.72 (SE +/- 0.17, N = 4)
  -O3 -march=native: 16.02 (SE +/- 0.12, N = 3)
  (CXX) g++ options: -O2 -rdynamic -lpthread -lrt -ldl

Mobile Neural Network

Mobile Neural Network 1.1.3 - Model: inception-v3 (ms, fewer is better)
  -O1: 22.94 (SE +/- 0.01, N = 3; MIN 22.65 / MAX 29.53)
  -O3 -march=native: 22.51 (SE +/- 0.02, N = 3; MIN 22.19 / MAX 27.64)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -O2 -rdynamic -pthread -ldl

Zstd Compression

Zstd Compression 1.5.0 - Compression Level: 8 - Compression Speed (MB/s, more is better)
  -O1: 189.2 (SE +/- 0.57, N = 3)
  -O3 -march=native: 192.6 (SE +/- 0.90, N = 3)
  (CC) gcc options: -pthread -lz

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.
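The GET/SET numbers below are driven over Redis's wire protocol, RESP, in which every command is sent as an array of bulk strings. A small sketch of that framing (the key and value are made up for illustration):

```python
def encode_resp(*parts: bytes) -> bytes:
    """Encode a Redis command as a RESP array of bulk strings."""
    out = b"*%d\r\n" % len(parts)          # array header: element count
    for p in parts:
        out += b"$%d\r\n%s\r\n" % (len(p), p)  # bulk string: length, then bytes
    return out

print(encode_resp(b"SET", b"key:1", b"hello"))
print(encode_resp(b"GET", b"key:1"))
```

A benchmark client pipelines many such frames over one connection, which is how multi-million requests-per-second figures like those below are reached.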

Redis 6.0.9 - Test: GET (Requests Per Second, more is better)
  -O1: 3982525.83 (SE +/- 33158.80, N = 3)
  -O3 -march=native: 4049394.67 (SE +/- 18099.88, N = 3)
  (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

Zstd Compression

Zstd Compression 1.5.0 - Compression Level: 8, Long Mode - Compression Speed (MB/s, more is better)
  -O1: 281.5 (SE +/- 2.78, N = 3)
  -O3 -march=native: 285.9 (SE +/- 2.25, N = 15)
  (CC) gcc options: -pthread -lz

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  -O1: 11.03 (SE +/- 0.01, N = 3; MIN 10.93)
  -O3 -march=native: 11.20 (SE +/- 0.00, N = 3; MIN 11.11)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.

GraphicsMagick 1.3.33 - Operation: Rotate (Iterations Per Minute, more is better)
  -O1: 1078 (SE +/- 1.20, N = 3)
  -O3 -march=native: 1094 (SE +/- 2.03, N = 3)
  (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread

Mobile Neural Network

Mobile Neural Network 1.1.3 - Model: resnet-v2-50 (ms, fewer is better)
  -O1: 19.51 (SE +/- 0.02, N = 3; MIN 19.33 / MAX 23.75)
  -O3 -march=native: 19.22 (SE +/- 0.02, N = 3; MIN 19.06 / MAX 24.92)
  (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -O2 -rdynamic -pthread -ldl

PJSIP

PJSIP 2.11 - Method: INVITE (Responses Per Second, more is better)
  -O1: 4993 (SE +/- 45.51, N = 3)
  -O3 -march=native: 5060 (SE +/- 15.24, N = 3)
  (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread

Caffe

This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

Caffe 2020-02-13 - Model: GoogleNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, fewer is better)
  -O1: 84729 (SE +/- 10.17, N = 3)
  -O3 -march=native: 83625 (SE +/- 43.97, N = 3)
  (CXX) g++ options: -fPIC -O2 -rdynamic -lboost_system -lboost_thread -lboost_filesystem -lboost_chrono -lboost_date_time -lboost_atomic -lglog -lgflags -lprotobuf -lpthread -lhdf5_cpp -lhdf5 -lhdf5_hl_cpp -lhdf5_hl -llmdb -lopenblas

GraphicsMagick

GraphicsMagick 1.3.33 - Operation: Noise-Gaussian (Iterations Per Minute, more is better)
  -O1: 306
  -O3 -march=native: 310
  (SE +/- 0.88, N = 3; reported for one configuration only)
  (CC) gcc options: -fopenmp -pthread -ljpeg -lz -lm -lpthread

oneDNN

oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better)
  -O1: 1854.40 (SE +/- 4.14, N = 3; MIN 1837.76)
  -O3 -march=native: 1876.42 (SE +/- 1.46, N = 3; MIN 1865.18)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

Chia Blockchain VDF

Chia is a blockchain and smart transaction platform based on proofs of space and time rather than the proofs of work used by other cryptocurrencies. This test profile benchmarks CPU performance for the Chia VDF, the Chia Verifiable Delay Function (Proof of Time), using the Chia VDF benchmark. Learn more via the OpenBenchmarking.org test page.
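The "Square" tests below time the inner squaring step of the VDF. A VDF evaluation is inherently sequential repeated squaring, y = x^(2^T); Chia's real VDF squares in a class group of unknown order, so this modular-arithmetic version (with an arbitrary, invented modulus) is only a toy stand-in showing the shape of the computation:

```python
def vdf_eval(x: int, T: int, n: int) -> int:
    """Evaluate y = x^(2^T) mod n by T sequential squarings."""
    y = x % n
    for _ in range(T):
        y = y * y % n   # one squaring per step; steps cannot be parallelized
    return y

# With a known modulus the result can be cross-checked via fast modular
# exponentiation; with an unknown group order no such shortcut exists,
# which is what makes the function a verifiable *delay*.
assert vdf_eval(3, 1000, 2**61 - 1) == pow(3, 2**1000, 2**61 - 1)
```

The benchmark's IPS figure is essentially how many of these squaring iterations the CPU sustains per second.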

Chia Blockchain VDF 1.0.1 - Test: Square Assembly Optimized (IPS, more is better)
  -O1: 247933 (SE +/- 1471.21, N = 3)
  -O3 -march=native: 250633 (SE +/- 1105.04, N = 3)
  (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

oneDNN

oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better)
  -O1: 3133.28 (SE +/- 2.80, N = 3; MIN 3120.48)
  -O3 -march=native: 3165.60 (SE +/- 1.32, N = 3; MIN 3154.25)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

NCNN

NCNN 20201218 - Target: CPU - Model: vgg16 (ms, fewer is better)
  -O1: 54.91 (SE +/- 0.09, N = 3; MIN 54.36 / MAX 58.94)
  -O3 -march=native: 54.36 (SE +/- 0.11, N = 3; MIN 53.85 / MAX 59.24)
  (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

Zstd Compression

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Decompression Speed (MB/s, more is better)
  -O1: 4506.0 (SE +/- 3.19, N = 3)
  -O3 -march=native: 4540.6 (SE +/- 15.31, N = 3)
  (CC) gcc options: -pthread -lz

oneDNN

oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better)
  -O1: 14.17 (SE +/- 0.02, N = 3; MIN 14.04)
  -O3 -march=native: 14.28 (SE +/- 0.01, N = 3; MIN 14.18)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

Kripke

Kripke is a simple, scalable, 3D Sn deterministic particle transport code. Its primary purpose is to research how data layout, programming paradigms and architectures affect the implementation and performance of Sn transport. Kripke is developed by LLNL. Learn more via the OpenBenchmarking.org test page.

Kripke 1.2.4 (Throughput FoM, more is better)
  -O1: 33790753 (SE +/- 73388.54, N = 3)
  -O3 -march=native: 33544357 (SE +/- 88809.55, N = 3)
  (CXX) g++ options: -O2 -fopenmp

Botan

Botan 2.17.3 - Test: ChaCha20Poly1305 (MiB/s, more is better)
  -O1: 1019.91 (SE +/- 1.88, N = 3)
  -O3 -march=native: 1012.73 (SE +/- 0.46, N = 3)
  (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

Botan 2.17.3 - Test: ChaCha20Poly1305 - Decrypt (MiB/s, more is better)
  -O1: 1004.65 (SE +/- 1.73, N = 3)
  -O3 -march=native: 1010.79 (SE +/- 0.23, N = 3)
  (CXX) g++ options: -fstack-protector -m64 -pthread -lbotan-2 -ldl -lrt

PJSIP

PJSIP 2.11 - Method: OPTIONS, Stateful (Responses Per Second, more is better)
  -O1: 9333 (SE +/- 4.41, N = 3)
  -O3 -march=native: 9375 (SE +/- 7.69, N = 3)
  (CC) gcc options: -lstdc++ -lssl -lcrypto -lm -lrt -lpthread

Chia Blockchain VDF

Chia Blockchain VDF 1.0.1 - Test: Square Plain C++ (IPS, more is better)
  -O1: 209233 (SE +/- 120.19, N = 3)
  -O3 -march=native: 208400 (SE +/- 57.74, N = 3)
  (CXX) g++ options: -flto -no-pie -lgmpxx -lgmp -lboost_system -pthread

Sysbench

This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
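Sysbench's CPU sub-test is a prime-number search: it checks candidates up to a configurable bound (the --cpu-max-prime option) by trial division, and its events-per-second figure counts completed units of that work. A minimal Python stand-in for the inner loop, with a far smaller bound than sysbench's default (note that 2 is skipped since the scan starts at 3, as sysbench's own loop does):

```python
import math

def count_primes(limit: int) -> int:
    """Count primes in [3, limit] by trial division up to sqrt(c)."""
    count = 0
    for c in range(3, limit + 1):
        is_prime = True
        for d in range(2, math.isqrt(c) + 1):
            if c % d == 0:
                is_prime = False
                break
        if is_prime:
            count += 1
    return count

print(count_primes(100))
```

The deliberately naive trial division is the point: it keeps the workload purely compute-bound, which is why the results above barely move between -O1 and -O3 -march=native.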

Sysbench 1.0.20 - Test: CPU (Events Per Second, more is better)
  -O1: 34882.14 (SE +/- 6.87, N = 3)
  -O3 -march=native: 34770.14 (SE +/- 2.38, N = 3)
  (CC) gcc options: -pthread -O2 -funroll-loops -rdynamic -ldl -laio -lm

WavPack Audio Encoding

This test times how long it takes to encode a sample WAV file to WavPack format with very high quality settings. Learn more via the OpenBenchmarking.org test page.

WavPack Audio Encoding 5.3 - WAV To WavPack (Seconds, fewer is better)
  -O1: 11.13 (SE +/- 0.01, N = 5)
  -O3 -march=native: 11.10 (SE +/- 0.00, N = 5)
  (CXX) g++ options: -rdynamic

Zstd Compression

Zstd Compression 1.5.0 - Compression Level: 19, Long Mode - Compression Speed (MB/s, more is better)
  -O1: 32.9 (SE +/- 0.15, N = 3)
  -O3 -march=native: 32.8 (SE +/- 0.19, N = 3)
  (CC) gcc options: -pthread -lz

oneDNN

oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better)
  -O1: 4.04828 (SE +/- 0.00076, N = 3; MIN 3.91)
  -O3 -march=native: 4.03781 (SE +/- 0.00473, N = 3; MIN 3.92)
  (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

Redis

Redis is an open-source in-memory data structure store, used as a database, cache, and message broker. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Redis 6.0.9 - Test: SET (Requests Per Second, More Is Better)
  -O1:               2962660.83  (SE +/- 19439.73, N = 3)
  -O3 -march=native: 2956462.00  (SE +/- 33577.98, N = 3)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3

NCNN

NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - NCNN 20201218 - Target: CPU - Model: alexnet (ms, Fewer Is Better)
  -O1:               9.62  (SE +/- 0.01, N = 3, MIN: 9.5 / MAX: 13.21)
  -O3 -march=native: 9.64  (SE +/- 0.02, N = 3, MIN: 9.53 / MAX: 13.24)
1. (CXX) g++ options: -O2 -rdynamic -lgomp -lpthread

oneDNN

OpenBenchmarking.org - oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  -O1:               4.97288  (SE +/- 0.01654, N = 3, MIN: 3.81)
  -O3 -march=native: 4.98281  (SE +/- 0.01117, N = 3, MIN: 3.81)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

Basis Universal

Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Basis Universal 1.13 - Settings: ETC1S (Seconds, Fewer Is Better)
  -O1:               20.85  (SE +/- 0.03, N = 3)
  -O3 -march=native: 20.81  (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O2 -rdynamic -lm -lpthread

oneDNN

OpenBenchmarking.org - oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  -O1:               4.28224  (SE +/- 0.00335, N = 3, MIN: 4.17)
  -O3 -march=native: 4.28984  (SE +/- 0.00621, N = 3, MIN: 4.17)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

Caffe

This is a benchmark of the Caffe deep learning framework, currently supporting the AlexNet and GoogLeNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Caffe 2020-02-13 - Model: AlexNet - Acceleration: CPU - Iterations: 100 (Milli-Seconds, Fewer Is Better)
  -O1:               36622  (SE +/- 14.01, N = 3)
  -O3 -march=native: 36558  (SE +/- 51.83, N = 3)
1. (CXX) g++ options: -fPIC -O2 -rdynamic -lboost_system -lboost_thread -lboost_filesystem -lboost_chrono -lboost_date_time -lboost_atomic -lglog -lgflags -lprotobuf -lpthread -lhdf5_cpp -lhdf5 -lhdf5_hl_cpp -lhdf5_hl -llmdb -lopenblas

Basis Universal

OpenBenchmarking.org - Basis Universal 1.13 - Settings: UASTC Level 0 (Seconds, Fewer Is Better)
  -O1:               6.114  (SE +/- 0.005, N = 3)
  -O3 -march=native: 6.106  (SE +/- 0.002, N = 3)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O2 -rdynamic -lm -lpthread

OpenBenchmarking.org - Basis Universal 1.13 - Settings: UASTC Level 2 (Seconds, Fewer Is Better)
  -O1:               29.11  (SE +/- 0.08, N = 3)
  -O3 -march=native: 29.14  (SE +/- 0.08, N = 3)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O2 -rdynamic -lm -lpthread

OpenBenchmarking.org - Basis Universal 1.13 - Settings: UASTC Level 3 (Seconds, Fewer Is Better)
  -O1:               54.56  (SE +/- 0.02, N = 3)
  -O3 -march=native: 54.59  (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O2 -rdynamic -lm -lpthread

Stockfish

This is a test of Stockfish, an advanced open-source C++11 chess engine benchmark that can scale up to 512 CPU threads. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - Stockfish 13 - Total Time (Nodes Per Second, More Is Better)
  -O1:               29448017  (SE +/- 371064.83, N = 3)
  -O3 -march=native: 29443112  (SE +/- 193823.90, N = 3)
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -mavx2 -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -fprofile-use -fno-peel-loops -fno-tracer -flto=jobserver

oneDNN

OpenBenchmarking.org - oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
  -O1:               3.52499  (SE +/- 0.00163, N = 3, MIN: 3.45)
  -O3 -march=native: 3.52485  (SE +/- 0.00042, N = 3, MIN: 3.46)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -O2 -pie -lpthread -ldl

x265

This is a simple test of the x265 encoder run on the CPU, with 1080p and 4K options for measuring H.265 video encode performance. Learn more via the OpenBenchmarking.org test page.

OpenBenchmarking.org - x265 3.4 - Video Input: Bosphorus 1080p (Frames Per Second, More Is Better)
  -O1:               67.85  (SE +/- 0.60, N = 3)
  -O3 -march=native: 67.85  (SE +/- 0.32, N = 3)
1. (CXX) g++ options: -O2 -rdynamic -lpthread -lrt -ldl

Zstd Compression

OpenBenchmarking.org - Zstd Compression 1.5.0 - Compression Level: 19 - Compression Speed (MB/s, More Is Better)
  -O1:               35.4  (SE +/- 0.43, N = 4)
  -O3 -march=native: 35.4  (SE +/- 0.48, N = 3)
1. (CC) gcc options: -pthread -lz

GNU GMP GMPbench

GMPbench is a single-threaded integer benchmark of the GNU Multiple Precision Arithmetic (GMP) Library that leverages GMP to stress the CPU with widening integer multiplication. Learn more via the OpenBenchmarking.org test page.
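The "widening" aspect means an n-bit by n-bit product needs up to 2n bits. Python's built-in arbitrary-precision integers (which, like GMP's mpz type, use limb-based bignum arithmetic) can illustrate the idea; this is a conceptual sketch, not GMP itself:

```python
# Widening multiplication: an n-bit by n-bit product needs up to 2n bits.
# Python ints are arbitrary precision, so the exact product is kept,
# the same requirement GMP's mpz_mul satisfies for C programs.
a = (1 << 1024) - 1      # largest 1024-bit integer
b = (1 << 1024) - 1
p = a * b                # exact product; no overflow, no truncation

print(a.bit_length())    # -> 1024
print(p.bit_length())    # -> 2048
```

GMPbench exercises exactly this kind of full-width multiply at scale, which is why it is sensitive to the code the compiler generates for the carry-propagating inner loops.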

OpenBenchmarking.org - GNU GMP GMPbench 6.2.1 - Total Time (GMPbench Score, More Is Better)
  -O3 -march=native: 6171.8
1. (CC) gcc options: -O3 -march=native -lm

Zstd Compression

OpenBenchmarking.org - Zstd Compression 1.5.0 - Compression Level: 3, Long Mode - Compression Speed (MB/s, More Is Better)
  -O1:               1542.8  (SE +/- 12.97, N = 3)
  -O3 -march=native: 1451.0  (SE +/- 22.75, N = 15)
1. (CC) gcc options: -pthread -lz
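This level-3 long-mode result shows the largest spread in the comparison, with -O1 ahead by roughly 6%. As a rough check (a simple two-standard-error heuristic, not the Phoronix Test Suite's own methodology), the gap comfortably exceeds the combined run-to-run noise:

```python
import math

# Zstd 1.5.0, level 3 long mode, compression speed in MB/s (from above)
o1_mean, o1_se = 1542.8, 12.97   # -O1, N = 3
o3_mean, o3_se = 1451.0, 22.75   # -O3 -march=native, N = 15

gap = o1_mean - o3_mean                  # 91.8 MB/s in favour of -O1
pct = 100.0 * gap / o3_mean              # ~6.3%
se = math.sqrt(o1_se**2 + o3_se**2)      # SE of the difference, ~26.2

print(f"-O1 leads by {gap:.1f} MB/s ({pct:.1f}%), combined SE {se:.1f}")
```

Since 91.8 MB/s is more than twice the ~26.2 MB/s combined standard error, this difference is unlikely to be measurement noise, unlike most of the sub-1% spreads elsewhere in this file.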

118 Results Shown

C-Ray
NCNN
LAME MP3 Encoding
NCNN:
  CPU-v2-v2 - mobilenet-v2
  CPU-v3-v3 - mobilenet-v3
  CPU - mobilenet
GraphicsMagick
NCNN
Crypto++
NCNN
Opus Codec Encoding
GraphicsMagick:
  Sharpen
  Resizing
Coremark
GraphicsMagick
Liquid-DSP:
  2 - 256 - 57
  8 - 256 - 57
  4 - 256 - 57
AOBench
NCNN:
  CPU - regnety_400m
  CPU - googlenet
Liquid-DSP
Botan:
  CAST-256
  CAST-256 - Decrypt
FLAC Audio Encoding
Crypto++
eSpeak-NG Speech Engine
ACES DGEMM
Smallpt
NCNN
Botan
Liquid-DSP
Botan
GraphicsMagick
Zstd Compression
SQLite Speedtest
CLOMP
Timed MrBayes Analysis
NCNN:
  CPU - shufflenet-v2
  CPU - squeezenet_ssd
Botan:
  Twofish - Decrypt
  AES-256
TNN
Botan
dav1d
Botan
NCNN
Crypto++
LAMMPS Molecular Dynamics Simulator
ASTC Encoder
libjpeg-turbo tjbench
Timed HMMer Search
LAMMPS Molecular Dynamics Simulator
Botan
Crypto++
SVT-VP9
TNN
SVT-VP9
NCNN
Botan
ASTC Encoder
Mobile Neural Network
ASTC Encoder
SVT-VP9
Quantum ESPRESSO
Zstd Compression
SVT-HEVC
PJSIP
Zstd Compression
Mobile Neural Network
SVT-HEVC
PostMark
Zstd Compression
SVT-HEVC
Zstd Compression:
  19 - Decompression Speed
  8 - Decompression Speed
Mobile Neural Network
x265
Mobile Neural Network
Zstd Compression
Redis
Zstd Compression
oneDNN
GraphicsMagick
Mobile Neural Network
PJSIP
Caffe
GraphicsMagick
oneDNN
Chia Blockchain VDF
oneDNN
NCNN
Zstd Compression
oneDNN
Kripke
Botan:
  ChaCha20Poly1305
  ChaCha20Poly1305 - Decrypt
PJSIP
Chia Blockchain VDF
Sysbench
WavPack Audio Encoding
Zstd Compression
oneDNN
Redis
NCNN
oneDNN
Basis Universal
oneDNN
Caffe
Basis Universal:
  UASTC Level 0
  UASTC Level 2
  UASTC Level 3
Stockfish
oneDNN
x265
Zstd Compression
GNU GMP GMPbench
Zstd Compression
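With 118 individual results, a single summary figure is commonly computed as a geometric mean of normalized ratios, the aggregation the result viewer's overall-mean options apply. A minimal sketch with hypothetical ratios (illustrative values, not taken from this file; "fewer is better" results must be inverted before normalizing so that higher is uniformly better):

```python
import math

# Sketch: summarizing many benchmarks with a geometric mean of
# normalized ratios (hypothetical values, not taken from this file).
ratios = [1.003, 0.997, 1.063, 1.002]   # e.g. -O1 speed relative to -O3
gmean = math.prod(ratios) ** (1 / len(ratios))
print(f"overall ratio: {gmean:.3f}")
```

The geometric mean is preferred over the arithmetic mean for ratios because it is symmetric: a test where one build is 2x faster and another where it is 2x slower cancel out exactly.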