AMD Ryzen 9 5950X 16-Core testing with an ASUS ROG CROSSHAIR VIII HERO (WI-FI) (3204 BIOS) and AMD NAVY_FLOUNDER 12GB on Ubuntu 20.10 via the Phoronix Test Suite.
GCC 10.2
Processor: AMD Ryzen 9 5950X 16-Core @ 3.40GHz (16 Cores / 32 Threads), Motherboard: ASUS ROG CROSSHAIR VIII HERO (WI-FI) (3204 BIOS), Chipset: AMD Starship/Matisse, Memory: 32GB, Disk: 2000GB Corsair Force MP600 + 2000GB, Graphics: AMD NAVY_FLOUNDER 12GB (2855/1000MHz), Audio: AMD Device ab28, Monitor: ASUS MG28U, Network: Realtek RTL8125 2.5GbE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 20.10, Kernel: 5.11.6-051106-generic (x86_64), Desktop: GNOME Shell 3.38.2, Display Server: X Server 1.20.9, OpenGL: 4.6 Mesa 21.1.0-devel (git-684f97d 2021-03-12 groovy-oibaf-ppa) (LLVM 11.0.1), Vulkan: 1.2.168, Compiler: GCC 10.2.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madvise
Environment Notes: CXXFLAGS="-O3 -march=native" CFLAGS="-O3 -march=native"
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa201009
Python Notes: Python 3.8.6
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Sysbench
This is a benchmark of Sysbench with the built-in CPU and memory sub-tests. Sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. Learn more via the OpenBenchmarking.org test page.
Sysbench 1.0.20 - Test: CPU. Events Per Second, More Is Better. GCC 10.2: 91743.72 (SE +/- 115.96, N = 3). 1. (CC) gcc options: -pthread -O2 -funroll-loops -O3 -march=native -rdynamic -ldl -laio -lm
dav1d 0.8.2 - Video Input: Summer Nature 1080p. FPS, More Is Better. GCC 10.2: 971.79 (SE +/- 1.38, N = 3; MIN: 732.02 / MAX: 1055.82). 1. (CC) gcc options: -O3 -march=native -pthread -lm
AOM AV1 2.1-rc - Encoder Mode: Speed 4 Two-Pass. Frames Per Second, More Is Better. GCC 10.2: 9.20 (SE +/- 0.02, N = 3). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
AOM AV1 2.1-rc - Encoder Mode: Speed 6 Realtime. Frames Per Second, More Is Better. GCC 10.2: 35.13 (SE +/- 0.16, N = 3). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
AOM AV1 2.1-rc - Encoder Mode: Speed 6 Two-Pass. Frames Per Second, More Is Better. GCC 10.2: 29.43 (SE +/- 0.26, N = 3). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
AOM AV1 2.1-rc - Encoder Mode: Speed 8 Realtime. Frames Per Second, More Is Better. GCC 10.2: 121.13 (SE +/- 0.75, N = 3). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread
SVT-AV1
This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-AV1 0.8 - Encoder Mode: Enc Mode 4 - Input: 1080p. Frames Per Second, More Is Better. GCC 10.2: 6.137 (SE +/- 0.014, N = 3). 1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie
SVT-AV1 0.8 - Encoder Mode: Enc Mode 8 - Input: 1080p. Frames Per Second, More Is Better. GCC 10.2: 51.77 (SE +/- 0.24, N = 3). 1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie
SVT-VP9
This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-VP9 0.1 - Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p. Frames Per Second, More Is Better. GCC 10.2: 235.04 (SE +/- 2.40, N = 12). 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
SVT-VP9 0.1 - Tuning: Visual Quality Optimized - Input: Bosphorus 1080p. Frames Per Second, More Is Better. GCC 10.2: 228.96 (SE +/- 0.68, N = 3). 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
x264
This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.
x264 2019-12-17 - H.264 Video Encoding. Frames Per Second, More Is Better. GCC 10.2: 208.93 (SE +/- 1.66, N = 9). 1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -march=native -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize
x265 3.4 - Video Input: Bosphorus 1080p. Frames Per Second, More Is Better. GCC 10.2: 89.80 (SE +/- 0.19, N = 3). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lpthread -lrt -ldl -lnuma
simdjson
This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
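For a sense of what the throughput figures below measure, here is a minimal sketch of parsing a document with simdjson's DOM API (the API generation shipped with 0.8.2). It assumes the amalgamated simdjson.h/simdjson.cpp pair and exceptions enabled; "twitter.json" and the fields accessed are placeholders, not the benchmark's own corpora (Kostya, LargeRandom, PartialTweets, DistinctUserID).

    // Minimal simdjson DOM-API sketch (assumes the amalgamated simdjson.h/simdjson.cpp
    // and exceptions enabled). "twitter.json" is a placeholder input file.
    #include <iostream>
    #include "simdjson.h"

    int main() {
        simdjson::dom::parser parser;
        // load() reads the file and parses it into a reusable DOM.
        simdjson::dom::element doc = parser.load("twitter.json");
        // Field access returns simdjson results that convert to native types.
        uint64_t count = doc["search_metadata"]["count"];
        std::cout << count << " results in this search payload\n";
        return 0;
    }

Built with the same spirit of flags as the results below (e.g. g++ -O3 -march=native -pthread), simdjson still selects its SIMD kernel at runtime based on the host CPU.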
simdjson 0.8.2 - Throughput Test: Kostya. GB/s, More Is Better. GCC 10.2: 3.72 (SE +/- 0.03, N = 3). 1. (CXX) g++ options: -O3 -march=native -pthread
simdjson 0.8.2 - Throughput Test: LargeRandom. GB/s, More Is Better. GCC 10.2: 1.22 (SE +/- 0.01, N = 3). 1. (CXX) g++ options: -O3 -march=native -pthread
simdjson 0.8.2 - Throughput Test: PartialTweets. GB/s, More Is Better. GCC 10.2: 5.64 (SE +/- 0.05, N = 3). 1. (CXX) g++ options: -O3 -march=native -pthread
simdjson 0.8.2 - Throughput Test: DistinctUserID. GB/s, More Is Better. GCC 10.2: 5.73 (SE +/- 0.02, N = 3). 1. (CXX) g++ options: -O3 -march=native -pthread
ONNX Runtime
ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.
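For context on what an inference per minute involves, here is a minimal, hypothetical sketch of opening one of these models through ONNX Runtime's C++ API on the CPU and inspecting its first input; the model path and thread count are placeholders, and the test profile drives the models through its own harness rather than this code.

    // Hypothetical ONNX Runtime C++ sketch: create a CPU session and inspect input 0.
    #include <onnxruntime_cxx_api.h>
    #include <iostream>

    int main() {
        Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "pts-sketch");
        Ort::SessionOptions opts;
        opts.SetIntraOpNumThreads(16);                                 // placeholder thread count
        Ort::Session session(env, "super-resolution-10.onnx", opts);  // placeholder model path

        std::cout << "inputs: " << session.GetInputCount()
                  << ", outputs: " << session.GetOutputCount() << "\n";
        // Shape of the first input tensor (e.g. NCHW for the vision models).
        Ort::TypeInfo info = session.GetInputTypeInfo(0);
        auto tensor_info = info.GetTensorTypeAndShapeInfo();
        for (int64_t dim : tensor_info.GetShape())
            std::cout << dim << " ";
        std::cout << "\n";
        return 0;
    }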
ONNX Runtime 1.6 - Model: yolov4 - Device: OpenMP CPU. Inferences Per Minute, More Is Better. GCC 10.2: 433 (SE +/- 1.96, N = 3). 1. (CXX) g++ options: -O3 -march=native -fopenmp -ffunction-sections -fdata-sections -ldl -lrt
ONNX Runtime 1.6 - Model: bertsquad-10 - Device: OpenMP CPU. Inferences Per Minute, More Is Better. GCC 10.2: 614 (SE +/- 6.71, N = 3). 1. (CXX) g++ options: -O3 -march=native -fopenmp -ffunction-sections -fdata-sections -ldl -lrt
ONNX Runtime 1.6 - Model: fcn-resnet101-11 - Device: OpenMP CPU. Inferences Per Minute, More Is Better. GCC 10.2: 99 (SE +/- 0.17, N = 3). 1. (CXX) g++ options: -O3 -march=native -fopenmp -ffunction-sections -fdata-sections -ldl -lrt
ONNX Runtime 1.6 - Model: shufflenet-v2-10 - Device: OpenMP CPU. Inferences Per Minute, More Is Better. GCC 10.2: 15049 (SE +/- 134.84, N = 3). 1. (CXX) g++ options: -O3 -march=native -fopenmp -ffunction-sections -fdata-sections -ldl -lrt
ONNX Runtime 1.6 - Model: super-resolution-10 - Device: OpenMP CPU. Inferences Per Minute, More Is Better. GCC 10.2: 6721 (SE +/- 215.50, N = 12). 1. (CXX) g++ options: -O3 -march=native -fopenmp -ffunction-sections -fdata-sections -ldl -lrt
GraphicsMagick
This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
GraphicsMagick 1.3.33 - Operation: Swirl. Iterations Per Minute, More Is Better. GCC 10.2: 1166 (SE +/- 3.67, N = 3). 1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick 1.3.33 - Operation: Rotate. Iterations Per Minute, More Is Better. GCC 10.2: 1056 (SE +/- 3.51, N = 3). 1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick 1.3.33 - Operation: Sharpen. Iterations Per Minute, More Is Better. GCC 10.2: 375 (SE +/- 1.00, N = 3). 1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick 1.3.33 - Operation: Enhanced. Iterations Per Minute, More Is Better. GCC 10.2: 439 (SE +/- 0.33, N = 3). 1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick 1.3.33 - Operation: Resizing. Iterations Per Minute, More Is Better. GCC 10.2: 2165 (SE +/- 1.45, N = 3). 1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick 1.3.33 - Operation: Noise-Gaussian. Iterations Per Minute, More Is Better. GCC 10.2: 454 (SE +/- 0.67, N = 3). 1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
GraphicsMagick 1.3.33 - Operation: HWB Color Space. Iterations Per Minute, More Is Better. GCC 10.2: 1115 (SE +/- 1.33, N = 3). 1. (CC) gcc options: -fopenmp -O3 -march=native -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
Zstd Compression
This test measures the time needed to compress/decompress a sample file (a FreeBSD disk image - FreeBSD-12.2-RELEASE-amd64-memstick.img) using Zstd compression with options for different compression levels / settings. Learn more via the OpenBenchmarking.org test page.
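The compression levels and "long mode" in the results below correspond to libzstd parameters. Here is a minimal sketch of both paths; the buffer contents are placeholders and this is not the test profile's harness.

    // Minimal libzstd sketch: one-shot compression at level 8, plus level 19 with
    // long-distance matching ("long mode") via the advanced API. Input is a placeholder.
    #include <zstd.h>
    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
        std::string input(1 << 20, 'x');                         // placeholder payload
        std::vector<char> out(ZSTD_compressBound(input.size()));

        // Simple API, as in the "Compression Level: 8" runs.
        size_t n8 = ZSTD_compress(out.data(), out.size(),
                                  input.data(), input.size(), 8);
        if (ZSTD_isError(n8)) { std::puts(ZSTD_getErrorName(n8)); return 1; }

        // Advanced API: level 19 with long-distance matching enabled.
        ZSTD_CCtx* cctx = ZSTD_createCCtx();
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, 19);
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_enableLongDistanceMatching, 1);
        size_t n19 = ZSTD_compress2(cctx, out.data(), out.size(),
                                    input.data(), input.size());
        ZSTD_freeCCtx(cctx);
        if (ZSTD_isError(n19)) { std::puts(ZSTD_getErrorName(n19)); return 1; }

        std::printf("level 8: %zu bytes, level 19 long: %zu bytes\n", n8, n19);
        return 0;
    }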
Zstd Compression 1.4.9 - Compression Level: 8 - Compression Speed. MB/s, More Is Better. GCC 10.2: 1057.4 (SE +/- 3.93, N = 3). 1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
Zstd Compression 1.4.9 - Compression Level: 19 - Decompression Speed. MB/s, More Is Better. GCC 10.2: 4251.7 (SE +/- 6.53, N = 3). 1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
Zstd Compression 1.4.9 - Compression Level: 3, Long Mode - Compression Speed. MB/s, More Is Better. GCC 10.2: 1425.9 (SE +/- 2.43, N = 3). 1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
Zstd Compression 1.4.9 - Compression Level: 3, Long Mode - Decompression Speed. MB/s, More Is Better. GCC 10.2: 4737.1 (SE +/- 46.74, N = 3). 1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
Zstd Compression 1.4.9 - Compression Level: 8, Long Mode - Compression Speed. MB/s, More Is Better. GCC 10.2: 1122.6 (SE +/- 2.15, N = 3). 1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
Zstd Compression 1.4.9 - Compression Level: 8, Long Mode - Decompression Speed. MB/s, More Is Better. GCC 10.2: 4886.2 (SE +/- 29.99, N = 3). 1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
Zstd Compression 1.4.9 - Compression Level: 19, Long Mode - Compression Speed. MB/s, More Is Better. GCC 10.2: 36.6 (SE +/- 0.03, N = 3). 1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
Zstd Compression 1.4.9 - Compression Level: 19, Long Mode - Decompression Speed. MB/s, More Is Better. GCC 10.2: 4350.9 (SE +/- 72.38, N = 3). 1. (CC) gcc options: -O3 -march=native -pthread -lz -llzma
QuantLib
QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk-management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.
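The score below comes from QuantLib's own built-in benchmark. As a flavour of the library itself, here is a minimal, hypothetical snippet that builds a flat yield curve and queries a discount factor; the date, rate, and calendar are arbitrary examples unrelated to the benchmark's internals.

    // Hypothetical QuantLib sketch: flat 2% curve, 5-year discount factor.
    #include <ql/quantlib.hpp>
    #include <iostream>

    int main() {
        using namespace QuantLib;
        Calendar calendar = TARGET();
        Date today(14, March, 2021);
        Settings::instance().evaluationDate() = today;

        // Flat continuously-compounded 2% curve with an Actual/365 day counter.
        Handle<YieldTermStructure> curve(
            ext::make_shared<FlatForward>(today, 0.02, Actual365Fixed()));

        Date maturity = calendar.advance(today, 5, Years);
        std::cout << "5Y discount factor: " << curve->discount(maturity) << std::endl;
        return 0;
    }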
QuantLib 1.21. MFLOPS, More Is Better. GCC 10.2: 3196.9 (SE +/- 33.41, N = 5). 1. (CXX) g++ options: -O3 -march=native -rdynamic
JPEG XL
The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance. Learn more via the OpenBenchmarking.org test page.
JPEG XL 0.3.3 - Input: PNG - Encode Speed: 5. MP/s, More Is Better. GCC 10.2: 74.12 (SE +/- 0.03, N = 3). 1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl
JPEG XL 0.3.3 - Input: PNG - Encode Speed: 7. MP/s, More Is Better. GCC 10.2: 11.20 (SE +/- 0.03, N = 3). 1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl
JPEG XL 0.3.3 - Input: PNG - Encode Speed: 8. MP/s, More Is Better. GCC 10.2: 1.14 (SE +/- 0.00, N = 3). 1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl
JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 5. MP/s, More Is Better. GCC 10.2: 87.35 (SE +/- 0.14, N = 3). 1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl
JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 7. MP/s, More Is Better. GCC 10.2: 87.07 (SE +/- 0.19, N = 3). 1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl
JPEG XL 0.3.3 - Input: JPEG - Encode Speed: 8. MP/s, More Is Better. GCC 10.2: 38.13 (SE +/- 0.02, N = 3). 1. (CXX) g++ options: -O3 -march=native -funwind-tables -O2 -pthread -fPIE -pie -ldl
JPEG XL Decoding
The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for testing JPEG XL decode performance to a PNG output file; the pts/jpexl test covers encode performance. Learn more via the OpenBenchmarking.org test page.
JPEG XL Decoding 0.3.3 - CPU Threads: 1. MP/s, More Is Better. GCC 10.2: 56.53 (SE +/- 0.05, N = 3).
Etcpak 0.7 - Configuration: ETC1. Mpx/s, More Is Better. GCC 10.2: 386.56 (SE +/- 0.37, N = 3). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
Etcpak 0.7 - Configuration: ETC2. Mpx/s, More Is Better. GCC 10.2: 245.04 (SE +/- 1.65, N = 3). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
Redis 6.0.9 - Test: SADD. Requests Per Second, More Is Better. GCC 10.2: 3041527.37 (SE +/- 39730.96, N = 15). 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native
Redis 6.0.9 - Test: LPUSH. Requests Per Second, More Is Better. GCC 10.2: 2222217.52 (SE +/- 23396.73, N = 15). 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native
Redis 6.0.9 - Test: GET. Requests Per Second, More Is Better. GCC 10.2: 3470419.90 (SE +/- 36718.95, N = 15). 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native
Redis 6.0.9 - Test: SET. Requests Per Second, More Is Better. GCC 10.2: 2640316.17 (SE +/- 26145.63, N = 15). 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3 -march=native
Liquid-DSP
LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
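The configurations below sweep thread count with a fixed buffer and filter length. As a rough illustration of the kind of DSP primitive being timed, here is a small, hypothetical sketch that pushes samples through a liquid-dsp FIR filter; the filter taps and input are placeholders, and this is not the test profile's actual kernel.

    // Hypothetical liquid-dsp sketch: run a 256-sample buffer through a 57-tap FIR filter.
    // (Under C++, liquid.h maps liquid_float_complex to std::complex<float>.)
    #include <liquid/liquid.h>
    #include <cstdio>

    int main() {
        // Placeholder taps: a simple 57-point moving average.
        float h[57];
        for (unsigned int i = 0; i < 57; i++) h[i] = 1.0f / 57.0f;
        firfilt_crcf filter = firfilt_crcf_create(h, 57);

        const unsigned int buffer_len = 256;
        liquid_float_complex y(0.0f, 0.0f);
        for (unsigned int i = 0; i < buffer_len; i++) {
            liquid_float_complex x(i % 2 ? 1.0f : -1.0f, 0.0f);  // placeholder input
            firfilt_crcf_push(filter, x);       // shift the sample into the filter
            firfilt_crcf_execute(filter, &y);   // compute one output sample
        }
        std::printf("last output: %f + %fi\n", y.real(), y.imag());

        firfilt_crcf_destroy(filter);
        return 0;
    }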
Liquid-DSP 2021.01.31 - Threads: 1 - Buffer Length: 256 - Filter Length: 57. samples/s, More Is Better. GCC 10.2: 81844000 (SE +/- 828458.69, N = 5). 1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid
Liquid-DSP 2021.01.31 - Threads: 16 - Buffer Length: 256 - Filter Length: 57. samples/s, More Is Better. GCC 10.2: 1111200000 (SE +/- 5768882.04, N = 3). 1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid
Liquid-DSP 2021.01.31 - Threads: 32 - Buffer Length: 256 - Filter Length: 57. samples/s, More Is Better. GCC 10.2: 1164966667 (SE +/- 497772.82, N = 3). 1. (CC) gcc options: -O3 -march=native -pthread -lm -lc -lliquid
Google SynthMark
SynthMark is a cross-platform tool for benchmarking CPU performance under a variety of real-time audio workloads. It uses a polyphonic synthesizer model to provide standardized tests for latency, jitter and computational throughput. Learn more via the OpenBenchmarking.org test page.
Google SynthMark 20201109 - Test: VoiceMark_100. Voices, More Is Better. GCC 10.2: 966.30 (SE +/- 1.26, N = 3). 1. (CXX) g++ options: -lm -lpthread -std=c++11 -Ofast
WebP Image Encode 1.1 - Encode Settings: Quality 100. Encode Time - Seconds, Fewer Is Better. GCC 10.2: 1.652 (SE +/- 0.018, N = 4). 1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless. Encode Time - Seconds, Fewer Is Better. GCC 10.2: 13.99 (SE +/- 0.11, N = 3). 1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff
WebP Image Encode 1.1 - Encode Settings: Quality 100, Highest Compression. Encode Time - Seconds, Fewer Is Better. GCC 10.2: 5.242 (SE +/- 0.018, N = 3). 1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff
WebP Image Encode 1.1 - Encode Settings: Quality 100, Lossless, Highest Compression. Encode Time - Seconds, Fewer Is Better. GCC 10.2: 28.81 (SE +/- 0.08, N = 3). 1. (CC) gcc options: -fvisibility=hidden -O3 -march=native -pthread -lm -ljpeg -lpng16 -ltiff
oneDNN
This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
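The harnesses below are driven by benchdnn. As a minimal illustration of the library's programming model (engine, stream, memory, primitive), here is a hypothetical sketch that runs a single f32 ReLU eltwise primitive on the CPU engine using the v2.x descriptor API; the shape and values are placeholders.

    // Hypothetical oneDNN 2.x sketch: one f32 ReLU eltwise primitive on the CPU engine.
    #include "dnnl.hpp"
    #include <cstdio>

    int main() {
        using namespace dnnl;
        engine eng(engine::kind::cpu, 0);
        stream strm(eng);

        // 1x1024 f32 tensor in plain "nc" layout (placeholder shape).
        memory::desc md({1, 1024}, memory::data_type::f32, memory::format_tag::nc);
        memory src(md, eng), dst(md, eng);

        // Fill the source buffer with a ramp so the ReLU has something to clip.
        float* src_data = static_cast<float*>(src.get_data_handle());
        for (int i = 0; i < 1024; ++i) src_data[i] = static_cast<float>(i - 512);

        // Describe, instantiate, and execute the forward-inference ReLU.
        eltwise_forward::desc relu_d(prop_kind::forward_inference,
                                     algorithm::eltwise_relu, md, 0.0f);
        eltwise_forward::primitive_desc relu_pd(relu_d, eng);
        eltwise_forward(relu_pd).execute(strm, {{DNNL_ARG_SRC, src},
                                                {DNNL_ARG_DST, dst}});
        strm.wait();

        float* dst_data = static_cast<float*>(dst.get_data_handle());
        std::printf("dst[0] = %f, dst[1023] = %f\n", dst_data[0], dst_data[1023]);
        return 0;
    }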
oneDNN 2.1.2 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU. ms, Fewer Is Better. GCC 10.2: 3.95979 (SE +/- 0.00506, N = 3; MIN: 3.76). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU. ms, Fewer Is Better. GCC 10.2: 9.25967 (SE +/- 0.01340, N = 3; MIN: 9.1). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU. ms, Fewer Is Better. GCC 10.2: 17.29 (SE +/- 0.09, N = 3; MIN: 16.58). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU. ms, Fewer Is Better. GCC 10.2: 4.46777 (SE +/- 0.30276, N = 15; MIN: 2.86). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU. ms, Fewer Is Better. GCC 10.2: 3.55467 (SE +/- 0.00753, N = 3; MIN: 3.46). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU. ms, Fewer Is Better. GCC 10.2: 2757.52 (SE +/- 2.01, N = 3; MIN: 2719.35). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU. ms, Fewer Is Better. GCC 10.2: 1773.67 (SE +/- 5.00, N = 3; MIN: 1750.26). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 2.1.2 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU. ms, Fewer Is Better. GCC 10.2: 0.638664 (SE +/- 0.000722, N = 3; MIN: 0.61). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
Mobile Neural Network
MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
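The timings below are per-inference latencies for each model. A minimal, hypothetical sketch of MNN's C++ inference flow (interpreter, session, tensors) follows; the .mnn file name and thread count are placeholders, and the test profile uses its own harness rather than this code.

    // Hypothetical MNN C++ sketch: load a converted .mnn model and run one inference.
    #include <MNN/Interpreter.hpp>
    #include <MNN/Tensor.hpp>
    #include <memory>
    #include <cstdio>

    int main() {
        std::shared_ptr<MNN::Interpreter> net(
            MNN::Interpreter::createFromFile("mobilenet-v1-1.0.mnn"));  // placeholder path
        if (!net) return 1;

        MNN::ScheduleConfig config;
        config.numThread = 16;                         // placeholder thread count
        MNN::Session* session = net->createSession(config);

        MNN::Tensor* input = net->getSessionInput(session, nullptr);
        (void)input;  // a real client would copy preprocessed image data into `input` here
        net->runSession(session);

        MNN::Tensor* output = net->getSessionOutput(session, nullptr);
        std::printf("output elements: %d\n", output->elementSize());
        return 0;
    }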
Mobile Neural Network 1.1.3 - Model: SqueezeNetV1.0. ms, Fewer Is Better. GCC 10.2: 5.081 (SE +/- 0.010, N = 3; MIN: 4.92 / MAX: 14.74). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Mobile Neural Network 1.1.3 - Model: resnet-v2-50. ms, Fewer Is Better. GCC 10.2: 25.07 (SE +/- 0.02, N = 3; MIN: 23.97 / MAX: 39.95). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Mobile Neural Network 1.1.3 - Model: MobileNetV2_224. ms, Fewer Is Better. GCC 10.2: 3.240 (SE +/- 0.049, N = 3; MIN: 3.12 / MAX: 11.31). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Mobile Neural Network 1.1.3 - Model: mobilenet-v1-1.0. ms, Fewer Is Better. GCC 10.2: 2.351 (SE +/- 0.027, N = 3; MIN: 2.27 / MAX: 7.49). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
Mobile Neural Network 1.1.3 - Model: inception-v3. ms, Fewer Is Better. GCC 10.2: 32.34 (SE +/- 0.09, N = 3; MIN: 31.33 / MAX: 42.61). 1. (CXX) g++ options: -O3 -march=native -std=c++11 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
NCNN 20201218 - Target: CPU-v2-v2 - Model: mobilenet-v2. ms, Fewer Is Better. GCC 10.2: 4.43 (SE +/- 0.01, N = 15; MIN: 4.19 / MAX: 11.09). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
NCNN 20201218 - Target: CPU-v3-v3 - Model: mobilenet-v3. ms, Fewer Is Better. GCC 10.2: 3.85 (SE +/- 0.02, N = 15; MIN: 3.74 / MAX: 10.85). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
NCNN 20201218 - Target: CPU - Model: shufflenet-v2. ms, Fewer Is Better. GCC 10.2: 4.23 (SE +/- 0.01, N = 15; MIN: 4.15 / MAX: 9.05). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
NCNN 20201218 - Target: CPU - Model: mnasnet. ms, Fewer Is Better. GCC 10.2: 3.93 (SE +/- 0.02, N = 15; MIN: 3.71 / MAX: 6.06). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
NCNN 20201218 - Target: CPU - Model: efficientnet-b0. ms, Fewer Is Better. GCC 10.2: 5.32 (SE +/- 0.02, N = 15; MIN: 5.15 / MAX: 13.83). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
NCNN 20201218 - Target: CPU - Model: blazeface. ms, Fewer Is Better. GCC 10.2: 1.83 (SE +/- 0.01, N = 15; MIN: 1.77 / MAX: 3.9). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
NCNN 20201218 - Target: CPU - Model: googlenet. ms, Fewer Is Better. GCC 10.2: 12.76 (SE +/- 0.06, N = 15; MIN: 12.19 / MAX: 19.36). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
NCNN 20201218 - Target: CPU - Model: vgg16. ms, Fewer Is Better. GCC 10.2: 57.89 (SE +/- 0.12, N = 15; MIN: 55.89 / MAX: 80.86). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
NCNN 20201218 - Target: CPU - Model: resnet18. ms, Fewer Is Better. GCC 10.2: 14.11 (SE +/- 0.05, N = 15; MIN: 13.84 / MAX: 23.15). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
NCNN 20201218 - Target: CPU - Model: alexnet. ms, Fewer Is Better. GCC 10.2: 10.82 (SE +/- 0.09, N = 15; MIN: 10.41 / MAX: 17.59). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
NCNN 20201218 - Target: CPU - Model: resnet50. ms, Fewer Is Better. GCC 10.2: 25.67 (SE +/- 0.21, N = 15; MIN: 24.52 / MAX: 35.96). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
NCNN 20201218 - Target: CPU - Model: yolov4-tiny. ms, Fewer Is Better. GCC 10.2: 20.77 (SE +/- 0.17, N = 15; MIN: 19.69 / MAX: 43.19). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
NCNN 20201218 - Target: CPU - Model: squeezenet_ssd. ms, Fewer Is Better. GCC 10.2: 13.77 (SE +/- 0.06, N = 15; MIN: 13.25 / MAX: 23.45). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
NCNN 20201218 - Target: CPU - Model: regnety_400m. ms, Fewer Is Better. GCC 10.2: 17.61 (SE +/- 0.06, N = 15; MIN: 16.94 / MAX: 25.97). 1. (CXX) g++ options: -O3 -march=native -rdynamic -lgomp -lpthread
TNN 0.2.3 - Target: CPU - Model: SqueezeNet v1.1. ms, Fewer Is Better. GCC 10.2: 211.57 (SE +/- 0.57, N = 3; MIN: 206.88 / MAX: 212.83). 1. (CXX) g++ options: -O3 -march=native -fopenmp -pthread -fvisibility=hidden -rdynamic -ldl
Timed MrBayes Analysis
This test performs a Bayesian analysis of a set of primate genome sequences in order to estimate their phylogeny. Learn more via the OpenBenchmarking.org test page.
Timed MrBayes Analysis 3.2.7 - Primate Phylogeny Analysis. Seconds, Fewer Is Better. GCC 10.2: 59.87 (SE +/- 0.15, N = 3). 1. (CC) gcc options: -mmmx -msse -msse2 -msse3 -mssse3 -msse4.1 -msse4.2 -msse4a -msha -maes -mavx -mfma -mavx2 -mrdrnd -mbmi -mbmi2 -madx -mabm -O3 -std=c99 -pedantic -march=native -lm -lreadline
OpenFOAM
OpenFOAM is the leading free, open source software for computational fluid dynamics (CFD). Learn more via the OpenBenchmarking.org test page.
OpenFOAM 8 - Input: Motorbike 30M. Seconds, Fewer Is Better. GCC 10.2: 97.75 (SE +/- 0.08, N = 3). 1. (CXX) g++ options: -std=c++11 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
C-Ray
This is a test of C-Ray, a simple raytracer designed to test the floating-point CPU performance. This test is multi-threaded (16 threads per core), will shoot 8 rays per pixel for anti-aliasing, and will generate a 1600 x 1200 image. Learn more via the OpenBenchmarking.org test page.
C-Ray 1.1 - Total Time - 4K, 16 Rays Per Pixel. Seconds, Fewer Is Better. GCC 10.2: 25.09 (SE +/- 0.07, N = 3). 1. (CC) gcc options: -lm -lpthread -O3 -march=native
POV-Ray
This is a test of POV-Ray, the Persistence of Vision Raytracer. POV-Ray is used to create 3D graphics using ray-tracing. Learn more via the OpenBenchmarking.org test page.
POV-Ray 3.7.0.7 - Trace Time. Seconds, Fewer Is Better. GCC 10.2: 24.09 (SE +/- 0.09, N = 3). 1. (CXX) g++ options: -pipe -O3 -ffast-math -march=native -pthread -lSDL -lXpm -lSM -lICE -lX11 -lIlmImf -lIlmImf-2_5 -lImath-2_5 -lHalf-2_5 -lIex-2_5 -lIexMath-2_5 -lIlmThread-2_5 -lIlmThread -ltiff -ljpeg -lpng -lz -lrt -lm -lboost_thread -lboost_system
Smallpt
Smallpt is a C++ global illumination renderer written in less than 100 lines of code. Global illumination is done via unbiased Monte Carlo path tracing and there is multi-threading support via the OpenMP library. Learn more via the OpenBenchmarking.org test page.
Smallpt 1.0 - Global Illumination Renderer; 128 Samples. Seconds, Fewer Is Better. GCC 10.2: 4.674 (SE +/- 0.015, N = 3). 1. (CXX) g++ options: -fopenmp -O3 -march=native
Opus Codec Encoding
Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
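The test times the opusenc tool from Opus-Tools encoding a WAV file; underneath, opusenc drives libopus. A minimal sketch of that encoder API on a placeholder buffer of silence is shown below; the bitrate and frame size are illustrative choices, not the test's settings.

    // Minimal libopus sketch: encode one 20 ms stereo frame of silence at 48 kHz.
    #include <opus/opus.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int err = 0;
        OpusEncoder* enc = opus_encoder_create(48000, 2, OPUS_APPLICATION_AUDIO, &err);
        if (err != OPUS_OK) return 1;
        opus_encoder_ctl(enc, OPUS_SET_BITRATE(96000));   // placeholder bitrate

        opus_int16 pcm[960 * 2];                 // 960 samples/channel = 20 ms at 48 kHz
        std::memset(pcm, 0, sizeof(pcm));        // placeholder audio (silence)

        unsigned char packet[4000];
        opus_int32 bytes = opus_encode(enc, pcm, 960, packet, sizeof(packet));
        if (bytes < 0) { opus_encoder_destroy(enc); return 1; }

        std::printf("encoded frame: %d bytes\n", (int)bytes);
        opus_encoder_destroy(enc);
        return 0;
    }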
OpenBenchmarking.org Seconds, Fewer Is Better Opus Codec Encoding 1.3.1 WAV To Opus Encode GCC 10.2 1.2339 2.4678 3.7017 4.9356 6.1695 SE +/- 0.031, N = 5 5.484 1. (CXX) g++ options: -O3 -march=native -fvisibility=hidden -logg -lm
Gcrypt Library
Libgcrypt is a general purpose cryptographic library developed as part of the GnuPG project. This is a benchmark of libgcrypt's integrated benchmark, measuring the time to run the benchmark command with a cipher/mac/hash repetition count set to 50 for a simple, high-level look at the overall crypto performance of the system under test. Learn more via the OpenBenchmarking.org test page.
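The figure below is the wall-clock time of libgcrypt's bundled benchmark command. As a small illustration of calling the library directly, here is a SHA-256 hash via the gcry_md interface; the message is a placeholder and this is unrelated to the benchmark harness itself.

    // Minimal libgcrypt sketch: hash a buffer with SHA-256 via gcry_md_hash_buffer.
    #include <gcrypt.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        // The version check doubles as library initialization.
        if (!gcry_check_version(GCRYPT_VERSION)) return 1;
        gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);

        const char message[] = "placeholder message";
        const unsigned int digest_len = gcry_md_get_algo_dlen(GCRY_MD_SHA256);
        unsigned char digest[64];
        gcry_md_hash_buffer(GCRY_MD_SHA256, digest, message, std::strlen(message));

        for (unsigned int i = 0; i < digest_len; i++) std::printf("%02x", digest[i]);
        std::printf("\n");
        return 0;
    }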
Gcrypt Library 1.9. Seconds, Fewer Is Better. GCC 10.2: 171.19 (SE +/- 0.29, N = 3). 1. (CC) gcc options: -O3 -march=native -fvisibility=hidden -lgpg-error
Ngspice
Ngspice is an open-source SPICE circuit simulator. Ngspice was originally based on the Berkeley SPICE electronic circuit simulator. Ngspice supports basic threading using OpenMP. This test profile is making use of the ISCAS 85 benchmark circuits. Learn more via the OpenBenchmarking.org test page.
Ngspice 34 - Circuit: C2670. Seconds, Fewer Is Better. GCC 10.2: 71.60 (SE +/- 0.21, N = 3). 1. (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE
Ngspice 34 - Circuit: C7552. Seconds, Fewer Is Better. GCC 10.2: 62.82 (SE +/- 0.15, N = 3). 1. (CC) gcc options: -O3 -march=native -fopenmp -lm -lstdc++ -lfftw3 -lXaw -lXmu -lXt -lXext -lX11 -lSM -lICE
RNNoise
RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
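The test runs the bundled denoiser over the 26-minute RAW capture. For reference, a small sketch of the library's frame-by-frame C API follows; the audio buffer is a placeholder, and the model argument to rnnoise_create() is an assumption about recent snapshots of the library.

    // Hypothetical RNNoise sketch: denoise one 10 ms frame (480 samples at 48 kHz).
    // Assumption: this snapshot's rnnoise_create() accepts a model pointer (NULL = built-in).
    #include <rnnoise.h>
    #include <cstdio>

    int main() {
        DenoiseState* st = rnnoise_create(NULL);

        // RNNoise expects floats holding 16-bit sample values (not normalized to +/-1).
        float frame[480] = {0};                               // placeholder frame of silence
        float vad = rnnoise_process_frame(st, frame, frame);  // in-place denoise

        std::printf("voice activity probability: %f\n", vad);
        rnnoise_destroy(st);
        return 0;
    }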
RNNoise 2020-06-28. Seconds, Fewer Is Better. GCC 10.2: 14.20 (SE +/- 0.04, N = 3). 1. (CC) gcc options: -O3 -march=native -pedantic -fvisibility=hidden
WebP2 Image Encode
This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.
WebP2 Image Encode 20210126 - Encode Settings: Default. Seconds, Fewer Is Better. GCC 10.2: 2.274 (SE +/- 0.005, N = 3). 1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
WebP2 Image Encode 20210126 - Encode Settings: Quality 75, Compression Effort 7. Seconds, Fewer Is Better. GCC 10.2: 111.80 (SE +/- 1.06, N = 3). 1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
WebP2 Image Encode 20210126 - Encode Settings: Quality 95, Compression Effort 7. Seconds, Fewer Is Better. GCC 10.2: 203.81 (SE +/- 0.04, N = 3). 1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
WebP2 Image Encode 20210126 - Encode Settings: Quality 100, Compression Effort 5. Seconds, Fewer Is Better. GCC 10.2: 6.414 (SE +/- 0.011, N = 3). 1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
WebP2 Image Encode 20210126 - Encode Settings: Quality 100, Lossless Compression. Seconds, Fewer Is Better. GCC 10.2: 367.37 (SE +/- 0.42, N = 3). 1. (CXX) g++ options: -O3 -march=native -fno-rtti -rdynamic -lpthread -ljpeg -lgif -lwebp -lwebpdemux
ASTC Encoder
ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile does a coding test of both compression/decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 2.4 - Preset: Medium. Seconds, Fewer Is Better. GCC 10.2: 4.0524 (SE +/- 0.0178, N = 3). 1. (CXX) g++ options: -O3 -march=native -flto -pthread
Basis Universal 1.13 - Settings: UASTC Level 0. Seconds, Fewer Is Better. GCC 10.2: 5.157 (SE +/- 0.023, N = 3). 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
Basis Universal 1.13 - Settings: UASTC Level 2. Seconds, Fewer Is Better. GCC 10.2: 15.90 (SE +/- 0.05, N = 3). 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
Basis Universal 1.13 - Settings: UASTC Level 3. Seconds, Fewer Is Better. GCC 10.2: 28.13 (SE +/- 0.04, N = 3). 1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
Testing initiated at 14 March 2021 08:46 by user pts.