AMD FX-8370 Eight-Core testing with a MSI 970 GAMING (MS-7693) v4.0 (V22.3 BIOS) and AMD Radeon HD 5770 1GB on Ubuntu 20.10 via the Phoronix Test Suite.
Vet 1 Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-10-JvwpWM/gcc-10-10.2.0/debian/tmp-gcn/usr,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x6000852
Graphics Notes: GLAMOR
Python Notes: Python 3.8.6
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Vet 2 Processor: AMD FX-8370 Eight-Core @ 4.00GHz (4 Cores / 8 Threads), Motherboard: MSI 970 GAMING (MS-7693) v4.0 (V22.3 BIOS), Chipset: AMD RD9x0/RX980, Memory: 8GB, Disk: 120GB TOSHIBA TR150, Graphics: AMD Radeon HD 5770 1GB, Audio: Realtek ALC1150, Monitor: G237HL, Network: Qualcomm Atheros Killer E220x
OS: Ubuntu 20.10, Kernel: 5.8.0-33-generic (x86_64), Desktop: GNOME Shell 3.38.1, Display Server: X Server 1.20.9, Display Driver: modesetting 1.20.9, OpenGL: 3.3 Mesa 20.2.1 (LLVM 11.0.0), Compiler: GCC 10.2.0, File-System: ext4, Screen Resolution: 1920x1080
Stress-NG 0.11.07 (Bogo Ops/s; more is better):
- Test: NUMA: Vet 2: 84.08 (SE ±0.62, N=3); Vet 1: 84.19 (SE ±0.28, N=3)
- Test: MEMFD: Vet 2: 126.33 (SE ±0.92, N=3); Vet 1: 127.99 (SE ±0.95, N=3)
- Test: Atomic: Vet 2: 54711.61 (SE ±114.37, N=3); Vet 1: 54519.87 (SE ±32.93, N=3)
- Test: Crypto: Vet 2: 869.13 (SE ±0.06, N=3); Vet 1: 872.89 (SE ±1.19, N=3)
- Test: Malloc: Vet 2: 21949359.53 (SE ±274937.61, N=3); Vet 1: 22154552.21 (SE ±161586.44, N=3)
- Test: Forking: Vet 2: 18535.23 (SE ±143.47, N=3); Vet 1: 18452.92 (SE ±48.99, N=3)
- Test: SENDFILE: Vet 2: 50818.29 (SE ±19.61, N=3); Vet 1: 50924.75 (SE ±40.91, N=3)
- Test: CPU Cache: Vet 2: 15.31 (SE ±0.17, N=15); Vet 1: 14.76 (SE ±0.30, N=15)
- Test: CPU Stress: Vet 2: 1831.98 (SE ±16.57, N=3); Vet 1: 1823.75 (SE ±6.19, N=3)
- Test: Semaphores: Vet 2: 505851.94 (SE ±151.49, N=3); Vet 1: 505610.19 (SE ±6.77, N=3)
- Test: Matrix Math: Vet 2: 15086.29 (SE ±2.24, N=3); Vet 1: 15073.34 (SE ±4.41, N=3)
- Test: Vector Math: Vet 2: 23915.82 (SE ±6.13, N=3); Vet 1: 23898.11 (SE ±9.23, N=3)
- Test: Memory Copying: Vet 2: 1003.73 (SE ±2.16, N=3); Vet 1: 1002.06 (SE ±2.90, N=3)
- Test: Socket Activity: Vet 2: 1783.24 (SE ±6.82, N=3); Vet 1: 1776.16 (SE ±7.16, N=3)
- Test: Context Switching: Vet 2: 917908.08 (SE ±6076.39, N=3); Vet 1: 958188.67 (SE ±9262.61, N=3)
- Test: Glibc C String Functions: Vet 2: 266600.95 (SE ±439.83, N=3); Vet 1: 267246.20 (SE ±202.00, N=3)
- Test: Glibc Qsort Data Sorting: Vet 2: 55.53 (SE ±0.11, N=3); Vet 1: 55.09 (SE ±0.40, N=3)
- Test: System V Message Passing: Vet 2: 2206047.98 (SE ±6160.98, N=3); Vet 1: 2212153.79 (SE ±5483.77, N=3)
1. (CC) gcc options: -O2 -std=gnu99 -lm -lcrypt -lrt -lz -ldl -lpthread -lc
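Each result above is reported as a mean over N runs together with a standard error. A minimal sketch of how those two statistics relate, using generic formulas (this is not the Phoronix Test Suite's actual code, and the NUMA figures are only reused as an example):

```python
import math

def standard_error(samples):
    """Standard error of the mean (sample stddev / sqrt(N)),
    the "SE +/-" figure reported alongside each result."""
    n = len(samples)
    mean = sum(samples) / n
    # Bessel-corrected sample variance (divide by n - 1, not n)
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return math.sqrt(variance) / math.sqrt(n)

def percent_diff(baseline, other):
    """Relative difference of `other` versus `baseline`, in percent."""
    return (other - baseline) / baseline * 100.0

# The NUMA means above (84.08 vs 84.19) differ by roughly 0.13%,
# which is well within their reported standard errors.
delta = percent_diff(84.08, 84.19)
```

With standard errors of 0.62 and 0.28 on means that differ by about 0.11 Bogo Ops/s, the two configurations are statistically indistinguishable on this test.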
Kvazaar This is a test of Kvazaar as a CPU-based H.265 video encoder written in the C programming language and optimized in Assembly. Kvazaar is the winner of the 2016 ACM Open-Source Software Competition and developed at the Ultra Video Group, Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
Kvazaar 2.0 (Frames Per Second; more is better):
- Bosphorus 4K, Medium preset: Vet 2: 0.45 (SE ±0.00, N=3); Vet 1: 0.45 (SE ±0.00, N=3)
- Bosphorus 1080p, Medium preset: Vet 2: 2.09 (SE ±0.00, N=3); Vet 1: 2.09 (SE ±0.00, N=3)
- Bosphorus 4K, Very Fast preset: Vet 2: 1.35 (SE ±0.00, N=3); Vet 1: 1.35 (SE ±0.00, N=3)
- Bosphorus 4K, Ultra Fast preset: Vet 2: 2.98 (SE ±0.00, N=3); Vet 1: 2.95 (SE ±0.04, N=4)
- Bosphorus 1080p, Very Fast preset: Vet 2: 5.55 (SE ±0.00, N=3); Vet 1: 5.55 (SE ±0.00, N=3)
- Bosphorus 1080p, Ultra Fast preset: Vet 2: 11.85 (SE ±0.01, N=3); Vet 1: 11.84 (SE ±0.02, N=3)
1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt
x264 This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.
x264 2019-12-17, H.264 Video Encoding (Frames Per Second; more is better): Vet 2: 30.21 (SE ±0.27, N=7); Vet 1: 30.29 (SE ±0.22, N=11)
1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize
x265 This is a simple test of the x265 encoder run on the CPU with 1080p and 4K options for H.265 video encode performance with x265. Learn more via the OpenBenchmarking.org test page.
x265 3.4 (Frames Per Second; more is better):
- Bosphorus 4K: Vet 2: 4.55 (SE ±0.00, N=3); Vet 1: 4.55 (SE ±0.01, N=3)
- Bosphorus 1080p: Vet 2: 19.32 (SE ±0.08, N=3); Vet 1: 19.52 (SE ±0.04, N=3)
1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
simdjson This is a benchmark of SIMDJSON, a high performance JSON parser. SIMDJSON aims to be the fastest JSON parser and is used by projects like Microsoft FishStore, Yandex ClickHouse, Shopify, and others. Learn more via the OpenBenchmarking.org test page.
simdjson 0.7.1 (GB/s; more is better):
- Throughput Test: Kostya: Vet 2: 0.30 (SE ±0.00, N=3); Vet 1: 0.29 (SE ±0.00, N=3)
- Throughput Test: LargeRandom: Vet 2: 0.24 (SE ±0.00, N=3); Vet 1: 0.24 (SE ±0.00, N=3)
- Throughput Test: PartialTweets: Vet 2: 0.37 (SE ±0.00, N=3); Vet 1: 0.37 (SE ±0.00, N=3)
- Throughput Test: DistinctUserID: Vet 2: 0.38 (SE ±0.00, N=3); Vet 1: 0.38 (SE ±0.00, N=3)
1. (CXX) g++ options: -O3 -pthread
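The GB/s figure in these tests is simply bytes parsed per second of wall-clock time. A sketch of that metric using Python's stdlib json parser as a stand-in (simdjson itself is a C++ library and is far faster than this; the payload below is synthetic and only for illustration):

```python
import json
import time

def parse_throughput_gbps(payload: bytes, iterations: int = 50) -> float:
    """Bytes parsed per wall-clock second, expressed in GB/s."""
    start = time.perf_counter()
    for _ in range(iterations):
        json.loads(payload)  # stand-in for the simdjson parse
    elapsed = time.perf_counter() - start
    return len(payload) * iterations / elapsed / 1e9

# A made-up document, loosely in the spirit of the test inputs above.
doc = json.dumps({"users": [{"id": i, "name": "user%d" % i}
                            for i in range(1000)]}).encode()
rate = parse_throughput_gbps(doc)
```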
Cryptsetup, PBKDF2-whirlpool (Iterations Per Second; more is better): Vet 2: 394414 (SE ±1763.06, N=3); Vet 1: 395398 (SE ±1238.75, N=3)
IndigoBench 4.4, Acceleration: CPU, Scene: Supercar (M samples/s; more is better): Vet 2: 1.207 (SE ±0.007, N=3); Vet 1: 1.210 (SE ±0.003, N=3)
LuxCoreRender 2.3, Scene: Rainbow Colors and Prism (M samples/sec; more is better): Vet 2: 0.51 (SE ±0.01, N=3; min 0.49 / max 0.58); Vet 1: 0.52 (SE ±0.01, N=3; min 0.49 / max 0.58)
Stockfish This is a test of Stockfish, an advanced C++11 chess benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.
Stockfish 12, Total Time (Nodes Per Second; more is better): Vet 2: 6814594 (SE ±53835.86, N=13); Vet 1: 6971484 (SE ±92057.35, N=3)
1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver
Redis 6.0.9 (Requests Per Second; more is better):
- Test: SADD: Vet 2: 1205601.54 (SE ±10488.62, N=3); Vet 1: 1212043.04 (SE ±10820.63, N=3)
- Test: LPUSH: Vet 2: 796182.64 (SE ±9548.09, N=4); Vet 1: 806438.96 (SE ±8135.45, N=3)
- Test: GET: Vet 2: 1291856.21 (SE ±16057.72, N=3); Vet 1: 1420635.00 (SE ±7264.81, N=3)
- Test: SET: Vet 2: 1106164.79 (SE ±11513.44, N=3); Vet 1: 1077335.42 (SE ±6967.52, N=3)
1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
Node.js V8 Web Tooling Benchmark Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile can test the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.
Node.js V8 Web Tooling Benchmark (runs/s; more is better): Vet 2: 5.15 (SE ±0.07, N=3); Vet 1: 5.31 (SE ±0.02, N=3)
1. Nodejs v12.18.2
PHPBench PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.
PHPBench 0.8.1, PHP Benchmark Suite (Score; more is better): Vet 2: 420714 (SE ±1456.68, N=3); Vet 1: 415846 (SE ±4041.48, N=12)
CLOMP CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
CLOMP 1.2, Static OMP Speedup (Speedup; more is better): Vet 2: 3.0 (SE ±0.03, N=12); Vet 1: 2.8 (SE ±0.03, N=3)
1. (CC) gcc options: -fopenmp -O3 -lm
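CLOMP's "Static OMP Speedup" figure is the usual parallel-speedup ratio: serial runtime divided by threaded runtime for the same work. A one-line sketch of the definition (generic, not CLOMP's own harness):

```python
def static_omp_speedup(serial_seconds: float, parallel_seconds: float) -> float:
    """Speedup = serial time / parallel time; 1.0 means threading bought nothing."""
    return serial_seconds / parallel_seconds

# A 3.0x result like Vet 2's means the OpenMP static-schedule loop
# finished in one third of the serial time on this 8-thread FX-8370.
```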
InfluxDB This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
InfluxDB 1.8.2 (val/sec; more is better):
- Concurrent Streams: 4, Batch Size: 10000, Tags: 2,5000,1, Points Per Series: 10000: Vet 2: 579620.2 (SE ±8990.08, N=12); Vet 1: 594226.4 (SE ±4437.95, N=3)
- Concurrent Streams: 64, Batch Size: 10000, Tags: 2,5000,1, Points Per Series: 10000: Vet 2: 707886.5 (SE ±7227.41, N=5); Vet 1: 721348.2 (SE ±7598.50, N=5)
WebP Image Encode This is a test of Google's libwebp with the cwebp image encode utility and using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
WebP Image Encode 1.1 (Encode Time in Seconds; fewer is better):
- Encode Settings: Default: Vet 2: 2.433 (SE ±0.006, N=3); Vet 1: 2.444 (SE ±0.005, N=3)
- Encode Settings: Quality 100: Vet 2: 3.554 (SE ±0.008, N=3); Vet 1: 3.573 (SE ±0.010, N=3)
- Encode Settings: Quality 100, Lossless: Vet 2: 30.03 (SE ±0.18, N=3); Vet 1: 30.65 (SE ±0.04, N=3)
- Encode Settings: Quality 100, Highest Compression: Vet 2: 11.40 (SE ±0.04, N=3); Vet 1: 11.41 (SE ±0.05, N=3)
- Encode Settings: Quality 100, Lossless, Highest Compression: Vet 2: 66.78 (SE ±0.06, N=3); Vet 1: 66.56 (SE ±0.20, N=3)
1. (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
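When collapsing a block of results like the five WebP encode times above into a single composite per configuration, a scale-invariant aggregate such as the geometric mean is the conventional choice (a generic illustration, not necessarily the exact aggregation OpenBenchmarking.org applies):

```python
import math

def geometric_mean(values):
    """Geometric mean: the nth root of the product, computed in log
    space to avoid overflow. Scale-invariant, so tests with very
    different magnitudes (2.4s vs 66.8s) contribute equally."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Vet 2's five WebP encode times from the block above:
vet2_times = [2.433, 3.554, 30.03, 11.40, 66.78]
composite = geometric_mean(vet2_times)
```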
Caffe This is a benchmark of the Caffe deep learning framework and currently supports the AlexNet and GoogleNet models and execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Caffe 2020-02-13 (Milli-Seconds; fewer is better; Acceleration: CPU):
- Model: AlexNet, Iterations: 100: Vet 2: 70801 (SE ±73.54, N=3); Vet 1: 71021 (SE ±204.80, N=3)
- Model: GoogleNet, Iterations: 100: Vet 2: 178137 (SE ±98.17, N=3); Vet 1: 177755 (SE ±126.85, N=3)
1. (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.0 (ms; fewer is better; Engine: CPU; all results N = 3):
- IP Shapes 1D, f32: Vet 2: 32.93 (SE ±0.08, min 31.78); Vet 1: 32.62 (SE ±0.06, min 31.6)
- IP Shapes 3D, f32: Vet 2: 24.68 (SE ±0.08, min 23.97); Vet 1: 24.07 (SE ±0.13, min 23.42)
- IP Shapes 1D, u8s8f32: Vet 2: 10.20 (SE ±0.01, min 9.52); Vet 1: 10.22 (SE ±0.00, min 9.53)
- IP Shapes 3D, u8s8f32: Vet 2: 5.92877 (SE ±0.01645, min 5.41); Vet 1: 5.74220 (SE ±0.00798, min 5.23)
- Convolution Batch Shapes Auto, f32: Vet 2: 49.78 (SE ±0.06, min 47.68); Vet 1: 48.15 (SE ±0.07, min 46.47)
- Deconvolution Batch shapes_1d, f32: Vet 2: 71.13 (SE ±0.09, min 68.74); Vet 1: 71.20 (SE ±0.12, min 68.92)
- Deconvolution Batch shapes_3d, f32: Vet 2: 114.16 (SE ±0.24, min 111.81); Vet 1: 113.86 (SE ±0.07, min 111.71)
- Convolution Batch Shapes Auto, u8s8f32: Vet 2: 38.23 (SE ±0.14, min 35.84); Vet 1: 38.91 (SE ±0.11, min 36.73)
- Deconvolution Batch shapes_1d, u8s8f32: Vet 2: 21.53 (SE ±0.03, min 20.01); Vet 1: 21.63 (SE ±0.07, min 19.96)
- Deconvolution Batch shapes_3d, u8s8f32: Vet 2: 22.20 (SE ±0.07, min 21.45); Vet 1: 22.21 (SE ±0.02, min 21.47)
- Recurrent Neural Network Training, f32: Vet 2: 44788.0 (SE ±6.26, min 44662.9); Vet 1: 44739.8 (SE ±11.37, min 44603)
- Recurrent Neural Network Inference, f32: Vet 2: 22775.3 (SE ±1.32, min 22676.4); Vet 1: 22758.6 (SE ±11.99, min 22625.2)
- Recurrent Neural Network Training, u8s8f32: Vet 2: 44784.5 (SE ±15.69, min 44650.3); Vet 1: 44745.0 (SE ±4.04, min 44611.3)
- Recurrent Neural Network Inference, u8s8f32: Vet 2: 22789.7 (SE ±15.06, min 22662.3); Vet 1: 22769.0 (SE ±7.47, min 22649.1)
- Matrix Multiply Batch Shapes Transformer, f32: Vet 2: 13.60 (SE ±0.01, min 12.65); Vet 1: 13.59 (SE ±0.00, min 12.65)
- Recurrent Neural Network Training, bf16bf16bf16: Vet 2: 44784.2 (SE ±2.68, min 44656.6); Vet 1: 44728.9 (SE ±13.80, min 44551.4)
- Recurrent Neural Network Inference, bf16bf16bf16: Vet 2: 22787.9 (SE ±1.49, min 22661.6); Vet 1: 22771.8 (SE ±10.92, min 22638.3)
- Matrix Multiply Batch Shapes Transformer, u8s8f32: Vet 2: 11.76 (SE ±0.03, min 10.37); Vet 1: 11.75 (SE ±0.05, min 10.39)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
Mobile Neural Network MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 2020-09-17 (ms; fewer is better; all results N = 3):
- Model: SqueezeNetV1.0: Vet 2: 157.85 (SE ±0.77, min 151.35 / max 262.21); Vet 1: 159.81 (SE ±2.02, min 150.79 / max 263.93)
- Model: resnet-v2-50: Vet 2: 1128.31 (SE ±2.75, min 1107.01 / max 1197.55); Vet 1: 1131.68 (SE ±2.09, min 1108.54 / max 1212.22)
- Model: MobileNetV2_224: Vet 2: 85.55 (SE ±0.84, min 81.09 / max 142.92); Vet 1: 84.92 (SE ±0.30, min 81.76 / max 138.82)
- Model: mobilenet-v1-1.0: Vet 2: 168.89 (SE ±0.50, min 164.85 / max 289.14); Vet 1: 169.00 (SE ±0.45, min 165.04 / max 249.99)
- Model: inception-v3: Vet 2: 1325.20 (SE ±2.29, min 1283.55 / max 1801.49); Vet 1: 1344.89 (SE ±16.11, min 1285.6 / max 1579.75)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
NCNN NCNN is a high performance neural network inference framework optimized for mobile and other platforms developed by Tencent. Learn more via the OpenBenchmarking.org test page.
NCNN 20201218 (ms; fewer is better; Target: CPU; all results N = 3):
- Model: mobilenet: Vet 2: 97.78 (SE ±0.21, min 94.04 / max 112.57); Vet 1: 98.29 (SE ±0.24, min 93.92 / max 120.36)
- Target: CPU-v2-v2, Model: mobilenet-v2: Vet 2: 24.80 (SE ±0.09, min 23.05 / max 35.19); Vet 1: 25.05 (SE ±0.09, min 22.78 / max 36.55)
- Target: CPU-v3-v3, Model: mobilenet-v3: Vet 2: 23.10 (SE ±0.04, min 21.25 / max 39.1); Vet 1: 23.00 (SE ±0.06, min 21.42 / max 41.88)
- Model: shufflenet-v2: Vet 2: 18.33 (SE ±0.15, min 16.43 / max 32.5); Vet 1: 18.80 (SE ±0.10, min 16.63 / max 34.89)
- Model: mnasnet: Vet 2: 24.71 (SE ±0.08, min 22.88 / max 34); Vet 1: 24.69 (SE ±0.04, min 22.88 / max 37.47)
- Model: efficientnet-b0: Vet 2: 39.93 (SE ±0.20, min 37.28 / max 54.83); Vet 1: 39.88 (SE ±0.14, min 37.27 / max 56.04)
- Model: blazeface: Vet 2: 4.43 (SE ±0.05, min 3.96 / max 22.46); Vet 1: 4.55 (SE ±0.03, min 3.95 / max 18.03)
- Model: googlenet: Vet 2: 99.69 (SE ±0.16, min 95.68 / max 117.36); Vet 1: 99.74 (SE ±0.10, min 95.23 / max 114.59)
- Model: vgg16: Vet 2: 767.18 (SE ±0.31, min 749.76 / max 809.52); Vet 1: 767.38 (SE ±0.31, min 748.84 / max 812.65)
- Model: resnet18: Vet 2: 91.27 (SE ±0.03, min 88.86 / max 108.88); Vet 1: 91.23 (SE ±0.16, min 88.35 / max 106.84)
- Model: alexnet: Vet 2: 49.99 (SE ±0.09, min 47.21 / max 77.98); Vet 1: 49.72 (SE ±0.08, min 47.15 / max 75.12)
- Model: resnet50: Vet 2: 211.65 (SE ±0.29, min 206.15 / max 234.2); Vet 1: 211.76 (SE ±0.15, min 205.66 / max 235.93)
- Model: yolov4-tiny: Vet 2: 202.21 (SE ±0.27, min 197.31 / max 217.34); Vet 1: 201.70 (SE ±0.34, min 197.09 / max 225.22)
- Model: squeezenet_ssd: Vet 2: 103.84 (SE ±0.21, min 96.54 / max 140.32); Vet 1: 103.79 (SE ±0.20, min 97.27 / max 120.35)
- Model: regnety_400m: Vet 2: 39.99 (SE ±0.12, min 36.36 / max 56.98); Vet 1: 40.24 (SE ±0.31, min 36.63 / max 98.56)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
TNN TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.2.3 (ms; fewer is better; Target: CPU):
- Model: MobileNet v2: Vet 2: 528.55 (SE ±0.14, N=3; min 518.27 / max 548.78); Vet 1: 528.35 (SE ±0.75, N=3; min 519.59 / max 546.48)
- Model: SqueezeNet v1.1: Vet 2: 513.59 (SE ±0.39, N=3; min 509.34 / max 530.2); Vet 1: 513.82 (SE ±0.29, N=3; min 510.88 / max 517.71)
1. (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
YafaRay YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.
YafaRay 3.4.1, Total Time For Sample Scene (Seconds; fewer is better): Vet 2: 437.99 (SE ±1.55, N=3); Vet 1: 439.47 (SE ±0.95, N=3)
1. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread
DeepSpeech Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
DeepSpeech 0.6, Acceleration: CPU (Seconds; fewer is better): Vet 2: 286.13 (SE ±1.12, N=3); Vet 1: 287.01 (SE ±1.20, N=3)
Opus Codec Encoding
Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, fewer is better)
  Vet 2: 15.74 (SE +/- 0.03, N = 5)
  Vet 1: 15.71 (SE +/- 0.03, N = 5)
  1. (CXX) g++ options: -fvisibility=hidden -logg -lm
RNNoise
RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This single-threaded test profile measures the time to denoise a sample 26-minute-long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
RNNoise 2020-06-28 (Seconds, fewer is better)
  Vet 2: 47.54 (SE +/- 0.62, N = 3)
  Vet 1: 47.77 (SE +/- 0.65, N = 3)
  1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden
Basis Universal
Basis Universal is a GPU texture codec. This test times how long it takes to convert sRGB PNGs into Basis Universal assets with various settings. Learn more via the OpenBenchmarking.org test page.
Basis Universal 1.12 - Settings: ETC1S (Seconds, fewer is better)
  Vet 2: 104.33 (SE +/- 0.39, N = 3)
  Vet 1: 104.18 (SE +/- 0.16, N = 3)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
Basis Universal 1.12 - Settings: UASTC Level 0 (Seconds, fewer is better)
  Vet 2: 14.89 (SE +/- 0.03, N = 3)
  Vet 1: 14.90 (SE +/- 0.03, N = 3)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
Basis Universal 1.12 - Settings: UASTC Level 2 (Seconds, fewer is better)
  Vet 2: 88.01 (SE +/- 0.04, N = 3)
  Vet 1: 88.18 (SE +/- 0.13, N = 3)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
Basis Universal 1.12 - Settings: UASTC Level 3 (Seconds, fewer is better)
  Vet 2: 169.52 (SE +/- 0.10, N = 3)
  Vet 1: 178.49 (SE +/- 3.41, N = 9)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
Basis Universal 1.12 - Settings: UASTC Level 2 + RDO Post-Processing (Seconds, fewer is better)
  Vet 2: 1118.93 (SE +/- 7.10, N = 3)
  Vet 1: 1121.06 (SE +/- 9.83, N = 9)
  1. (CXX) g++ options: -std=c++11 -fvisibility=hidden -fPIC -fno-strict-aliasing -O3 -rdynamic -lm -lpthread
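The only result in this section with a notable gap between the two runs is UASTC Level 3 (169.52 s vs 178.49 s), where Vet 1's much larger SE of 3.41 over N = 9 suggests run-to-run variance rather than a real regression. As a quick arithmetic check, the relative difference works out as:

```python
def percent_diff(baseline, other):
    """Percentage by which `other` exceeds `baseline`."""
    return (other - baseline) / baseline * 100.0

# UASTC Level 3: Vet 2 = 169.52 s, Vet 1 = 178.49 s
print(f"{percent_diff(169.52, 178.49):.1f}%")
# prints "5.3%"
```

Every other result pair in this comparison differs by well under one percent.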
Darktable
Darktable is an open-source photography workflow application and raw developer. Learn more via the OpenBenchmarking.org test page.
Darktable 3.2.1 - Test: Server Rack - Acceleration: CPU-only (Seconds, fewer is better)
  Vet 2: 0.503 (SE +/- 0.001, N = 3)
  Vet 1: 0.491 (SE +/- 0.001, N = 3)
Hugin
Hugin is an open-source, cross-platform panorama photo stitcher. This test profile times how long it takes to run the assistant and panorama photo stitching on a set of images. Learn more via the OpenBenchmarking.org test page.
Hugin - Panorama Photo Assistant + Stitching Time (Seconds, fewer is better)
  Vet 2: 105.65 (SE +/- 1.14, N = 3)
  Vet 1: 105.81 (SE +/- 0.45, N = 3)
OCRMyPDF
OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable, searchable, and copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.
OCRMyPDF 10.3.1+dfsg - Processing 60 Page PDF Document (Seconds, fewer is better)
  Vet 2: 62.44 (SE +/- 0.34, N = 3)
  Vet 1: 62.37 (SE +/- 0.15, N = 3)
Vet 1: Testing initiated at 1 January 2021 16:23 by user phoronix.
Vet 2: Testing initiated at 2 January 2021 15:22 by user phoronix.