AMD EPYC 7F52 16-Core testing with a Supermicro H11DSi-NT v2.00 (2.1 BIOS) and llvmpipe on Ubuntu 20.04 via the Phoronix Test Suite.
EPYC 7F52 Processor: AMD EPYC 7F52 16-Core @ 3.50GHz (16 Cores / 32 Threads), Motherboard: Supermicro H11DSi-NT v2.00 (2.1 BIOS), Chipset: AMD Starship/Matisse, Memory: 64GB, Disk: 280GB INTEL SSDPE21D280GA, Graphics: llvmpipe, Monitor: VE228, Network: 2 x Intel 10G X550T
OS: Ubuntu 20.04, Kernel: 5.8.0-050800rc6daily20200721-generic (x86_64) 20200720, Desktop: GNOME Shell 3.36.1, Display Server: X Server 1.20.8, Display Driver: modesetting 1.20.8, OpenGL: 3.3 Mesa 20.0.4 (LLVM 9.0.1 128 bits), Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1080
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq ondemand (Boost: Enabled) - CPU Microcode: 0x8301034
Java Notes: OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-3ubuntu1)
Python Notes: Python 2.7.18rc1 + Python 3.8.2
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Linux 5.10.3 OS: Ubuntu 20.04, Kernel: 5.10.3-051003-generic (x86_64), Desktop: GNOME Shell 3.36.1, Display Server: X Server 1.20.8, Display Driver: modesetting 1.20.8, OpenGL: 3.3 Mesa 20.0.4 (LLVM 9.0.1 128 bits), Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1080
LZ4 Compression 1.9.3 (MB/s, more is better):
Compression Level: 1 - Decompression Speed: Linux 5.10.3: 11490.6 (SE +/- 30.69, N = 3); EPYC 7F52: 11455.8 (SE +/- 42.92, N = 3)
Compression Level: 3 - Decompression Speed: Linux 5.10.3: 10815.3 (SE +/- 25.66, N = 3); EPYC 7F52: 10768.2 (SE +/- 18.07, N = 8)
Compression Level: 9 - Decompression Speed: EPYC 7F52: 10898.4 (SE +/- 40.67, N = 3); Linux 5.10.3: 10851.6 (SE +/- 4.82, N = 3)
All LZ4 tests: (CC) gcc options: -O3
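The compression levels exercised above map onto the lz4 command-line tool's numeric level flags, and lz4's built-in benchmark mode (`-b#`) is what reports compression and decompression speed in MB/s. A minimal sketch of the invocation (the file name is a placeholder):

```python
def lz4_cmd(src, level=1, benchmark=True):
    """Build an lz4 invocation; -b<level> runs lz4's in-memory benchmark
    mode, which prints compression and decompression speed in MB/s."""
    cmd = ["lz4"]
    cmd.append(f"-b{level}" if benchmark else f"-{level}")
    cmd.append(src)
    return cmd

print(lz4_cmd("silesia.tar", level=9))
```

Running the returned command via `subprocess.run(...)` requires lz4 to be installed.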
CLOMP CLOMP is the C version of the Livermore OpenMP benchmark developed to measure OpenMP overheads and other performance impacts due to threading in order to influence future system designs. This particular test profile configuration is currently set to look at the OpenMP static schedule speed-up across all available CPU cores using the recommended test configuration. Learn more via the OpenBenchmarking.org test page.
CLOMP 1.2 - Static OMP Speedup (Speedup, more is better): Linux 5.10.3: 50.1 (SE +/- 0.09, N = 3); EPYC 7F52: 50.1 (SE +/- 0.21, N = 3). (CC) gcc options: -fopenmp -O3 -lm
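CLOMP's headline number is a speedup ratio derived from timings of serial versus OpenMP static-schedule loops. As a generic illustration (not CLOMP's exact internal formula), speedup and parallel efficiency are computed from two wall-clock timings like so:

```python
def omp_speedup(t_serial, t_parallel, threads):
    """Return (speedup, parallel efficiency) from wall-clock timings:
    speedup = serial time / parallel time; efficiency normalizes by
    the thread count."""
    speedup = t_serial / t_parallel
    return speedup, speedup / threads

# Made-up timings purely for illustration.
s, e = omp_speedup(32.0, 1.0, 32)
print(s, e)  # 32.0 1.0
```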
Stress-NG 0.11.07 (Bogo Ops/s, more is better):
Test: NUMA: Linux 5.10.3: 416.60 (SE +/- 0.07, N = 3); EPYC 7F52: 409.25 (SE +/- 2.52, N = 3)
Test: MEMFD: Linux 5.10.3: 712.74 (SE +/- 0.29, N = 3); EPYC 7F52: 680.78 (SE +/- 0.22, N = 3)
Test: Atomic: EPYC 7F52: 512936.21 (SE +/- 436.80, N = 3); Linux 5.10.3: 510793.92 (SE +/- 203.22, N = 3)
Test: Crypto: EPYC 7F52: 4565.97 (SE +/- 0.84, N = 3); Linux 5.10.3: 4555.43 (SE +/- 5.74, N = 3)
Test: Malloc: EPYC 7F52: 332554816.83 (SE +/- 811855.41, N = 3); Linux 5.10.3: 332331122.53 (SE +/- 693009.74, N = 3)
Test: Forking: EPYC 7F52: 56181.28 (SE +/- 229.33, N = 3); Linux 5.10.3: 44312.12 (SE +/- 139.19, N = 3)
Test: SENDFILE: EPYC 7F52: 297122.81 (SE +/- 100.47, N = 3); Linux 5.10.3: 280154.74 (SE +/- 302.83, N = 3)
Test: CPU Cache: EPYC 7F52: 44.86 (SE +/- 1.52, N = 12); Linux 5.10.3: 44.52 (SE +/- 1.40, N = 15)
Test: CPU Stress: Linux 5.10.3: 6266.84 (SE +/- 5.69, N = 3); EPYC 7F52: 6244.33 (SE +/- 22.12, N = 3)
Test: Semaphores: EPYC 7F52: 2314681.13 (SE +/- 14921.24, N = 3); Linux 5.10.3: 2278162.65 (SE +/- 2645.51, N = 3)
Test: Matrix Math: EPYC 7F52: 77530.48 (SE +/- 117.38, N = 3); Linux 5.10.3: 76518.79 (SE +/- 608.49, N = 3)
Test: Vector Math: EPYC 7F52: 142981.97 (SE +/- 6.50, N = 3); Linux 5.10.3: 142907.67 (SE +/- 19.82, N = 3)
Test: Memory Copying: EPYC 7F52: 6435.73 (SE +/- 58.89, N = 3); Linux 5.10.3: 6274.43 (SE +/- 3.47, N = 3)
Test: Socket Activity: EPYC 7F52: 10784.40 (SE +/- 43.37, N = 3); Linux 5.10.3: 10348.91 (SE +/- 36.73, N = 3)
Test: Context Switching: EPYC 7F52: 8409881.77 (SE +/- 27679.79, N = 3); Linux 5.10.3: 8245888.97 (SE +/- 21287.84, N = 3)
Test: Glibc C String Functions: EPYC 7F52: 1144375.85 (SE +/- 2051.22, N = 3); Linux 5.10.3: 1143670.00 (SE +/- 2853.73, N = 3)
Test: Glibc Qsort Data Sorting: EPYC 7F52: 269.57 (SE +/- 0.99, N = 3); Linux 5.10.3: 268.94 (SE +/- 0.93, N = 3)
Test: System V Message Passing: EPYC 7F52: 10610267.69 (SE +/- 128008.77, N = 15); Linux 5.10.3: 8395010.73 (SE +/- 112749.98, N = 3)
All Stress-NG tests: (CC) gcc options: -O2 -std=gnu99 -lm -laio -lbsd -lcrypt -lrt -lz -ldl -lpthread -lc
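Each Stress-NG result corresponds to a single `--<stressor>` flag on the stress-ng CLI. A sketch of a run in the spirit of the NUMA test above (the stressor name and `--metrics-brief` are standard stress-ng options; the worker count and timeout here are illustrative choices, not the test profile's exact settings):

```python
def stress_ng_cmd(stressor, workers=0, timeout_s=60):
    """workers=0 tells stress-ng to spawn one worker per online CPU;
    --metrics-brief prints the bogo ops/s figures reported above."""
    return ["stress-ng", f"--{stressor}", str(workers),
            "--timeout", f"{timeout_s}s", "--metrics-brief"]

print(stress_ng_cmd("numa"))
```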
BRL-CAD BRL-CAD 7.30.8 is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
BRL-CAD 7.30.8 - VGR Performance Metric (more is better): EPYC 7F52: 245516; Linux 5.10.3: 242323. (CXX) g++ options: -std=c++11 -pipe -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -rdynamic -lSM -lICE -lXi -lGLU -lGL -lGLdispatch -lX11 -lXext -lXrender -lpthread -ldl -luuid -lm
Opus Codec Encoding Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus. Learn more via the OpenBenchmarking.org test page.
Opus Codec Encoding 1.3.1 - WAV To Opus Encode (Seconds, fewer is better): Linux 5.10.3: 7.978 (SE +/- 0.012, N = 5); EPYC 7F52: 7.980 (SE +/- 0.016, N = 5). (CXX) g++ options: -fvisibility=hidden -logg -lm
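The encode is driven by opusenc from opus-tools, which reads a WAV and writes an .opus file. A sketch (file names are placeholders; the `--bitrate` option is opusenc's standard kbit/s control and optional):

```python
def opusenc_cmd(wav_in, opus_out, bitrate_kbps=None):
    """opusenc reads RIFF/WAV input and writes Ogg Opus output;
    --bitrate sets the target bitrate in kbit/s when given."""
    cmd = ["opusenc"]
    if bitrate_kbps is not None:
        cmd += ["--bitrate", str(bitrate_kbps)]
    return cmd + [wav_in, opus_out]

print(opusenc_cmd("sample.wav", "sample.opus", bitrate_kbps=96))
```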
ASTC Encoder ASTC Encoder (astcenc) is for the Adaptive Scalable Texture Compression (ASTC) format commonly used with OpenGL, OpenGL ES, and Vulkan graphics APIs. This test profile performs a coding test covering both compression and decompression. Learn more via the OpenBenchmarking.org test page.
ASTC Encoder 2.0 (Seconds, fewer is better):
Preset: Fast: EPYC 7F52: 5.35 (SE +/- 0.01, N = 3); Linux 5.10.3: 5.35 (SE +/- 0.00, N = 3)
Preset: Medium: EPYC 7F52: 6.89 (SE +/- 0.01, N = 3); Linux 5.10.3: 6.91 (SE +/- 0.01, N = 3)
Preset: Thorough: EPYC 7F52: 13.79 (SE +/- 0.01, N = 3); Linux 5.10.3: 13.79 (SE +/- 0.01, N = 3)
Preset: Exhaustive: Linux 5.10.3: 108.81 (SE +/- 0.12, N = 3); EPYC 7F52: 108.83 (SE +/- 0.13, N = 3)
All ASTC Encoder tests: (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
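astcenc 2.x exposes the four presets benchmarked above directly on its command line (`-fast`, `-medium`, `-thorough`, `-exhaustive`). A hedged sketch; the block size and file names are illustrative, not this test profile's exact inputs:

```python
PRESETS = {"fast", "medium", "thorough", "exhaustive"}

def astcenc_cmd(src, dst, block="6x6", preset="medium"):
    """-cl selects LDR compression; the trailing -<preset> flag picks
    the speed/quality trade-off measured in the results above."""
    if preset not in PRESETS:
        raise ValueError(f"unknown preset: {preset}")
    return ["astcenc", "-cl", src, dst, block, f"-{preset}"]

print(astcenc_cmd("input.png", "out.astc", preset="thorough"))
```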
Hugin Hugin is an open-source, cross-platform panorama photo stitcher. This test profile times how long it takes to run the assistant and stitch a set of photos into a panorama. Learn more via the OpenBenchmarking.org test page.
Hugin - Panorama Photo Assistant + Stitching Time (Seconds, fewer is better): Linux 5.10.3: 50.68 (SE +/- 0.25, N = 3); EPYC 7F52: 50.70 (SE +/- 0.07, N = 3)
WebP Image Encode This is a test of Google's libwebp with the cwebp image encode utility, using a sample 6000x4000 pixel JPEG image as the input. Learn more via the OpenBenchmarking.org test page.
WebP Image Encode 1.1 (Encode Time - Seconds, fewer is better):
Encode Settings: Default: EPYC 7F52: 1.618 (SE +/- 0.001, N = 3); Linux 5.10.3: 1.618 (SE +/- 0.001, N = 3)
Encode Settings: Quality 100: Linux 5.10.3: 2.491 (SE +/- 0.000, N = 3); EPYC 7F52: 2.498 (SE +/- 0.001, N = 3)
Encode Settings: Quality 100, Lossless: EPYC 7F52: 17.50 (SE +/- 0.07, N = 3); Linux 5.10.3: 17.57 (SE +/- 0.08, N = 3)
Encode Settings: Quality 100, Highest Compression: Linux 5.10.3: 7.716 (SE +/- 0.006, N = 3); EPYC 7F52: 7.732 (SE +/- 0.007, N = 3)
Encode Settings: Quality 100, Lossless, Highest Compression: Linux 5.10.3: 36.28 (SE +/- 0.05, N = 3); EPYC 7F52: 36.31 (SE +/- 0.04, N = 3)
All WebP tests: (CC) gcc options: -fvisibility=hidden -O2 -pthread -lm -ljpeg -lpng16 -ltiff
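The encode settings above correspond to cwebp's `-q` (quality), `-lossless`, and `-m` (method, 0-6 with 6 slowest/best) flags. A sketch of how each configuration maps onto a cwebp invocation; file names are placeholders, and the exact flag combinations used by the test profile are an assumption:

```python
def cwebp_cmd(src, dst, quality=75, lossless=False, method=4):
    """-m 6 is cwebp's slowest, highest-compression method, matching the
    'Highest Compression' settings above; -lossless toggles lossless mode."""
    cmd = ["cwebp", "-q", str(quality), "-m", str(method)]
    if lossless:
        cmd.append("-lossless")
    return cmd + [src, "-o", dst]

print(cwebp_cmd("sample.jpg", "out.webp", quality=100, lossless=True, method=6))
```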
OCRMyPDF OCRMyPDF adds an optical character recognition (OCR) text layer to scanned PDF files, producing new PDFs with text that is selectable, searchable, and copy-paste capable. OCRMyPDF leverages the Tesseract OCR engine and is written in Python. Learn more via the OpenBenchmarking.org test page.
OCRMyPDF 9.6.0+dfsg - Processing 60 Page PDF Document (Seconds, fewer is better): EPYC 7F52: 19.52 (SE +/- 0.07, N = 3); Linux 5.10.3: 19.55 (SE +/- 0.04, N = 3)
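OCRMyPDF parallelizes OCR across pages, which is why this 60-page run is CPU-bound. A sketch of the CLI (`-l` and `--jobs` are real ocrmypdf options; the file names and job count here are illustrative):

```python
def ocrmypdf_cmd(pdf_in, pdf_out, jobs=None, language="eng"):
    """-l picks the Tesseract language pack; --jobs caps the number of
    parallel page workers when given."""
    cmd = ["ocrmypdf", "-l", language]
    if jobs:
        cmd += ["--jobs", str(jobs)]
    return cmd + [pdf_in, pdf_out]

print(ocrmypdf_cmd("scan.pdf", "scan-ocr.pdf", jobs=32))
```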
Darmstadt Automotive Parallel Heterogeneous Suite DAPHNE is the Darmstadt Automotive Parallel HeterogeNEous Benchmark Suite, with OpenCL / CUDA / OpenMP test cases for automotive workloads used to evaluate programming models in the context of autonomous driving capabilities. Learn more via the OpenBenchmarking.org test page.
Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute, more is better): EPYC 7F52: 977.65 (SE +/- 3.32, N = 3); Linux 5.10.3: 969.38 (SE +/- 6.61, N = 3). (CXX) g++ options: -O3 -std=c++11 -fopenmp
PlaidML (FPS, more is better):
FP16: No - Mode: Inference - Network: VGG19 - Device: CPU: Linux 5.10.3: 20.69 (SE +/- 0.10, N = 3); EPYC 7F52: 20.27 (SE +/- 0.05, N = 3)
FP16: No - Mode: Inference - Network: IMDB LSTM - Device: CPU: Linux 5.10.3: 666.79 (SE +/- 2.01, N = 3); EPYC 7F52: 665.88 (SE +/- 2.99, N = 3)
FP16: No - Mode: Inference - Network: Mobilenet - Device: CPU: EPYC 7F52: 14.53 (SE +/- 0.10, N = 3); Linux 5.10.3: 14.51 (SE +/- 0.09, N = 3)
FP16: No - Mode: Inference - Network: ResNet 50 - Device: CPU: EPYC 7F52: 6.14 (SE +/- 0.00, N = 3); Linux 5.10.3: 5.96 (SE +/- 0.08, N = 3)
FP16: No - Mode: Inference - Network: DenseNet 201 - Device: CPU: Linux 5.10.3: 3.21 (SE +/- 0.01, N = 3); EPYC 7F52: 3.19 (SE +/- 0.01, N = 3)
FP16: No - Mode: Inference - Network: Inception V3 - Device: CPU: EPYC 7F52: 10.39 (SE +/- 0.00, N = 3); Linux 5.10.3: 10.17 (SE +/- 0.04, N = 3)
FP16: No - Mode: Inference - Network: NASNet Large - Device: CPU: Linux 5.10.3: 1.04 (SE +/- 0.00, N = 3); EPYC 7F52: 1.04 (SE +/- 0.00, N = 3)
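These PlaidML numbers come from plaidbench's Keras frontend (`plaidbench keras <network>`). A hedged sketch of the invocation; the option placement and the exact arguments used by this test profile are assumptions based on plaidbench's documented CLI:

```python
def plaidbench_cmd(network, batch_size=1, fp16=False):
    """Build a 'plaidbench ... keras <network>' inference-benchmark
    command for the named Keras model on the configured PlaidML device."""
    cmd = ["plaidbench", "--batch-size", str(batch_size)]
    if fp16:
        cmd.append("--fp16")
    return cmd + ["keras", network]

print(plaidbench_cmd("mobilenet"))
```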
Numenta Anomaly Benchmark Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It comprises over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
Numenta Anomaly Benchmark 1.1 - Detector: EXPoSE (Seconds, fewer is better): EPYC 7F52: 756.88 (SE +/- 2.18, N = 3); Linux 5.10.3: 778.16 (SE +/- 0.49, N = 3)
DeepSpeech Mozilla DeepSpeech is a speech-to-text engine powered by TensorFlow for machine learning and derived from Baidu's Deep Speech research paper. This test profile times the speech-to-text process for a roughly three minute audio recording. Learn more via the OpenBenchmarking.org test page.
DeepSpeech 0.6 - Acceleration: CPU (Seconds, fewer is better): Linux 5.10.3: 68.17 (SE +/- 0.08, N = 3); EPYC 7F52: 68.29 (SE +/- 0.20, N = 3)
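The DeepSpeech 0.6 CLI transcribes a 16 kHz WAV against an exported `.pbmm` graph, with optional language-model files for decoding. A sketch (file names are placeholders; `--model`, `--audio`, `--lm`, and `--trie` are the 0.6-era flags):

```python
def deepspeech_cmd(model_pbmm, wav, lm=None, trie=None):
    """Build a DeepSpeech 0.6 transcription command; the language-model
    pair (--lm/--trie) is optional and only added when both are given."""
    cmd = ["deepspeech", "--model", model_pbmm, "--audio", wav]
    if lm and trie:
        cmd += ["--lm", lm, "--trie", trie]
    return cmd

print(deepspeech_cmd("output_graph.pbmm", "speech.wav"))
```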
RNNoise RNNoise is a recurrent neural network for audio noise reduction developed by Mozilla and Xiph.Org. This test profile is a single-threaded test measuring the time to denoise a sample 26 minute long 16-bit RAW audio file using this recurrent neural network noise suppression library. Learn more via the OpenBenchmarking.org test page.
RNNoise 2020-06-28 (Seconds, fewer is better): Linux 5.10.3: 20.10 (SE +/- 0.02, N = 3); EPYC 7F52: 20.13 (SE +/- 0.00, N = 3). (CC) gcc options: -O2 -pedantic -fvisibility=hidden
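The RNNoise source tree ships a small demo program that denoises raw 48 kHz 16-bit mono PCM, and a run of that kind is what this test times. A hedged sketch; the binary path and file names are placeholders rather than the test profile's exact invocation:

```python
def rnnoise_demo_cmd(raw_in, raw_out, demo="./examples/rnnoise_demo"):
    """rnnoise_demo <noisy.raw> <denoised.raw>; input must be raw
    48 kHz, 16-bit, mono PCM (no WAV header)."""
    return [demo, raw_in, raw_out]

print(rnnoise_demo_cmd("noisy.raw", "clean.raw"))
```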
Mobile Neural Network MNN is the Mobile Neural Network, a highly efficient, lightweight deep learning framework developed by Alibaba. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 2020-09-17 (ms, fewer is better):
Model: SqueezeNetV1.0: Linux 5.10.3: 10.29 (SE +/- 0.11, N = 15; MIN 9.72 / MAX 23.37); EPYC 7F52: 10.93 (SE +/- 0.24, N = 15; MIN 9.63 / MAX 23.96)
Model: resnet-v2-50: Linux 5.10.3: 33.96 (SE +/- 0.04, N = 15; MIN 32.06 / MAX 51.84); EPYC 7F52: 34.55 (SE +/- 0.05, N = 15; MIN 32.75 / MAX 67.95)
Model: MobileNetV2_224: Linux 5.10.3: 6.126 (SE +/- 0.012, N = 15; MIN 5.97 / MAX 20.61); EPYC 7F52: 6.208 (SE +/- 0.012, N = 15; MIN 6.01 / MAX 21)
Model: mobilenet-v1-1.0: Linux 5.10.3: 6.551 (SE +/- 0.007, N = 15; MIN 6.45 / MAX 22.13); EPYC 7F52: 6.575 (SE +/- 0.012, N = 15; MIN 6.41 / MAX 20.06)
Model: inception-v3: Linux 5.10.3: 32.96 (SE +/- 0.18, N = 15; MIN 31.71 / MAX 49.44); EPYC 7F52: 33.53 (SE +/- 0.23, N = 15; MIN 31.39 / MAX 50.33)
All MNN tests: (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
TNN TNN is an open-source deep learning inference framework developed by Tencent. Learn more via the OpenBenchmarking.org test page.
TNN 0.2.3 (ms, fewer is better):
Target: CPU - Model: MobileNet v2: EPYC 7F52: 274.97 (SE +/- 0.53, N = 3; MIN 272.73 / MAX 289.81); Linux 5.10.3: 275.51 (SE +/- 0.42, N = 3; MIN 272.98 / MAX 294.91)
Target: CPU - Model: SqueezeNet v1.1: EPYC 7F52: 263.04 (SE +/- 0.77, N = 3; MIN 260.98 / MAX 265.86); Linux 5.10.3: 264.36 (SE +/- 0.25, N = 3; MIN 261.25 / MAX 266.06)
All TNN tests: (CXX) g++ options: -fopenmp -pthread -fvisibility=hidden -O3 -rdynamic -ldl
Caffe This is a benchmark of the Caffe deep learning framework; it currently supports the AlexNet and GoogleNet models with execution on both CPUs and NVIDIA GPUs. Learn more via the OpenBenchmarking.org test page.
Caffe 2020-02-13 (Milli-Seconds, fewer is better):
Model: AlexNet - Acceleration: CPU - Iterations: 100: EPYC 7F52: 71667 (SE +/- 136.23, N = 3); Linux 5.10.3: 72605 (SE +/- 793.90, N = 3)
Model: AlexNet - Acceleration: CPU - Iterations: 200: EPYC 7F52: 143622 (SE +/- 359.19, N = 3); Linux 5.10.3: 144039 (SE +/- 381.87, N = 3)
Model: GoogleNet - Acceleration: CPU - Iterations: 100: Linux 5.10.3: 181008 (SE +/- 172.36, N = 3); EPYC 7F52: 181652 (SE +/- 222.17, N = 3)
Model: GoogleNet - Acceleration: CPU - Iterations: 200: Linux 5.10.3: 362841 (SE +/- 312.30, N = 3); EPYC 7F52: 363998 (SE +/- 75.84, N = 3)
All Caffe tests: (CXX) g++ options: -fPIC -O3 -rdynamic -lglog -lgflags -lprotobuf -lpthread -lsz -lz -ldl -lm -llmdb -lopenblas
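Caffe's built-in timing mode benchmarks a network description for a fixed number of iterations, which matches how these results are reported. A sketch (`caffe time --model= --iterations=` is the standard Caffe CLI; the prototxt path is a placeholder):

```python
def caffe_time_cmd(prototxt, iterations=100):
    """'caffe time' runs forward/backward passes over the network in the
    given .prototxt for a fixed iteration count and reports timings."""
    return ["caffe", "time", f"--model={prototxt}",
            f"--iterations={iterations}"]

print(caffe_time_cmd("models/bvlc_alexnet/deploy.prototxt", 200))
```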
NCNN NCNN is a high-performance neural network inference framework, developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
NCNN 20201218 (ms, fewer is better):
Target: CPU - Model: mobilenet: EPYC 7F52: 19.27 (SE +/- 0.15, N = 15; MIN 17.82 / MAX 79.15); Linux 5.10.3: 19.55 (SE +/- 0.27, N = 12; MIN 17.94 / MAX 34.33)
Target: CPU-v2-v2 - Model: mobilenet-v2: Linux 5.10.3: 8.44 (SE +/- 0.07, N = 12; MIN 6.92 / MAX 12.58); EPYC 7F52: 8.49 (SE +/- 0.04, N = 15; MIN 7.06 / MAX 72.31)
Target: CPU-v3-v3 - Model: mobilenet-v3: EPYC 7F52: 7.68 (SE +/- 0.02, N = 15; MIN 7.21 / MAX 12.54); Linux 5.10.3: 7.73 (SE +/- 0.02, N = 12; MIN 7.28 / MAX 11.83)
Target: CPU - Model: shufflenet-v2: Linux 5.10.3: 8.97 (SE +/- 0.02, N = 12; MIN 8.55 / MAX 22.64); EPYC 7F52: 8.98 (SE +/- 0.02, N = 15; MIN 8.73 / MAX 14.04)
Target: CPU - Model: mnasnet: EPYC 7F52: 7.60 (SE +/- 0.02, N = 15; MIN 6.99 / MAX 10.58); Linux 5.10.3: 7.60 (SE +/- 0.02, N = 12; MIN 7.34 / MAX 8.93)
Target: CPU - Model: efficientnet-b0: EPYC 7F52: 11.06 (SE +/- 0.03, N = 15; MIN 10.67 / MAX 13.4); Linux 5.10.3: 11.14 (SE +/- 0.03, N = 12; MIN 10.78 / MAX 14.68)
Target: CPU - Model: blazeface: Linux 5.10.3: 3.67 (SE +/- 0.02, N = 12; MIN 3.53 / MAX 4.35); EPYC 7F52: 3.69 (SE +/- 0.02, N = 15; MIN 3.52 / MAX 75.15)
Target: CPU - Model: googlenet: EPYC 7F52: 17.65 (SE +/- 0.06, N = 15; MIN 17.22 / MAX 117.52); Linux 5.10.3: 17.70 (SE +/- 0.14, N = 12; MIN 17.12 / MAX 260.94)
Target: CPU - Model: vgg16: Linux 5.10.3: 30.02 (SE +/- 0.04, N = 12; MIN 29.27 / MAX 43.79); EPYC 7F52: 30.17 (SE +/- 0.03, N = 15; MIN 29.55 / MAX 90.42)
Target: CPU - Model: resnet18: EPYC 7F52: 10.69 (SE +/- 0.03, N = 15; MIN 10.34 / MAX 13.84); Linux 5.10.3: 10.71 (SE +/- 0.04, N = 12; MIN 10.34 / MAX 64.22)
Target: CPU - Model: alexnet: Linux 5.10.3: 7.01 (SE +/- 0.09, N = 12; MIN 6.57 / MAX 10.41); EPYC 7F52: 7.03 (SE +/- 0.08, N = 15; MIN 6.6 / MAX 43.31)
Target: CPU - Model: resnet50: Linux 5.10.3: 20.94 (SE +/- 0.04, N = 12; MIN 20.35 / MAX 23.55); EPYC 7F52: 21.34 (SE +/- 0.05, N = 15; MIN 20.69 / MAX 102.24)
Target: CPU - Model: yolov4-tiny: Linux 5.10.3: 25.84 (SE +/- 0.21, N = 12; MIN 24.84 / MAX 30.66); EPYC 7F52: 25.94 (SE +/- 0.13, N = 15; MIN 25.13 / MAX 86.32)
Target: CPU - Model: squeezenet_ssd: Linux 5.10.3: 21.06 (SE +/- 0.23, N = 12; MIN 19.62 / MAX 77.67); EPYC 7F52: 21.89 (SE +/- 0.04, N = 15; MIN 21.44 / MAX 101.39)
Target: CPU - Model: regnety_400m: EPYC 7F52: 44.51 (SE +/- 0.18, N = 15; MIN 42.64 / MAX 117.01); Linux 5.10.3: 44.79 (SE +/- 0.14, N = 12; MIN 43.38 / MAX 124.54)
All NCNN tests: (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
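NCNN's model timings come from its bundled benchncnn tool, which takes positional arguments rather than flags. A sketch (the argument order follows NCNN's documented `benchncnn [loop count] [num threads] [powersave] [gpu device]` convention; the values shown are illustrative):

```python
def benchncnn_cmd(loops=8, threads=32, powersave=0, gpu=-1):
    """benchncnn positional args: loop count, thread count, powersave
    mode, and GPU device index (-1 = CPU-only, as in these results)."""
    return ["./benchncnn", str(loops), str(threads), str(powersave), str(gpu)]

print(benchncnn_cmd())
```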
Mlpack Benchmark - Benchmark: scikit_linearridgeregression (Seconds, fewer is better): Linux 5.10.3: 1.72 (SE +/- 0.01, N = 3); EPYC 7F52: 1.73 (SE +/- 0.02, N = 4)
NAMD NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Learn more via the OpenBenchmarking.org test page.
NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, fewer is better): EPYC 7F52: 1.14226 (SE +/- 0.00082, N = 3); Linux 5.10.3: 1.14801 (SE +/- 0.00649, N = 3)
oneDNN This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI initiative. Learn more via the OpenBenchmarking.org test page.
oneDNN 2.0 (ms, fewer is better):
Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU: EPYC 7F52: 2.00519 (SE +/- 0.01148, N = 3; MIN 1.87); Linux 5.10.3: 2.40810 (SE +/- 0.02630, N = 5; MIN 2.25)
Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU: EPYC 7F52: 2.36752 (SE +/- 0.01483, N = 3; MIN 2.3); Linux 5.10.3: 3.14201 (SE +/- 0.00548, N = 3; MIN 3.1)
Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU: EPYC 7F52: 1.51115 (SE +/- 0.00269, N = 3; MIN 1.48); Linux 5.10.3: 1.52220 (SE +/- 0.00375, N = 3; MIN 1.49)
Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU: EPYC 7F52: 0.772058 (SE +/- 0.010953, N = 3; MIN 0.72); Linux 5.10.3: 1.335930 (SE +/- 0.004643, N = 3; MIN 1.28)
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU: EPYC 7F52: 3.29403 (SE +/- 0.01620, N = 3; MIN 3.12); Linux 5.10.3: 4.78691 (SE +/- 0.02803, N = 3; MIN 4.62)
Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU: EPYC 7F52: 2.76321 (SE +/- 0.01184, N = 3; MIN 2.65); Linux 5.10.3: 3.25673 (SE +/- 0.04703, N = 15; MIN 2.89)
All oneDNN tests: (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread
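Each oneDNN harness name corresponds to a benchdnn driver (`--ip`, `--conv`, `--deconv`, `--rnn`). A hedged sketch of the invocation; `--mode=P` (performance timing) and `--cfg` (data type) are benchdnn options, while the batch-file path is hypothetical and not this test profile's exact input:

```python
def benchdnn_cmd(driver, cfg="f32", batch_file=None):
    """--mode=P asks benchdnn for performance timings; --cfg selects the
    data type (f32 or u8s8f32 in the results above)."""
    cmd = ["./benchdnn", f"--{driver}", "--mode=P", f"--cfg={cfg}"]
    if batch_file:
        cmd.append(f"--batch={batch_file}")  # hypothetical problem list
    return cmd

print(benchdnn_cmd("ip", cfg="u8s8f32"))
```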
Open Porous Media This is a test of Open Porous Media, a set of open-source tools for the simulation of flow and transport of fluids in porous media. This test profile depends upon MPI/Flow already being installed on the system. Install instructions at https://opm-project.org/?page_id=36. Learn more via the OpenBenchmarking.org test page.
Open Porous Media - OPM Benchmark: Flow MPI Norne (Seconds, fewer is better):
Threads: 1: Linux 5.10.3: 364.94 (SE +/- 0.73, N = 3); EPYC 7F52: 365.37 (SE +/- 1.92, N = 3)
Threads: 2: EPYC 7F52: 212.22 (SE +/- 0.16, N = 3); Linux 5.10.3: 212.31 (SE +/- 0.34, N = 3)
Threads: 4: Linux 5.10.3: 166.54 (SE +/- 0.30, N = 3); EPYC 7F52: 168.76 (SE +/- 0.45, N = 3)
Threads: 8: Linux 5.10.3: 208.91 (SE +/- 0.17, N = 3); EPYC 7F52: 217.41 (SE +/- 0.40, N = 3)
Threads: 16: Linux 5.10.3: 348.11 (SE +/- 0.13, N = 3); EPYC 7F52: 361.92 (SE +/- 0.72, N = 3)
All OPM tests: flow 2020.04
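OPM's simulator is driven as `flow <CASE>.DATA`. A hedged sketch: the deck name below is the Norne case from opm-data, and mapping the "Threads" axis to MPI ranks via mpirun is an assumption about how this test profile scales the run:

```python
def flow_cmd(deck, mpi_ranks=1):
    """Build an OPM Flow run; for >1 rank the command is wrapped in a
    standard 'mpirun -np N' launcher (assumed parallelization scheme)."""
    cmd = ["flow", deck]
    if mpi_ranks > 1:
        cmd = ["mpirun", "-np", str(mpi_ranks)] + cmd
    return cmd

print(flow_cmd("NORNE_ATW2013.DATA", mpi_ranks=8))
```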
oneDNN 2.0 - Engine: CPU (ms, fewer is better; N = 3 runs unless noted; 1. (CXX) g++ options: -O3 -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread):
  Deconvolution Batch shapes_3d - f32: EPYC 7F52: 4.03801 (SE +/- 0.05484, MIN: 3.84); Linux 5.10.3: 4.73706 (SE +/- 0.04867, N = 15, MIN: 4.42)
  Convolution Batch Shapes Auto - u8s8f32: EPYC 7F52: 5.55682 (SE +/- 0.07217, MIN: 5.13); Linux 5.10.3: 6.22877 (SE +/- 0.01174, MIN: 6.14)
  Deconvolution Batch shapes_1d - u8s8f32: EPYC 7F52: 5.52280 (SE +/- 0.02675, MIN: 5.32); Linux 5.10.3: 5.52758 (SE +/- 0.01913, MIN: 5.38)
  Deconvolution Batch shapes_3d - u8s8f32: EPYC 7F52: 2.86890 (SE +/- 0.00136, MIN: 2.83); Linux 5.10.3: 2.97264 (SE +/- 0.00487, MIN: 2.93)
  Recurrent Neural Network Training - f32: EPYC 7F52: 2006.80 (SE +/- 1.98, MIN: 1996.47); Linux 5.10.3: 2220.50 (SE +/- 10.97, MIN: 2191.85)
  Recurrent Neural Network Inference - f32: EPYC 7F52: 1068.48 (SE +/- 1.05, MIN: 1062.66); Linux 5.10.3: 1169.57 (SE +/- 1.69, MIN: 1161.34)
  Recurrent Neural Network Training - u8s8f32: EPYC 7F52: 1992.60 (SE +/- 6.20, MIN: 1974.03); Linux 5.10.3: 2211.73 (SE +/- 4.81, MIN: 2193.8)
  Recurrent Neural Network Inference - u8s8f32: EPYC 7F52: 1057.42 (SE +/- 2.50, MIN: 1047.72); Linux 5.10.3: 1148.93 (SE +/- 11.57, MIN: 1133.53)
  Matrix Multiply Batch Shapes Transformer - f32: EPYC 7F52: 0.675915 (SE +/- 0.003167, MIN: 0.64); Linux 5.10.3: 0.937089 (SE +/- 0.009336, MIN: 0.89)
  Recurrent Neural Network Training - bf16bf16bf16: EPYC 7F52: 1994.62 (SE +/- 7.07, MIN: 1976.29); Linux 5.10.3: 2192.82 (SE +/- 9.42, MIN: 2169.84)
  Recurrent Neural Network Inference - bf16bf16bf16: EPYC 7F52: 1069.39 (SE +/- 1.57, MIN: 1062.1); Linux 5.10.3: 1162.78 (SE +/- 9.31, MIN: 1139.6)
  Matrix Multiply Batch Shapes Transformer - u8s8f32: Linux 5.10.3: 1.82157 (SE +/- 0.00255, MIN: 1.78); EPYC 7F52: 1.83844 (SE +/- 0.00177, MIN: 1.81)
OpenVINO This is a test of Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
OpenVINO 2021.1 - Device: CPU (throughput in FPS, more is better; latency in ms, fewer is better; N = 3 runs; 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -pie -pthread -lpthread):
  Face Detection 0106 FP16 - throughput: EPYC 7F52: 4.02 (SE +/- 0.00); Linux 5.10.3: 4.01 (SE +/- 0.01)
  Face Detection 0106 FP16 - latency: EPYC 7F52: 1988.75 (SE +/- 1.96); Linux 5.10.3: 1988.90 (SE +/- 2.02)
  Face Detection 0106 FP32 - throughput: Linux 5.10.3: 4.01 (SE +/- 0.00); EPYC 7F52: 4.01 (SE +/- 0.01)
  Face Detection 0106 FP32 - latency: EPYC 7F52: 1986.91 (SE +/- 2.66); Linux 5.10.3: 1989.91 (SE +/- 2.14)
  Person Detection 0106 FP16 - throughput: Linux 5.10.3: 3.07 (SE +/- 0.01); EPYC 7F52: 3.06 (SE +/- 0.00)
  Person Detection 0106 FP16 - latency: EPYC 7F52: 2582.88 (SE +/- 1.99); Linux 5.10.3: 2590.33 (SE +/- 3.43)
  Person Detection 0106 FP32 - throughput: EPYC 7F52: 3.04 (SE +/- 0.02); Linux 5.10.3: 3.03 (SE +/- 0.01)
  Person Detection 0106 FP32 - latency: EPYC 7F52: 2600.35 (SE +/- 3.48); Linux 5.10.3: 2605.51 (SE +/- 2.25)
  Age Gender Recognition Retail 0013 FP16 - throughput: EPYC 7F52: 9974.70 (SE +/- 6.06); Linux 5.10.3: 9966.93 (SE +/- 5.41)
  Age Gender Recognition Retail 0013 FP16 - latency: EPYC 7F52: 0.78 (SE +/- 0.00); Linux 5.10.3: 0.78 (SE +/- 0.00)
  Age Gender Recognition Retail 0013 FP32 - throughput: Linux 5.10.3: 9953.07 (SE +/- 5.75); EPYC 7F52: 9935.55 (SE +/- 16.24)
  Age Gender Recognition Retail 0013 FP32 - latency: Linux 5.10.3: 0.78 (SE +/- 0.00); EPYC 7F52: 0.79 (SE +/- 0.00)
FFTE FFTE is a package by Daisuke Takahashi to compute Discrete Fourier Transforms of 1-, 2-, and 3-dimensional sequences of length (2^p)*(3^q)*(5^r). Learn more via the OpenBenchmarking.org test page.
FFTE 7.0 - N=256, 3D Complex FFT Routine (MFLOPS, more is better; N = 3 runs; 1. (F9X) gfortran options: -O3 -fomit-frame-pointer -fopenmp): Linux 5.10.3: 100233.06 (SE +/- 226.53); EPYC 7F52: 99888.44 (SE +/- 100.09)
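FFTE only supports transform lengths of the form (2^p)*(3^q)*(5^r), i.e. 5-smooth numbers. A small illustrative helper (not part of FFTE itself) for checking whether a given length qualifies:

```python
def is_ffte_length(n):
    """True if n factors as (2**p)*(3**q)*(5**r), the lengths FFTE supports."""
    if n < 1:
        return False
    # repeatedly divide out the allowed prime factors
    for f in (2, 3, 5):
        while n % f == 0:
            n //= f
    return n == 1  # anything left over means a disallowed prime factor

print(is_ffte_length(256))  # True: 2**8, the length used in this benchmark
print(is_ffte_length(60))   # True: 2**2 * 3 * 5
print(is_ffte_length(7))    # False: 7 is not an allowed factor
```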
Monte Carlo Simulations of Ionised Nebulae MOCASSIN (Monte Carlo Simulations of Ionised Nebulae) is a fully 3D or 2D photoionisation and dust radiative transfer code which employs a Monte Carlo approach to the transfer of radiation through media of arbitrary geometry and density distribution. Learn more via the OpenBenchmarking.org test page.
Monte Carlo Simulations of Ionised Nebulae 2019-03-24 - Input: Dust 2D tau100.0 (Seconds, fewer is better; 1. (F9X) gfortran options: -cpp -Jsource/ -ffree-line-length-0 -lm -std=legacy -O3 -O2 -pthread -lmpi_usempif08 -lmpi_mpifh -lmpi): EPYC 7F52: 192; Linux 5.10.3: 192
GPAW GPAW is a density-functional theory (DFT) Python code based on the projector-augmented wave (PAW) method and the atomic simulation environment (ASE). Learn more via the OpenBenchmarking.org test page.
GPAW 20.1 - Input: Carbon Nanotube (Seconds, fewer is better; 1. (CC) gcc options: -pthread -shared -fwrapv -O2 -lxc -lblas -lmpi): Linux 5.10.3: 114.13 (SE +/- 0.05, N = 3); EPYC 7F52: 117.38 (SE +/- 1.45, N = 4)
Sunflow Rendering System This test benchmarks the Sunflow Rendering System, an open-source render engine for photo-realistic image synthesis with a ray-tracing core. Learn more via the OpenBenchmarking.org test page.
Sunflow Rendering System 0.07.2 - Global Illumination + Image Synthesis (Seconds, fewer is better): Linux 5.10.3: 0.818 (SE +/- 0.013, N = 15, MIN: 0.58 / MAX: 1.49); EPYC 7F52: 0.820 (SE +/- 0.008, N = 3, MIN: 0.56 / MAX: 1.43)
WireGuard + Linux Networking Stack Stress Test This is a stress test of the WireGuard secure VPN tunnel and the Linux networking stack. The test runs on the local host but requires root permissions. It creates three network namespaces: ns0 has a loopback device, while ns1 and ns2 each have WireGuard devices, and those two WireGuard devices send traffic through the loopback device of ns0. The end result is that the test exercises encryption and decryption at the same time -- a rather CPU- and scheduler-heavy workload. Learn more via the OpenBenchmarking.org test page.
WireGuard + Linux Networking Stack Stress Test (Seconds, fewer is better; N = 3 runs): EPYC 7F52: 293.87 (SE +/- 0.38); Linux 5.10.3: 301.90 (SE +/- 1.23)
Stockfish This is a test of Stockfish, an advanced C++11 chess engine benchmark that can scale up to 128 CPU cores. Learn more via the OpenBenchmarking.org test page.
Stockfish 12 - Total Time (Nodes Per Second, more is better; N = 3 runs; 1. (CXX) g++ options: -m64 -lpthread -fno-exceptions -std=c++17 -pedantic -O3 -msse -msse3 -mpopcnt -msse4.1 -mssse3 -msse2 -flto -flto=jobserver): EPYC 7F52: 36388251 (SE +/- 300939.62); Linux 5.10.3: 36383816 (SE +/- 178225.83)
John The Ripper 1.9.0-jumbo-1 - Test: MD5 (Real C/S, more is better; N = 3 runs; 1. (CC) gcc options: -m64 -lssl -lcrypto -fopenmp -lgmp -pthread -lm -lz -ldl -lcrypt -lbz2): Linux 5.10.3: 1728667 (SE +/- 3179.80); EPYC 7F52: 1726333 (SE +/- 2962.73)
Kvazaar This is a test of Kvazaar, a CPU-based H.265 video encoder written in the C programming language and optimized in assembly. Kvazaar was the winner of the 2016 ACM Open-Source Software Competition and is developed by the Ultra Video Group at Tampere University, Finland. Learn more via the OpenBenchmarking.org test page.
Kvazaar 2.0 (Frames Per Second, more is better; N = 3 runs; 1. (CC) gcc options: -pthread -ftree-vectorize -fvisibility=hidden -O2 -lpthread -lm -lrt):
  Bosphorus 4K - Slow: Linux 5.10.3: 10.10 (SE +/- 0.01); EPYC 7F52: 10.06 (SE +/- 0.01)
  Bosphorus 4K - Medium: Linux 5.10.3: 10.31 (SE +/- 0.01); EPYC 7F52: 10.24 (SE +/- 0.01)
  Bosphorus 4K - Very Fast: Linux 5.10.3: 24.56 (SE +/- 0.02); EPYC 7F52: 24.44 (SE +/- 0.03)
  Bosphorus 4K - Ultra Fast: Linux 5.10.3: 41.27 (SE +/- 0.05); EPYC 7F52: 40.41 (SE +/- 0.06)
  Bosphorus 1080p - Slow: Linux 5.10.3: 35.35 (SE +/- 0.02); EPYC 7F52: 35.05 (SE +/- 0.02)
  Bosphorus 1080p - Medium: Linux 5.10.3: 36.27 (SE +/- 0.15); EPYC 7F52: 35.97 (SE +/- 0.02)
  Bosphorus 1080p - Very Fast: Linux 5.10.3: 71.05 (SE +/- 0.30); EPYC 7F52: 68.39 (SE +/- 0.10)
  Bosphorus 1080p - Ultra Fast: Linux 5.10.3: 110.36 (SE +/- 0.45); EPYC 7F52: 105.12 (SE +/- 0.58)
AOM AV1 2.0 (Frames Per Second, more is better; N = 3 runs; 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm -lpthread):
  Encoder Mode: Speed 4 Two-Pass: EPYC 7F52: 2.42 (SE +/- 0.00); Linux 5.10.3: 2.41 (SE +/- 0.00)
  Encoder Mode: Speed 6 Realtime: Linux 5.10.3: 19.40 (SE +/- 0.08); EPYC 7F52: 19.16 (SE +/- 0.09)
  Encoder Mode: Speed 6 Two-Pass: Linux 5.10.3: 3.75 (SE +/- 0.01); EPYC 7F52: 3.74 (SE +/- 0.01)
  Encoder Mode: Speed 8 Realtime: EPYC 7F52: 34.12 (SE +/- 0.23); Linux 5.10.3: 33.98 (SE +/- 0.06)
VP9 libvpx Encoding 1.8.2 - Speed: Speed 5 (Frames Per Second, more is better; N = 3 runs; 1. (CXX) g++ options: -m64 -lm -lpthread -O3 -fPIC -U_FORTIFY_SOURCE -std=c++11): Linux 5.10.3: 23.40 (SE +/- 0.10); EPYC 7F52: 23.08 (SE +/- 0.05)
GraphicsMagick This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
GraphicsMagick 1.3.33 (Iterations Per Minute, more is better; N = 3 runs; 1. (CC) gcc options: -fopenmp -O2 -pthread -ljbig -lwebp -lwebpmux -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread):
  Swirl: EPYC 7F52: 896 (SE +/- 1.20); Linux 5.10.3: 895 (SE +/- 0.33)
  Rotate: EPYC 7F52: 619 (SE +/- 5.81); Linux 5.10.3: 614 (SE +/- 4.04)
  Sharpen: Linux 5.10.3: 235 (SE +/- 0.33); EPYC 7F52: 235 (SE +/- 0.33)
  Enhanced: Linux 5.10.3: 374 (SE +/- 0.33); EPYC 7F52: 374 (SE +/- 0.33)
  Resizing: EPYC 7F52: 1597 (SE +/- 18.67); Linux 5.10.3: 1591 (SE +/- 9.84)
  Noise-Gaussian: Linux 5.10.3: 428 (SE +/- 0.33); EPYC 7F52: 419 (SE +/- 0.33)
  HWB Color Space: Linux 5.10.3: 1253 (SE +/- 1.86); EPYC 7F52: 1171 (SE +/- 1.33)
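The HWB Color Space result is the largest kernel-to-kernel gap in the GraphicsMagick group. An illustrative helper (not part of the test suite) that makes the relative difference explicit:

```python
def relative_change(new, old):
    """Signed percent change of `new` relative to a baseline `old`."""
    return (new - old) / old * 100.0

# HWB Color Space: Linux 5.10.3 scored 1253 vs 1171 on the EPYC 7F52 run
print(round(relative_change(1253, 1171), 1))  # 7.0 (percent faster)
```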
SVT-VP9 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-VP9 CPU-based multi-threaded video encoder for the VP9 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-VP9 0.1 - Input: Bosphorus 1080p (Frames Per Second, more is better; N = 3 runs; 1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm):
  Tuning: VMAF Optimized: Linux 5.10.3: 255.72 (SE +/- 2.05); EPYC 7F52: 248.38 (SE +/- 0.71)
  Tuning: PSNR/SSIM Optimized: Linux 5.10.3: 264.01 (SE +/- 0.75); EPYC 7F52: 252.18 (SE +/- 0.99)
  Tuning: Visual Quality Optimized: Linux 5.10.3: 212.07 (SE +/- 0.66); EPYC 7F52: 203.98 (SE +/- 1.03)
x264 This is a simple test of the x264 encoder run on the CPU (OpenCL support disabled) with a sample video file. Learn more via the OpenBenchmarking.org test page.
x264 2019-12-17 - H.264 Video Encoding (Frames Per Second, more is better; N = 3 runs; 1. (CC) gcc options: -ldl -lavformat -lavcodec -lavutil -lswscale -m64 -lm -lpthread -O3 -ffast-math -std=gnu99 -fPIC -fomit-frame-pointer -fno-tree-vectorize): Linux 5.10.3: 163.65 (SE +/- 0.78); EPYC 7F52: 162.77 (SE +/- 1.00)
dav1d Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.
dav1d 0.7.0 (FPS, more is better; N = 3 runs; 1. (CC) gcc options: -pthread):
  Chimera 1080p: Linux 5.10.3: 581.26 (SE +/- 1.10, MIN: 460.79 / MAX: 716.22); EPYC 7F52: 574.78 (SE +/- 1.17, MIN: 454.24 / MAX: 710.14)
  Summer Nature 4K: EPYC 7F52: 227.67 (SE +/- 0.89, MIN: 160.75 / MAX: 250.13); Linux 5.10.3: 227.34 (SE +/- 0.24, MIN: 166.45 / MAX: 246.55)
  Summer Nature 1080p: Linux 5.10.3: 541.83 (SE +/- 2.05, MIN: 374.84 / MAX: 590.34); EPYC 7F52: 533.80 (SE +/- 1.44, MIN: 341.27 / MAX: 581.44)
  Chimera 1080p 10-bit: Linux 5.10.3: 111.44 (SE +/- 0.07, MIN: 74.8 / MAX: 220.43); EPYC 7F52: 110.64 (SE +/- 0.05, MIN: 74.39 / MAX: 217.07)
SVT-AV1 This is a test of the Intel Open Visual Cloud Scalable Video Technology SVT-AV1 CPU-based multi-threaded video encoder for the AV1 video format with a sample 1080p YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-AV1 0.8 - Input: 1080p (Frames Per Second, more is better; N = 3 runs; 1. (CXX) g++ options: -O3 -fcommon -fPIE -fPIC -pie):
  Enc Mode 0: EPYC 7F52: 0.117 (SE +/- 0.000); Linux 5.10.3: 0.116 (SE +/- 0.000)
  Enc Mode 4: Linux 5.10.3: 5.388 (SE +/- 0.009); EPYC 7F52: 5.360 (SE +/- 0.023)
  Enc Mode 8: Linux 5.10.3: 39.01 (SE +/- 0.06); EPYC 7F52: 38.53 (SE +/- 0.07)
x265 This is a simple test of the x265 H.265/HEVC encoder run on the CPU with 1080p and 4K input options. Learn more via the OpenBenchmarking.org test page.
x265 3.4 (Frames Per Second, more is better; N = 3 runs; 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma):
  Bosphorus 4K: Linux 5.10.3: 21.22 (SE +/- 0.03); EPYC 7F52: 20.93 (SE +/- 0.07)
  Bosphorus 1080p: Linux 5.10.3: 62.27 (SE +/- 0.12); EPYC 7F52: 61.76 (SE +/- 0.06)
Blender 2.90 - Compute: CPU-Only (Seconds, fewer is better; N = 3 runs):
  Classroom: Linux 5.10.3: 239.22 (SE +/- 0.12); EPYC 7F52: 239.80 (SE +/- 0.22)
  Fishy Cat: EPYC 7F52: 108.00 (SE +/- 0.29); Linux 5.10.3: 108.33 (SE +/- 0.08)
  Barbershop: EPYC 7F52: 354.80 (SE +/- 0.32); Linux 5.10.3: 355.80 (SE +/- 0.30)
  Pabellon Barcelona: EPYC 7F52: 266.54 (SE +/- 1.48); Linux 5.10.3: 266.70 (SE +/- 0.33)
rav1e 0.4 Alpha - Speed: 10 (Frames Per Second, more is better; N = 3 runs): Linux 5.10.3: 3.192 (SE +/- 0.003); EPYC 7F52: 3.186 (SE +/- 0.002)
Embree 3.9.0 (Frames Per Second, more is better; N = 3 runs unless noted):
  Pathtracer ISPC - Crown: EPYC 7F52: 18.77 (SE +/- 0.12, MIN: 18.46 / MAX: 19.39); Linux 5.10.3: 18.62 (SE +/- 0.16, MIN: 17.94 / MAX: 19.11)
  Pathtracer - Asian Dragon: Linux 5.10.3: 21.06 (SE +/- 0.21, MIN: 20.53 / MAX: 22.46); EPYC 7F52: 20.97 (SE +/- 0.05, MIN: 20.82 / MAX: 22.27)
  Pathtracer - Asian Dragon Obj: Linux 5.10.3: 20.42 (SE +/- 0.03, MIN: 19.57 / MAX: 20.77); EPYC 7F52: 20.41 (SE +/- 0.11, MIN: 19.42 / MAX: 20.8)
  Pathtracer ISPC - Asian Dragon: EPYC 7F52: 21.18 (SE +/- 0.20, N = 6, MIN: 20.68 / MAX: 22.95); Linux 5.10.3: 20.89 (SE +/- 0.03, MIN: 20.71 / MAX: 22.1)
  Pathtracer ISPC - Asian Dragon Obj: EPYC 7F52: 19.64 (SE +/- 0.03, MIN: 18.92 / MAX: 19.94); Linux 5.10.3: 19.59 (SE +/- 0.04, MIN: 18.78 / MAX: 19.96)
OpenVKL OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.
OpenVKL 0.9 (Items / Sec, more is better; N = 3 runs):
  vklBenchmark: Linux 5.10.3: 218.94 (SE +/- 0.38, MIN: 1 / MAX: 772); EPYC 7F52: 217.81 (SE +/- 0.60, MIN: 1 / MAX: 765)
  vklBenchmarkVdbVolume: Linux 5.10.3: 16277364.24 (SE +/- 19338.94, MIN: 790262 / MAX: 65640384); EPYC 7F52: 15263784.67 (SE +/- 100190.60, MIN: 798247 / MAX: 56683584)
  vklBenchmarkStructuredVolume: Linux 5.10.3: 72087200.32 (SE +/- 168051.69, MIN: 921866 / MAX: 575712792); EPYC 7F52: 68692259.88 (SE +/- 788054.78, MIN: 909007 / MAX: 535870728)
  vklBenchmarkUnstructuredVolume: EPYC 7F52: 1818093.89 (SE +/- 2560.46, MIN: 19110 / MAX: 6113054); Linux 5.10.3: 1817665.47 (SE +/- 1848.12, MIN: 19297 / MAX: 6122055)
LuxCoreRender 2.3 - Scene: Rainbow Colors and Prism (M samples/sec, more is better; N = 3 runs): Linux 5.10.3: 3.50 (SE +/- 0.01, MIN: 3.43 / MAX: 3.52); EPYC 7F52: 3.49 (SE +/- 0.01, MIN: 3.42 / MAX: 3.52)
YafaRay YafaRay is an open-source, physically based Monte Carlo ray-tracing engine. Learn more via the OpenBenchmarking.org test page.
YafaRay 3.4.1 - Total Time For Sample Scene (Seconds, fewer is better; N = 3 runs; 1. (CXX) g++ options: -std=c++11 -O3 -ffast-math -rdynamic -ldl -lImath -lIlmImf -lIex -lHalf -lz -lIlmThread -lxml2 -lfreetype -lpthread): EPYC 7F52: 130.10 (SE +/- 0.76); Linux 5.10.3: 130.91 (SE +/- 0.50)
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. This test measures the RSA 4096-bit performance of OpenSSL. Learn more via the OpenBenchmarking.org test page.
OpenSSL 1.1.1 - RSA 4096-bit Performance (Signs Per Second, more is better; N = 3 runs; 1. (CC) gcc options: -pthread -m64 -O3 -lssl -lcrypto -ldl): Linux 5.10.3: 4579.8 (SE +/- 0.71); EPYC 7F52: 4571.4 (SE +/- 0.76)
PHPBench PHPBench is a benchmark suite for PHP. It performs a large number of simple tests in order to bench various aspects of the PHP interpreter. PHPBench can be used to compare hardware, operating systems, PHP versions, PHP accelerators and caches, compiler options, etc. Learn more via the OpenBenchmarking.org test page.
PHPBench 0.8.1 - PHP Benchmark Suite (Score, more is better; N = 3 runs): Linux 5.10.3: 625441 (SE +/- 627.03); EPYC 7F52: 618552 (SE +/- 1384.96)
InfluxDB This is a benchmark of the InfluxDB open-source time-series database optimized for fast, high-availability storage for IoT and other use-cases. The InfluxDB test profile makes use of InfluxDB Inch for facilitating the benchmarks. Learn more via the OpenBenchmarking.org test page.
InfluxDB 1.8.2 (val/sec, more is better; Batch Size: 10000, Tags: 2,5000,1, Points Per Series: 10000; N = 3 runs):
  Concurrent Streams: 4: EPYC 7F52: 1211752.0 (SE +/- 1586.66); Linux 5.10.3: 1198820.3 (SE +/- 1744.82)
  Concurrent Streams: 64: EPYC 7F52: 1425736.1 (SE +/- 1494.65); Linux 5.10.3: 1419536.4 (SE +/- 2047.76)
KeyDB This is a benchmark of KeyDB, a multi-threaded fork of the Redis server, conducted using memtier-benchmark. Learn more via the OpenBenchmarking.org test page.
KeyDB 6.0.16 (Ops/sec, more is better; N = 3 runs; 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre): EPYC 7F52: 432105.73 (SE +/- 3437.31); Linux 5.10.3: 424609.60 (SE +/- 1060.81)
Redis 6.0.9 (Requests Per Second, more is better; 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3):
  SADD: EPYC 7F52: 1565518.88 (SE +/- 16740.24, N = 3); Linux 5.10.3: 1503178.00 (SE +/- 11341.57, N = 3)
  LPUSH: EPYC 7F52: 1216222.50 (SE +/- 14085.60, N = 3); Linux 5.10.3: 1174489.56 (SE +/- 11678.97, N = 6)
  GET: EPYC 7F52: 1753884.37 (SE +/- 22488.85, N = 15); Linux 5.10.3: 1630009.81 (SE +/- 18986.94, N = 15)
  SET: EPYC 7F52: 1350619.52 (SE +/- 15975.34, N = 15); Linux 5.10.3: 1323358.98 (SE +/- 10427.33, N = 15)
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL pgbench 13.0 Scaling Factor: 1 - Clients: 1 - Mode: Read Only - Average Latency EPYC 7F52 Linux 5.10.3 0.0081 0.0162 0.0243 0.0324 0.0405 SE +/- 0.000, N = 3 SE +/- 0.000, N = 3 0.035 0.036 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL pgbench 13.0 Scaling Factor: 1 - Clients: 1 - Mode: Read Write EPYC 7F52 Linux 5.10.3 800 1600 2400 3200 4000 SE +/- 24.75, N = 3 SE +/- 16.52, N = 3 3803 3782 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm
PostgreSQL pgbench 13.0
All pgbench results: 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lpthread -lrt -ldl -lm

Scaling Factor: 1 - Clients: 1 - Mode: Read Write - Average Latency (ms, fewer is better)
    EPYC 7F52:    0.263 (SE +/- 0.002, N = 3)
    Linux 5.10.3: 0.264 (SE +/- 0.001, N = 3)

Scaling Factor: 1 - Clients: 50 - Mode: Read Only (TPS, more is better)
    Linux 5.10.3: 501047 (SE +/- 7128.53, N = 15)
    EPYC 7F52:    491827 (SE +/- 5689.06, N = 3)

Scaling Factor: 1 - Clients: 50 - Mode: Read Only - Average Latency (ms, fewer is better)
    Linux 5.10.3: 0.100 (SE +/- 0.001, N = 15)
    EPYC 7F52:    0.102 (SE +/- 0.001, N = 3)

Scaling Factor: 1 - Clients: 100 - Mode: Read Only (TPS, more is better)
    EPYC 7F52:    514307 (SE +/- 3559.12, N = 3)
    Linux 5.10.3: 507161 (SE +/- 1764.57, N = 3)

Scaling Factor: 1 - Clients: 100 - Mode: Read Only - Average Latency (ms, fewer is better)
    EPYC 7F52:    0.195 (SE +/- 0.001, N = 3)
    Linux 5.10.3: 0.198 (SE +/- 0.001, N = 3)

Scaling Factor: 1 - Clients: 250 - Mode: Read Only (TPS, more is better)
    EPYC 7F52:    556825 (SE +/- 733.28, N = 3)
    Linux 5.10.3: 536412 (SE +/- 1662.47, N = 3)

Scaling Factor: 1 - Clients: 250 - Mode: Read Only - Average Latency (ms, fewer is better)
    EPYC 7F52:    0.449 (SE +/- 0.000, N = 3)
    Linux 5.10.3: 0.467 (SE +/- 0.001, N = 3)

Scaling Factor: 1 - Clients: 50 - Mode: Read Write (TPS, more is better)
    EPYC 7F52:    4231 (SE +/- 4.77, N = 3)
    Linux 5.10.3: 4151 (SE +/- 1.61, N = 3)

Scaling Factor: 1 - Clients: 50 - Mode: Read Write - Average Latency (ms, fewer is better)
    EPYC 7F52:    11.82 (SE +/- 0.01, N = 3)
    Linux 5.10.3: 12.05 (SE +/- 0.00, N = 3)

Scaling Factor: 1 - Clients: 100 - Mode: Read Write (TPS, more is better)
    EPYC 7F52:    3332 (SE +/- 24.88, N = 3)
    Linux 5.10.3: 3300 (SE +/- 35.07, N = 3)

Scaling Factor: 1 - Clients: 100 - Mode: Read Write - Average Latency (ms, fewer is better)
    EPYC 7F52:    30.04 (SE +/- 0.22, N = 3)
    Linux 5.10.3: 30.34 (SE +/- 0.32, N = 3)

Scaling Factor: 1 - Clients: 250 - Mode: Read Write (TPS, more is better)
    EPYC 7F52:    2227 (SE +/- 27.65, N = 15)
    Linux 5.10.3: 2201 (SE +/- 17.17, N = 15)

Scaling Factor: 1 - Clients: 250 - Mode: Read Write - Average Latency (ms, fewer is better)
    EPYC 7F52:    112.58 (SE +/- 1.44, N = 15)
    Linux 5.10.3: 113.78 (SE +/- 0.89, N = 15)
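The SE figures reported above are standard errors of the mean over N runs (sample standard deviation divided by the square root of N). A minimal Python sketch of that calculation, together with the relative difference between the two kernels for the Clients: 250 / Read Only result; the per-run TPS samples used below are hypothetical, only the two reported means are taken from the data above:

```python
import math
import statistics

def standard_error(samples):
    """Standard error of the mean: sample stddev / sqrt(N)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

def percent_diff(baseline, other):
    """Relative difference of `other` versus `baseline`, in percent."""
    return (other - baseline) / baseline * 100.0

# Hypothetical per-run TPS samples (not from the report):
runs = [556100, 556900, 557475]
print(round(standard_error(runs), 2))

# Reported means: EPYC 7F52 (5.8 kernel) 556825 TPS vs Linux 5.10.3 536412 TPS
print(round(percent_diff(556825, 536412), 2))  # about -3.67 (percent)
```

This matches how a roughly 20K TPS gap at this client count works out to a bit under a 4% regression on the 5.10.3 kernel for this particular sub-test.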
Node.js V8 Web Tooling Benchmark
Running the V8 project's Web-Tooling-Benchmark under Node.js. The Web-Tooling-Benchmark stresses JavaScript-related workloads common to web developers, such as Babel, TypeScript, and Babylon. This test profile measures the system's JavaScript performance with Node.js. Learn more via the OpenBenchmarking.org test page.
Node.js V8 Web Tooling Benchmark (runs/s, more is better)
    Linux 5.10.3: 9.35 (SE +/- 0.08, N = 3)
    EPYC 7F52:    9.27 (SE +/- 0.05, N = 3)
    1. Node.js v10.19.0
simdjson
This is a benchmark of simdjson, a high-performance JSON parser. simdjson aims to be the fastest JSON parser and is used by projects including Microsoft FishStore, Yandex ClickHouse, and Shopify. Learn more via the OpenBenchmarking.org test page.
simdjson 0.7.1 (GB/s, more is better)
All simdjson results: 1. (CXX) g++ options: -O3 -pthread

Throughput Test: Kostya
    Linux 5.10.3: 0.53 (SE +/- 0.00, N = 3)
    EPYC 7F52:    0.52 (SE +/- 0.00, N = 3)

Throughput Test: LargeRandom
    Linux 5.10.3: 0.39 (SE +/- 0.00, N = 3)
    EPYC 7F52:    0.38 (SE +/- 0.00, N = 3)

Throughput Test: PartialTweets
    Linux 5.10.3: 0.61 (SE +/- 0.00, N = 3)
    EPYC 7F52:    0.61 (SE +/- 0.00, N = 3)

Throughput Test: DistinctUserID
    Linux 5.10.3: 0.62 (SE +/- 0.00, N = 3)
    EPYC 7F52:    0.62 (SE +/- 0.00, N = 3)
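The simdjson results are sustained parse throughput, so dividing an input size by the throughput gives an estimate of wall-clock parse time. A small sketch using the DistinctUserID figure reported above; the 2 GB input size is a hypothetical example, not part of the benchmark:

```python
def parse_time_seconds(size_gb, throughput_gb_per_s):
    """Estimated wall-clock time to parse size_gb at a sustained rate."""
    return size_gb / throughput_gb_per_s

# 0.62 GB/s is the DistinctUserID result above; the 2 GB input is hypothetical.
print(round(parse_time_seconds(2.0, 0.62), 2))  # about 3.23 seconds
```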
EPYC 7F52
Hardware, software, and configuration notes: as detailed in the system summary above.
Testing initiated at 27 December 2020 16:49 by user phoronix.
Linux 5.10.3
Processor: AMD EPYC 7F52 16-Core @ 3.50GHz (16 Cores / 32 Threads), Motherboard: Supermicro H11DSi-NT v2.00 (2.1 BIOS), Chipset: AMD Starship/Matisse, Memory: 64GB, Disk: 280GB INTEL SSDPE21D280GA, Graphics: llvmpipe, Monitor: VE228, Network: 2 x Intel 10G X550T
OS: Ubuntu 20.04, Kernel: 5.10.3-051003-generic (x86_64), Desktop: GNOME Shell 3.36.1, Display Server: X Server 1.20.8, Display Driver: modesetting 1.20.8, OpenGL: 3.3 Mesa 20.0.4 (LLVM 9.0.1 128 bits), Compiler: GCC 9.3.0, File-System: ext4, Screen Resolution: 1920x1080
Compiler, Processor, Java, Python, and Security Notes: identical to the EPYC 7F52 configuration above.
Testing initiated at 28 December 2020 16:17 by user phoronix.