AMD Ryzen 7 4800U testing with an ASRock 4X4-4000 (P1.30Q BIOS) and AMD Renoir 512MB on Ubuntu 22.04 via the Phoronix Test Suite.
A
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8600103
Graphics Notes: BAR1 / Visible vRAM Size: 512 MB - vBIOS Version: 113-RENOIR-026
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
B C Processor: AMD Ryzen 7 4800U @ 1.80GHz (8 Cores / 16 Threads), Motherboard: ASRock 4X4-4000 (P1.30Q BIOS), Chipset: AMD Renoir/Cezanne, Memory: 16GB, Disk: 512GB TS512GMTS952T-I, Graphics: AMD Renoir 512MB (1750/400MHz), Audio: AMD Renoir Radeon HD Audio, Monitor: DELL P2415Q, Network: Realtek RTL8125 2.5GbE + Realtek RTL8111/8168/8411 + Intel 8265 / 8275
OS: Ubuntu 22.04, Kernel: 5.19.0-rc6-phx-retbleed (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47), Vulkan: 1.3.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160
Unvanquished
Unvanquished is a modern fork of the Tremulous first-person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better Unvanquished 0.53 Resolution: 1920 x 1080 - Effects Quality: High A B C 30 60 90 120 150 SE +/- 1.33, N = 15 139.1 137.8 143.8
OpenBenchmarking.org MB/s, More Is Better C-Blosc 2.3 Test: blosclz bitshuffle A B C 600 1200 1800 2400 3000 SE +/- 4.29, N = 3 2657.3 2664.1 2666.2 1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm
OpenFOAM
OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics, or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
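As context for the separate Mesh Time and Execution Time results that follow, here is a minimal Python sketch of timing the two phases of an OpenFOAM case such as motorBike; the exact utility sequence used by the test profile is an assumption, and the commands must be run inside a prepared case directory.

```python
# Minimal sketch: separating the meshing phase from the solver phase when
# scripting an OpenFOAM case such as motorBike. The utility sequence shown
# here is an assumption, not taken from the test profile itself.
import subprocess
import time

def timed(cmd):
    start = time.perf_counter()
    subprocess.run(cmd, check=True)    # run inside the prepared case directory
    return time.perf_counter() - start

mesh_time = timed(["blockMesh"]) + timed(["snappyHexMesh", "-overwrite"])
exec_time = timed(["simpleFoam"])
print(f"Mesh Time: {mesh_time:.2f} s, Execution Time: {exec_time:.2f} s")
```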
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: motorBike - Mesh Time A B C 20 40 60 80 100 81.71 82.13 80.67 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: motorBike - Execution Time A B C 100 200 300 400 500 460.35 462.05 461.84 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Mesh Time A B C 20 40 60 80 100 106.19 106.19 106.78 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Execution Time A B C 300 600 900 1200 1500 1434.16 1440.02 1436.57 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenBenchmarking.org MP/s, More Is Better WebP Image Encode 1.2.4 Encode Settings: Quality 100, Highest Compression A B C 0.7268 1.4536 2.1804 2.9072 3.634 SE +/- 0.03, N = 3 3.22 3.23 3.22 1. (CC) gcc options: -fvisibility=hidden -O2 -lm
WebP2 Image Encode
This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. Compared to WebP, WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading. Learn more via the OpenBenchmarking.org test page.
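For context on the MP/s (megapixels per second) metric used below, here is a minimal sketch assuming the libwebp2 cwp2 command-line encoder with conventional -q/-o flags on the 6000x4000 input; the exact invocation used by the test profile may differ.

```python
# Minimal sketch: what an MP/s figure corresponds to for a 6000x4000 (24 MP)
# input. The cwp2 flags and file names are assumptions, not from the profile.
import subprocess
import time

megapixels = 6000 * 4000 / 1_000_000   # 24 megapixel source image
start = time.perf_counter()
subprocess.run(["cwp2", "-q", "75", "sample_6000x4000.jpg", "-o", "out.wp2"], check=True)
elapsed = time.perf_counter() - start
print(f"{megapixels / elapsed:.2f} MP/s")
```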
OpenBenchmarking.org MP/s, More Is Better WebP2 Image Encode 20220823 Encode Settings: Default A B C 1.0868 2.1736 3.2604 4.3472 5.434 SE +/- 0.01, N = 3 4.83 4.74 4.77 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
srsRAN
srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Samples / Second, More Is Better srsRAN 22.04.1 Test: OFDM_Test A B C 30M 60M 90M 120M 150M SE +/- 1325194.21, N = 15 128073333 119400000 121700000 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM A B C 60 120 180 240 300 SE +/- 1.48, N = 3 274.2 268.5 268.3 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM A B C 20 40 60 80 100 SE +/- 0.22, N = 3 106.2 106.2 105.5 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM A B C 80 160 240 320 400 SE +/- 0.85, N = 3 346.7 338.3 348.5 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM A B C 30 60 90 120 150 SE +/- 0.35, N = 3 151.5 148.2 151.7 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM A B C 70 140 210 280 350 SE +/- 0.66, N = 3 301.7 301.7 300.0 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM A B C 30 60 90 120 150 SE +/- 0.21, N = 3 114.0 114.1 114.4 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM A B C 80 160 240 320 400 SE +/- 2.89, N = 3 375.5 382.2 372.8 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM A B C 40 80 120 160 200 SE +/- 1.39, N = 3 160.4 162.9 159.1 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM A B C 20 40 60 80 100 SE +/- 0.45, N = 3 98.4 97.5 96.7 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM A B C 12 24 36 48 60 SE +/- 0.10, N = 3 54.7 53.8 53.5 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
GraphicsMagick
This is a test of GraphicsMagick with its OpenMP implementation, performing various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
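As a rough idea of what an "Iterations Per Minute" figure for one of these operations corresponds to, below is a minimal Python sketch shelling out to the gm CLI; the input file name and swirl angle are placeholders, and the actual test profile drives GraphicsMagick's OpenMP build through its own harness.

```python
# Minimal sketch: an "iterations per minute" style measurement for one
# GraphicsMagick operation via the gm command-line tool.
import subprocess
import time

iterations = 0
start = time.perf_counter()
while time.perf_counter() - start < 60:                 # run for one minute
    subprocess.run(
        ["gm", "convert", "sample_6000x4000.jpg", "-swirl", "90", "swirl_out.jpg"],
        check=True,
    )
    iterations += 1
print(f"Swirl: {iterations} iterations per minute")
```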
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Swirl A B C 80 160 240 320 400 SE +/- 4.33, N = 3 355 349 346 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Rotate A B C 110 220 330 440 550 SE +/- 0.67, N = 3 496 521 516 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Sharpen A B C 20 40 60 80 100 SE +/- 0.88, N = 3 102 101 101 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Enhanced A B C 40 80 120 160 200 SE +/- 1.20, N = 3 159 160 160 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Resizing A B C 150 300 450 600 750 SE +/- 3.18, N = 3 689 712 712 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Noise-Gaussian A B C 40 80 120 160 200 SE +/- 0.33, N = 3 188 189 191 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: HWB Color Space A B C 130 260 390 520 650 SE +/- 0.88, N = 3 565 617 605 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
QuadRay
VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 1 - Resolution: 4K A B C 1.0058 2.0116 3.0174 4.0232 5.029 SE +/- 0.01, N = 3 4.44 4.47 4.47 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 2 - Resolution: 4K A B C 0.306 0.612 0.918 1.224 1.53 SE +/- 0.00, N = 3 1.34 1.36 1.36 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 3 - Resolution: 4K A B C 0.27 0.54 0.81 1.08 1.35 SE +/- 0.00, N = 3 1.19 1.19 1.20 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 5 - Resolution: 4K A B C 0.0743 0.1486 0.2229 0.2972 0.3715 SE +/- 0.00, N = 3 0.32 0.33 0.33 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 1 - Resolution: 1080p A B C 4 8 12 16 20 SE +/- 0.10, N = 3 13.16 16.81 13.32 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 2 - Resolution: 1080p A B C 1.1858 2.3716 3.5574 4.7432 5.929 SE +/- 0.05, N = 15 5.07 5.27 5.06 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 3 - Resolution: 1080p A B C 1.0553 2.1106 3.1659 4.2212 5.2765 SE +/- 0.05, N = 3 4.37 4.69 4.68 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
OpenBenchmarking.org FPS, More Is Better QuadRay 2022.05.25 Scene: 5 - Resolution: 1080p A B C 0.2948 0.5896 0.8844 1.1792 1.474 SE +/- 0.00, N = 3 1.27 1.30 1.31 1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K A B C 0.6795 1.359 2.0385 2.718 3.3975 SE +/- 0.01, N = 3 2.97 3.02 2.99 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K A B C 3 6 9 12 15 SE +/- 0.03, N = 3 12.42 12.82 12.22 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K A B C 1.0485 2.097 3.1455 4.194 5.2425 SE +/- 0.01, N = 3 4.61 4.66 4.65 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K A B C 4 8 12 16 20 SE +/- 0.01, N = 3 15.97 16.39 16.39 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K A B C 5 10 15 20 25 SE +/- 0.03, N = 3 20.67 21.25 21.11 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K A B C 5 10 15 20 25 SE +/- 0.00, N = 3 20.77 21.25 21.12 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p A B C 0.0833 0.1666 0.2499 0.3332 0.4165 SE +/- 0.00, N = 3 0.36 0.37 0.37 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p A B C 2 4 6 8 10 SE +/- 0.07, N = 3 7.14 7.49 7.41 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p A B C 5 10 15 20 25 SE +/- 0.14, N = 3 21.78 22.79 21.42 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p A B C 4 8 12 16 20 SE +/- 0.02, N = 3 14.04 14.17 14.09 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p A B C 10 20 30 40 50 SE +/- 0.15, N = 3 43.11 43.21 43.18 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p A B C 12 24 36 48 60 SE +/- 0.03, N = 3 50.92 51.06 51.15 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p A B C 12 24 36 48 60 SE +/- 0.05, N = 3 51.79 51.47 51.85 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
SVT-AV1
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 4 - Input: Bosphorus 4K A B C 0.2205 0.441 0.6615 0.882 1.1025 SE +/- 0.002, N = 3 0.972 0.978 0.980 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 8 - Input: Bosphorus 4K A B C 5 10 15 20 25 SE +/- 0.19, N = 3 18.48 18.25 18.34 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 10 - Input: Bosphorus 4K A B C 8 16 24 32 40 SE +/- 0.13, N = 3 33.66 29.81 30.13 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 12 - Input: Bosphorus 4K A B C 10 20 30 40 50 SE +/- 0.07, N = 3 44.00 45.10 45.12 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 4 - Input: Bosphorus 1080p A B C 0.7076 1.4152 2.1228 2.8304 3.538 SE +/- 0.008, N = 3 3.106 3.125 3.145 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 8 - Input: Bosphorus 1080p A B C 12 24 36 48 60 SE +/- 0.13, N = 3 50.24 50.38 51.11 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 10 - Input: Bosphorus 1080p A B C 20 40 60 80 100 SE +/- 0.25, N = 3 108.08 110.72 110.41 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.2 Encoder Mode: Preset 12 - Input: Bosphorus 1080p A B C 40 80 120 160 200 SE +/- 0.65, N = 3 165.90 169.27 171.42 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
oneDNN
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU A B C 4 8 12 16 20 SE +/- 0.01, N = 3 14.28 14.19 14.28 MIN: 13.95 MIN: 13.89 MIN: 13.99 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU A B C 5 10 15 20 25 SE +/- 0.02, N = 3 20.90 20.38 19.69 MIN: 19.26 MIN: 18.83 MIN: 18.44 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU A B C 0.7465 1.493 2.2395 2.986 3.7325 SE +/- 0.00316, N = 3 3.31774 3.24690 3.23184 MIN: 3.06 MIN: 3.03 MIN: 3.04 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU A B C 1.0833 2.1666 3.2499 4.3332 5.4165 SE +/- 0.00172, N = 3 4.80714 4.81270 4.81470 MIN: 4.69 MIN: 4.71 MIN: 4.7 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
A, B, C: The test run did not produce a result.
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
A, B, C: The test run did not produce a result.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU A B C 12 24 36 48 60 SE +/- 0.01, N = 3 51.34 51.36 51.35 MIN: 50.69 MIN: 50.7 MIN: 50.16 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU A B C 3 6 9 12 15 SE +/- 0.08, N = 12 10.76 11.46 10.80 MIN: 7.8 MIN: 7.85 MIN: 7.93 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU A B C 3 6 9 12 15 SE +/- 0.02024, N = 3 9.63379 9.54653 9.63536 MIN: 9.05 MIN: 9.11 MIN: 9.21 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU A B C 11 22 33 44 55 SE +/- 0.08, N = 3 48.59 48.52 48.42 MIN: 47.93 MIN: 48.04 MIN: 48.01 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU A B C 1.0606 2.1212 3.1818 4.2424 5.303 SE +/- 0.01121, N = 3 4.71385 4.64717 4.63135 MIN: 4.27 MIN: 4.31 MIN: 4.24 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU A B C 1.3321 2.6642 3.9963 5.3284 6.6605 SE +/- 0.01821, N = 3 5.92046 5.83337 5.79261 MIN: 5.22 MIN: 5.16 MIN: 5.3 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU A B C 2K 4K 6K 8K 10K SE +/- 7.23, N = 3 8686.14 8629.97 8640.73 MIN: 8636.96 MIN: 8579.92 MIN: 8601.17 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU A B C 2K 4K 6K 8K 10K SE +/- 6.19, N = 3 8051.80 7975.32 7944.16 MIN: 8018.11 MIN: 7952.97 MIN: 7909.95 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU A B C 2K 4K 6K 8K 10K SE +/- 5.30, N = 3 8637.51 8686.17 8614.66 MIN: 8594.99 MIN: 8651.11 MIN: 8579.22 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
A, B, C: The test run did not produce a result.
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
A, B, C: The test run did not produce a result.
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
A, B, C: The test run did not produce a result.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU A B C 2K 4K 6K 8K 10K SE +/- 10.30, N = 3 8025.41 7918.33 7884.07 MIN: 7981.52 MIN: 7896.96 MIN: 7871.32 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU A B C 3 6 9 12 15 SE +/- 0.00123, N = 3 9.70453 9.71677 9.72365 MIN: 9.54 MIN: 9.57 MIN: 9.57 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU A B C 2K 4K 6K 8K 10K SE +/- 22.45, N = 3 8630.92 8592.08 8586.80 MIN: 8567.27 MIN: 8562.35 MIN: 8549.18 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU A B C 2K 4K 6K 8K 10K SE +/- 10.88, N = 3 8063.85 7887.60 7857.71 MIN: 8026.67 MIN: 7865.46 MIN: 7839.69 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU A B C 2 4 6 8 10 SE +/- 0.00466, N = 3 6.79016 6.78025 6.79224 MIN: 6.34 MIN: 6.4 MIN: 6.32 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
A, B, C: The test run did not produce a result.
Timed Wasmer Compilation
This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
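A minimal sketch of timing such a build, assuming a Wasmer source checkout and that the Cranelift and Singlepass backends are selected via Cargo features of those names; the exact build invocation used by the test profile may differ.

```python
# Minimal sketch: timing a Wasmer release build with Cranelift and Singlepass
# selected as Cargo features. Feature names and the checkout path are assumptions.
import subprocess
import time

start = time.perf_counter()
subprocess.run(
    ["cargo", "build", "--release", "--features", "cranelift,singlepass"],
    cwd="wasmer",                      # path to a Wasmer source checkout (placeholder)
    check=True,
)
print(f"Time To Compile: {time.perf_counter() - start:.2f} seconds")
```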
OpenBenchmarking.org Seconds, Fewer Is Better Timed Wasmer Compilation 2.3 Time To Compile A B C 30 60 90 120 150 SE +/- 0.22, N = 3 126.24 126.41 125.92 1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
ClickHouse
ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is based on the query processing times, taking the geometric mean across all queries performed. Learn more via the OpenBenchmarking.org test page.
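To illustrate how a "Queries Per Minute, Geo Mean" figure relates to the individual query times, here is a minimal Python sketch; the per-query timings are placeholders, not actual benchmark data.

```python
# Minimal sketch (placeholder timings): deriving a queries-per-minute figure
# from the geometric mean of individual query runtimes.
import math

query_times_s = [0.12, 0.85, 2.40, 0.031]  # hypothetical per-query wall times in seconds

geo_mean_s = math.exp(sum(math.log(t) for t in query_times_s) / len(query_times_s))
queries_per_minute = 60.0 / geo_mean_s
print(f"geometric-mean runtime: {geo_mean_s:.3f} s -> {queries_per_minute:.1f} queries/minute")
```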
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.5.4.19 100M Rows Web Analytics Dataset, First Run / Cold Cache A B C 12 24 36 48 60 SE +/- 0.51, N = 9 51.35 48.86 47.73 MIN: 3.39 / MAX: 8571.43 MIN: 3.81 / MAX: 4615.38 MIN: 3.78 / MAX: 5454.55 1. ClickHouse server version 22.5.4.19 (official build).
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.5.4.19 100M Rows Web Analytics Dataset, Second Run A B C 13 26 39 52 65 SE +/- 0.62, N = 9 56.13 55.21 55.91 MIN: 3.7 / MAX: 15000 MIN: 3.88 / MAX: 2608.7 MIN: 3.87 / MAX: 4285.71 1. ClickHouse server version 22.5.4.19 (official build).
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.5.4.19 100M Rows Web Analytics Dataset, Third Run A B C 13 26 39 52 65 SE +/- 0.19, N = 9 57.57 54.60 58.84 MIN: 3.7 / MAX: 15000 MIN: 3.91 / MAX: 2857.14 MIN: 3.83 / MAX: 7500 1. ClickHouse server version 22.5.4.19 (official build).
TensorFlow
This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that the Phoronix Test Suite also provides pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
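As a rough illustration of how this workload is driven, the sketch below invokes tf_cnn_benchmarks.py for the CPU / batch size 16 / AlexNet case reported below; the script location and the --num_batches count are illustrative assumptions, not taken from the test profile.

```python
# Minimal sketch: running the upstream tf_cnn_benchmarks.py harness for the
# CPU / batch size 16 / AlexNet configuration.
import subprocess

cmd = [
    "python", "tf_cnn_benchmarks.py",  # from the tensorflow/benchmarks repository
    "--device=cpu",
    "--data_format=NHWC",              # CPU runs typically use NHWC layout
    "--model=alexnet",
    "--batch_size=16",
    "--num_batches=100",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=False)
for line in result.stdout.splitlines():
    if "images/sec" in line:           # the harness prints an images/sec summary
        print(line)
```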
OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: AlexNet A B C 6 12 18 24 30 SE +/- 0.02, N = 3 23.71 23.83 23.84
spaCy
The spaCy library is an open-source, Python-based solution for advanced natural language processing (NLP) and a leading library in the field. This test profile times spaCy's CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
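A minimal sketch of measuring spaCy token throughput with the en_core_web_lg model, approximating the tokens/sec metric reported below; the sample text and document count are arbitrary placeholders.

```python
# Minimal sketch: measuring spaCy token throughput (tokens/sec) with the
# en_core_web_lg model. The sample texts are placeholders.
import time
import spacy

nlp = spacy.load("en_core_web_lg")     # assumes the model package is installed
texts = ["The quick brown fox jumps over the lazy dog."] * 2000

start = time.perf_counter()
n_tokens = sum(len(doc) for doc in nlp.pipe(texts))
elapsed = time.perf_counter() - start
print(f"{n_tokens / elapsed:,.0f} tokens/sec over {n_tokens} tokens")
```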
OpenBenchmarking.org tokens/sec, More Is Better spaCy 3.4.1 Model: en_core_web_lg A B C 2K 4K 6K 8K 10K SE +/- 60.05, N = 3 10169 10042 10023
Mobile Neural Network
MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking rather than any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: nasnet A B C 4 8 12 16 20 SE +/- 0.09, N = 3 17.57 16.97 17.30 MIN: 17.11 / MAX: 32.86 MIN: 16.65 / MAX: 22.65 MIN: 17.04 / MAX: 32.64 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: mobilenetV3 A B C 0.5911 1.1822 1.7733 2.3644 2.9555 SE +/- 0.027, N = 3 2.627 2.559 2.523 MIN: 2.51 / MAX: 4.18 MIN: 2.48 / MAX: 5.23 MIN: 2.46 / MAX: 3.45 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: squeezenetv1.1 A B C 1.1484 2.2968 3.4452 4.5936 5.742 SE +/- 0.085, N = 3 5.104 4.873 4.786 MIN: 4.83 / MAX: 17.44 MIN: 4.73 / MAX: 6.22 MIN: 4.66 / MAX: 6.44 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: resnet-v2-50 A B C 10 20 30 40 50 SE +/- 0.35, N = 3 43.68 41.91 42.39 MIN: 42.3 / MAX: 59.3 MIN: 41.13 / MAX: 89.56 MIN: 41.67 / MAX: 57.54 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: SqueezeNetV1.0 A B C 3 6 9 12 15 SE +/- 0.11, N = 3 10.58 10.26 10.28 MIN: 10.02 / MAX: 25.16 MIN: 9.92 / MAX: 11.57 MIN: 9.81 / MAX: 16.54 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: MobileNetV2_224 A B C 1.2922 2.5844 3.8766 5.1688 6.461 SE +/- 0.031, N = 3 5.743 5.574 5.519 MIN: 5.48 / MAX: 21.23 MIN: 5.39 / MAX: 11.74 MIN: 5.31 / MAX: 6.72 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: mobilenet-v1-1.0 A B C 1.0883 2.1766 3.2649 4.3532 5.4415 SE +/- 0.027, N = 3 4.837 4.735 4.756 MIN: 4.62 / MAX: 5.59 MIN: 4.55 / MAX: 19.53 MIN: 4.56 / MAX: 5.55 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: inception-v3 A B C 13 26 39 52 65 SE +/- 0.22, N = 3 55.86 54.76 54.14 MIN: 54.66 / MAX: 164.93 MIN: 53.74 / MAX: 114.32 MIN: 53.36 / MAX: 68.97 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
NCNN
NCNN is a high-performance neural network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: mobilenet A B C 8 16 24 32 40 SE +/- 0.06, N = 3 32.91 33.81 32.79 MIN: 32.07 / MAX: 48.87 MIN: 33.19 / MAX: 34.81 MIN: 31.99 / MAX: 34.11 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU-v2-v2 - Model: mobilenet-v2 A B C 3 6 9 12 15 SE +/- 0.01, N = 3 10.58 10.56 10.53 MIN: 9.93 / MAX: 11.96 MIN: 9.99 / MAX: 13.06 MIN: 10.03 / MAX: 12.39 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU-v3-v3 - Model: mobilenet-v3 A B C 2 4 6 8 10 SE +/- 0.00, N = 3 8.33 8.43 8.48 MIN: 7.91 / MAX: 12.1 MIN: 7.91 / MAX: 9.75 MIN: 8.01 / MAX: 9.92 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: shufflenet-v2 A B C 1.287 2.574 3.861 5.148 6.435 SE +/- 0.01, N = 3 5.72 5.64 5.71 MIN: 5.34 / MAX: 6.86 MIN: 5.4 / MAX: 6.54 MIN: 5.41 / MAX: 6.71 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: mnasnet A B C 2 4 6 8 10 SE +/- 0.01, N = 3 6.98 6.93 6.90 MIN: 6.64 / MAX: 8.35 MIN: 6.68 / MAX: 8.21 MIN: 6.7 / MAX: 8.29 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: efficientnet-b0 A B C 4 8 12 16 20 SE +/- 0.00, N = 3 13.67 13.67 13.57 MIN: 12.95 / MAX: 15.62 MIN: 12.95 / MAX: 15.45 MIN: 12.93 / MAX: 15.25 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: blazeface A B C 0.5468 1.0936 1.6404 2.1872 2.734 SE +/- 0.00, N = 3 2.40 2.40 2.43 MIN: 2.31 / MAX: 3.34 MIN: 2.32 / MAX: 3.22 MIN: 2.35 / MAX: 3.11 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: googlenet A B C 7 14 21 28 35 SE +/- 0.05, N = 3 30.39 30.35 30.36 MIN: 29.53 / MAX: 63.12 MIN: 29.67 / MAX: 31.69 MIN: 29.66 / MAX: 31.93 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: vgg16 A B C 30 60 90 120 150 SE +/- 0.10, N = 3 123.21 123.30 123.54 MIN: 122.18 / MAX: 128.35 MIN: 122.27 / MAX: 131.77 MIN: 122.7 / MAX: 165.33 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: resnet18 A B C 6 12 18 24 30 SE +/- 0.09, N = 3 24.42 24.47 24.39 MIN: 23.93 / MAX: 47.11 MIN: 24.12 / MAX: 25.52 MIN: 24.07 / MAX: 26.05 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: alexnet A B C 4 8 12 16 20 SE +/- 0.05, N = 3 16.82 16.90 16.84 MIN: 16.41 / MAX: 18.04 MIN: 16.54 / MAX: 17.43 MIN: 16.47 / MAX: 17.4 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: resnet50 A B C 11 22 33 44 55 SE +/- 0.10, N = 3 49.04 50.29 49.11 MIN: 48.32 / MAX: 50.3 MIN: 49.25 / MAX: 92.13 MIN: 48.61 / MAX: 49.99 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: yolov4-tiny A B C 13 26 39 52 65 SE +/- 0.24, N = 3 53.93 56.11 53.68 MIN: 52.76 / MAX: 59.52 MIN: 55.41 / MAX: 67.95 MIN: 52.89 / MAX: 54.49 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: squeezenet_ssd A B C 9 18 27 36 45 SE +/- 0.16, N = 3 40.60 40.68 40.25 MIN: 39.28 / MAX: 100.58 MIN: 39.81 / MAX: 41.84 MIN: 39.04 / MAX: 41.52 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: regnety_400m A B C 4 8 12 16 20 SE +/- 0.03, N = 3 15.88 15.90 15.85 MIN: 15.38 / MAX: 17.56 MIN: 15.44 / MAX: 25.7 MIN: 15.49 / MAX: 17.17 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: vision_transformer A B C 60 120 180 240 300 SE +/- 0.84, N = 3 277.23 278.88 273.34 MIN: 272.75 / MAX: 341.78 MIN: 276.41 / MAX: 290.32 MIN: 270.15 / MAX: 281.46 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: FastestDet A B C 2 4 6 8 10 SE +/- 0.07, N = 3 6.99 6.87 6.93 MIN: 6.62 / MAX: 7.84 MIN: 6.69 / MAX: 7.26 MIN: 6.72 / MAX: 10.54 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: mobilenet A B C 4 8 12 16 20 SE +/- 0.89, N = 12 15.69 14.54 14.77 MIN: 13.68 / MAX: 34.77 MIN: 13.65 / MAX: 16.99 MIN: 13.84 / MAX: 30.44 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 A B C 2 4 6 8 10 SE +/- 0.06, N = 12 5.90 6.04 6.06 MIN: 4.78 / MAX: 7.26 MIN: 5.43 / MAX: 6.83 MIN: 5.47 / MAX: 7.02 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 A B C 2 4 6 8 10 SE +/- 0.04, N = 12 6.41 6.30 6.60 MIN: 5.42 / MAX: 7.73 MIN: 5.78 / MAX: 7.35 MIN: 5.59 / MAX: 7.35 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: shufflenet-v2 A B C 1.0643 2.1286 3.1929 4.2572 5.3215 SE +/- 0.07, N = 12 4.52 4.45 4.73 MIN: 3.47 / MAX: 5.95 MIN: 3.68 / MAX: 5.47 MIN: 3.71 / MAX: 5.77 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: mnasnet A B C 1.3455 2.691 4.0365 5.382 6.7275 SE +/- 0.02, N = 12 5.89 5.90 5.98 MIN: 4.8 / MAX: 7.01 MIN: 5.24 / MAX: 6.51 MIN: 5.06 / MAX: 6.96 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: efficientnet-b0 A B C 3 6 9 12 15 SE +/- 0.03, N = 12 13.11 13.03 13.03 MIN: 11.9 / MAX: 14.23 MIN: 12.1 / MAX: 13.9 MIN: 11.94 / MAX: 13.96 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: blazeface A B C 0.351 0.702 1.053 1.404 1.755 SE +/- 0.00, N = 12 1.56 1.55 1.55 MIN: 1.49 / MAX: 2.32 MIN: 1.49 / MAX: 2.35 MIN: 1.5 / MAX: 2.11 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: googlenet A B C 3 6 9 12 15 SE +/- 0.05, N = 12 11.50 11.52 11.42 MIN: 10.33 / MAX: 12.79 MIN: 10.69 / MAX: 12.47 MIN: 10.68 / MAX: 12.44 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: vgg16 A B C 9 18 27 36 45 SE +/- 0.02, N = 12 40.67 40.63 40.72 MIN: 39.9 / MAX: 42.18 MIN: 39.96 / MAX: 41.46 MIN: 40.3 / MAX: 41.82 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: resnet18 A B C 3 6 9 12 15 SE +/- 0.03, N = 12 9.16 9.26 9.37 MIN: 8.38 / MAX: 10.82 MIN: 8.36 / MAX: 10.21 MIN: 8.44 / MAX: 10.4 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: alexnet A B C 3 6 9 12 15 SE +/- 0.02, N = 12 11.33 11.42 11.36 MIN: 10.56 / MAX: 12.69 MIN: 10.6 / MAX: 12.34 MIN: 10.62 / MAX: 12.35 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: resnet50 A B C 5 10 15 20 25 SE +/- 0.04, N = 12 18.16 18.23 18.29 MIN: 17.15 / MAX: 19.52 MIN: 17.32 / MAX: 19.43 MIN: 17.37 / MAX: 19.04 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: yolov4-tiny A B C 5 10 15 20 25 SE +/- 0.62, N = 12 21.99 19.91 22.26 MIN: 18.94 / MAX: 43.4 MIN: 18.96 / MAX: 29.39 MIN: 18.95 / MAX: 35.32 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: squeezenet_ssd A B C 3 6 9 12 15 SE +/- 0.02, N = 12 11.47 11.46 11.49 MIN: 10.38 / MAX: 28.65 MIN: 10.74 / MAX: 22.86 MIN: 10.67 / MAX: 14.52 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: regnety_400m A B C 2 4 6 8 10 SE +/- 0.05, N = 12 7.74 7.97 7.83 MIN: 6.16 / MAX: 8.79 MIN: 7.32 / MAX: 8.8 MIN: 6.68 / MAX: 8.65 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: vision_transformer A B C 110 220 330 440 550 SE +/- 1.50, N = 12 495.39 496.55 485.92 MIN: 454.64 / MAX: 937.46 MIN: 469.85 / MAX: 519.55 MIN: 459.84 / MAX: 511.86 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: Vulkan GPU - Model: FastestDet A B C 1.2623 2.5246 3.7869 5.0492 6.3115 SE +/- 0.07, N = 12 5.16 5.13 5.61 MIN: 3.72 / MAX: 6.4 MIN: 3.75 / MAX: 6.05 MIN: 4.66 / MAX: 6.65 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenVINO
This is a test of the Intel OpenVINO toolkit for neural networks, using its built-in benchmarking support and analyzing the throughput and latency of various models. Learn more via the OpenBenchmarking.org test page.
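For reference, a minimal sketch of driving OpenVINO's bundled benchmark_app for a single model on the CPU device; the model path is a placeholder and the output parsing is only approximate.

```python
# Minimal sketch: invoking OpenVINO's benchmark_app on the CPU device.
# "model.xml" is a placeholder for an IR model file.
import subprocess

cmd = ["benchmark_app", "-m", "model.xml", "-d", "CPU"]
out = subprocess.run(cmd, capture_output=True, text=True, check=False).stdout
for line in out.splitlines():
    if "Throughput" in line or "Latency" in line:
        print(line.strip())            # throughput (FPS) and latency (ms) summary lines
```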
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Face Detection FP16 - Device: CPU A B C 0.2993 0.5986 0.8979 1.1972 1.4965 SE +/- 0.00, N = 3 1.33 1.32 1.32 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Face Detection FP16 - Device: CPU A B C 600 1200 1800 2400 3000 SE +/- 1.33, N = 3 3002.50 3014.86 3015.78 MIN: 2859.06 / MAX: 3143.06 MIN: 2920.66 / MAX: 3180.59 MIN: 2912.64 / MAX: 3118.77 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Detection FP16 - Device: CPU A B C 0.1958 0.3916 0.5874 0.7832 0.979 SE +/- 0.00, N = 3 0.87 0.86 0.87 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Person Detection FP16 - Device: CPU A B C 1000 2000 3000 4000 5000 SE +/- 23.10, N = 3 4534.61 4573.79 4489.79 MIN: 3707.14 / MAX: 5216.88 MIN: 3896.02 / MAX: 5195.82 MIN: 3799.84 / MAX: 5229.44 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Detection FP32 - Device: CPU A B C 0.1958 0.3916 0.5874 0.7832 0.979 SE +/- 0.00, N = 3 0.86 0.85 0.87 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Person Detection FP32 - Device: CPU A B C 1000 2000 3000 4000 5000 SE +/- 13.58, N = 3 4596.84 4638.17 4537.87 MIN: 3807.31 / MAX: 5161.95 MIN: 3951.69 / MAX: 5196.3 MIN: 3774.07 / MAX: 5226.26 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16 - Device: CPU A B C 16 32 48 64 80 SE +/- 0.23, N = 3 73.97 72.24 74.06 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16 - Device: CPU A B C 12 24 36 48 60 SE +/- 0.17, N = 3 53.97 55.27 53.90 MIN: 26.23 / MAX: 79.93 MIN: 33.76 / MAX: 82.8 MIN: 30.6 / MAX: 87.57 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Face Detection FP16-INT8 - Device: CPU A B C 0.4748 0.9496 1.4244 1.8992 2.374 SE +/- 0.00, N = 3 2.09 2.10 2.11 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Face Detection FP16-INT8 - Device: CPU A B C 400 800 1200 1600 2000 SE +/- 2.42, N = 3 1912.86 1902.89 1890.83 MIN: 1858.14 / MAX: 1948.83 MIN: 1854.25 / MAX: 1946.21 MIN: 1836.96 / MAX: 1913.63 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU A B C 30 60 90 120 150 SE +/- 0.44, N = 3 143.31 144.27 145.10 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU A B C 7 14 21 28 35 SE +/- 0.08, N = 3 27.87 27.68 27.52 MIN: 23.08 / MAX: 56.78 MIN: 22.72 / MAX: 56.8 MIN: 23.18 / MAX: 58.55 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16 - Device: CPU A B C 40 80 120 160 200 SE +/- 0.18, N = 3 157.66 159.45 158.87 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16 - Device: CPU A B C 6 12 18 24 30 SE +/- 0.03, N = 3 25.33 25.05 25.14 MIN: 20.28 / MAX: 48.91 MIN: 22.53 / MAX: 48.72 MIN: 23.44 / MAX: 48.41 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU A B C 3 6 9 12 15 SE +/- 0.05, N = 3 13.53 13.25 13.35 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU A B C 70 140 210 280 350 SE +/- 1.05, N = 3 295.20 301.38 299.12 MIN: 206.56 / MAX: 367 MIN: 213.65 / MAX: 322.07 MIN: 209.85 / MAX: 325.38 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU A B C 50 100 150 200 250 SE +/- 0.12, N = 3 207.54 210.50 210.23 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU A B C 9 18 27 36 45 SE +/- 0.02, N = 3 38.50 37.94 38.00 MIN: 31.49 / MAX: 77.17 MIN: 29.14 / MAX: 75.36 MIN: 30.09 / MAX: 74.11 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU A B C 40 80 120 160 200 SE +/- 1.36, N = 3 164.98 170.12 166.95 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU A B C 6 12 18 24 30 SE +/- 0.20, N = 3 24.20 23.47 23.91 MIN: 16.28 / MAX: 46.34 MIN: 19.71 / MAX: 47.02 MIN: 17.22 / MAX: 43.65 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU A B C 600 1200 1800 2400 3000 SE +/- 4.92, N = 3 2727.54 2749.47 2747.24 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU A B C 0.648 1.296 1.944 2.592 3.24 SE +/- 0.01, N = 3 2.88 2.85 2.86 MIN: 1.82 / MAX: 29.13 MIN: 1.94 / MAX: 7.85 MIN: 1.79 / MAX: 15.09 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU A B C 800 1600 2400 3200 4000 SE +/- 17.35, N = 3 3641.73 3684.90 3637.99 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU A B C 0.477 0.954 1.431 1.908 2.385 SE +/- 0.01, N = 3 2.12 2.10 2.12 MIN: 1.06 / MAX: 26.87 MIN: 1.26 / MAX: 4.29 MIN: 1.09 / MAX: 5.35 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
Facebook RocksDB
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Random Fill A B C 120K 240K 360K 480K 600K SE +/- 1564.28, N = 3 549392 551829 551546 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Random Read A B C 6M 12M 18M 24M 30M SE +/- 268758.24, N = 5 25572231 26338023 26126775 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Update Random A B C 60K 120K 180K 240K 300K SE +/- 531.44, N = 3 291471 292449 294446 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Sequential Fill A B C 150K 300K 450K 600K 750K SE +/- 2562.12, N = 3 695239 687072 703946 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Random Fill Sync A B C 600 1200 1800 2400 3000 SE +/- 6.11, N = 3 2662 2653 2687 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Read While Writing A B C 200K 400K 600K 800K 1000K SE +/- 5405.15, N = 3 1101434 1118927 1133798 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Read Random Write Random A B C 200K 400K 600K 800K 1000K SE +/- 1407.09, N = 3 824795 824645 827539 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
BRL-CAD
BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.32.6 VGR Performance Metric A B C 20K 40K 60K 80K 100K 90251 90606 90523 1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm
A: Testing initiated at 9 October 2022 18:53 by user phoronix.
B: Testing initiated at 10 October 2022 08:48 by user phoronix. System notes identical to configuration A.
C: Testing initiated at 10 October 2022 13:35 by user phoronix. Hardware, software, and system notes identical to configuration B.