AMD Ryzen 7 4800U testing with an ASRock 4X4-4000 (P1.30Q BIOS) and AMD Renoir 512MB on Ubuntu 22.04 via the Phoronix Test Suite.
A, B, C:
Processor: AMD Ryzen 7 4800U @ 1.80GHz (8 Cores / 16 Threads), Motherboard: ASRock 4X4-4000 (P1.30Q BIOS), Chipset: AMD Renoir/Cezanne, Memory: 16GB, Disk: 512GB TS512GMTS952T-I, Graphics: AMD Renoir 512MB (1750/400MHz), Audio: AMD Renoir Radeon HD Audio, Monitor: DELL P2415Q, Network: Realtek RTL8125 2.5GbE + Realtek RTL8111/8168/8411 + Intel 8265 / 8275
OS: Ubuntu 22.04, Kernel: 5.19.0-rc6-phx-retbleed (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47), Vulkan: 1.3.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 3840x2160

Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8600103
Graphics Notes: BAR1 / Visible vRAM Size: 512 MB - vBIOS Version: 113-RENOIR-026
Python Notes: Python 3.10.4
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional IBRS_FW STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
AOM AV1 3.5 (Frames Per Second, More Is Better):
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K: B: 3.02, C: 2.99, A: 2.97 (SE +/- 0.01, N = 3)
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K: B: 12.82, A: 12.42, C: 12.22 (SE +/- 0.03, N = 3)
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K: B: 4.66, C: 4.65, A: 4.61 (SE +/- 0.01, N = 3)
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K: C: 16.39, B: 16.39, A: 15.97 (SE +/- 0.01, N = 3)
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K: B: 21.25, C: 21.11, A: 20.67 (SE +/- 0.03, N = 3)
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K: B: 21.25, C: 21.12, A: 20.77 (SE +/- 0.00, N = 3)
Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p: C: 0.37, B: 0.37, A: 0.36 (SE +/- 0.00, N = 3)
Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p: B: 7.49, C: 7.41, A: 7.14 (SE +/- 0.07, N = 3)
Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p: B: 22.79, A: 21.78, C: 21.42 (SE +/- 0.14, N = 3)
Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p: B: 14.17, C: 14.09, A: 14.04 (SE +/- 0.02, N = 3)
Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p: B: 43.21, C: 43.18, A: 43.11 (SE +/- 0.15, N = 3)
Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p: C: 51.15, B: 51.06, A: 50.92 (SE +/- 0.03, N = 3)
Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p: C: 51.85, A: 51.79, B: 51.47 (SE +/- 0.05, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
BRL-CAD 7.32.6, VGR Performance Metric (More Is Better): B: 90606, C: 90523, A: 90251
1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm
C-Blosc 2.3, Test: blosclz bitshuffle (MB/s, More Is Better): C: 2666.2, B: 2664.1, A: 2657.3 (SE +/- 4.29, N = 3)
1. (CC) gcc options: -std=gnu99 -O3 -lrt -lm
ClickHouse

ClickHouse is an open-source, high-performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value reflects query processing performance as the geometric mean across all queries performed. Learn more via the OpenBenchmarking.org test page.
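The geometric-mean aggregation can be illustrated with a short sketch. The snippet below is for illustration only (the query times are made up and this is not the test profile's own code); it converts per-query runtimes into queries-per-minute rates and reduces them with a geometric mean, matching the "Queries Per Minute, Geo Mean" unit used in the results that follow.

import math

def geo_mean_qpm(query_times_s):
    # Convert per-query runtimes (seconds) to queries/minute and aggregate
    # them with a geometric mean, as in "Queries Per Minute, Geo Mean".
    qpm = [60.0 / t for t in query_times_s]      # each query's rate in queries/minute
    log_sum = sum(math.log(x) for x in qpm)      # geometric mean via logs for stability
    return math.exp(log_sum / len(qpm))

# Hypothetical runtimes for three queries (seconds)
print(round(geo_mean_qpm([0.8, 1.5, 12.0]), 2))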
ClickHouse 22.5.4.19 (Queries Per Minute, Geo Mean, More Is Better):
100M Rows Web Analytics Dataset, First Run / Cold Cache: A: 51.35 (MIN 3.39 / MAX 8571.43), B: 48.86 (MIN 3.81 / MAX 4615.38), C: 47.73 (MIN 3.78 / MAX 5454.55) (SE +/- 0.51, N = 9)
100M Rows Web Analytics Dataset, Second Run: A: 56.13 (MIN 3.7 / MAX 15000), C: 55.91 (MIN 3.87 / MAX 4285.71), B: 55.21 (MIN 3.88 / MAX 2608.7) (SE +/- 0.62, N = 9)
100M Rows Web Analytics Dataset, Third Run: C: 58.84 (MIN 3.83 / MAX 7500), A: 57.57 (MIN 3.7 / MAX 15000), B: 54.60 (MIN 3.91 / MAX 2857.14) (SE +/- 0.19, N = 9)
1. ClickHouse server version 22.5.4.19 (official build).
Facebook RocksDB

Facebook RocksDB 7.5.3 (Op/s, More Is Better):
Test: Random Fill: B: 551829, C: 551546, A: 549392 (SE +/- 1564.28, N = 3)
Test: Random Read: B: 26338023, C: 26126775, A: 25572231 (SE +/- 268758.24, N = 5)
Test: Update Random: C: 294446, B: 292449, A: 291471 (SE +/- 531.44, N = 3)
Test: Sequential Fill: C: 703946, A: 695239, B: 687072 (SE +/- 2562.12, N = 3)
Test: Random Fill Sync: C: 2687, A: 2662, B: 2653 (SE +/- 6.11, N = 3)
Test: Read While Writing: C: 1133798, B: 1118927, A: 1101434 (SE +/- 5405.15, N = 3)
Test: Read Random Write Random: C: 827539, A: 824795, B: 824645 (SE +/- 1407.09, N = 3)
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
GraphicsMagick

This is a test of GraphicsMagick with its OpenMP implementation that performs various imaging tests on a sample 6000x4000 pixel JPEG image. Learn more via the OpenBenchmarking.org test page.
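As a rough illustration of how an "Iterations Per Minute" figure can be derived, the sketch below repeatedly invokes one GraphicsMagick operation from Python and normalizes the count to a per-minute rate. The input filename and the ten-second window are assumptions; the actual test profile supplies its own 6000x4000 JPEG and timing.

import subprocess, time

# Hypothetical input file; the real test profile provides its own sample image.
INPUT = "sample_6000x4000.jpg"

def iterations_per_minute(args, seconds=10):
    # Repeat one GraphicsMagick operation for roughly `seconds` and
    # report the rate normalized to iterations per minute.
    iterations, start = 0, time.time()
    while time.time() - start < seconds:
        subprocess.run(["gm", "convert", INPUT, *args, "null:"], check=True)
        iterations += 1
    return iterations * 60.0 / (time.time() - start)

print(round(iterations_per_minute(["-swirl", "90"]), 1))   # the "Swirl" operation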
GraphicsMagick 1.3.38 (Iterations Per Minute, More Is Better):
Operation: Swirl: A: 355, B: 349, C: 346 (SE +/- 4.33, N = 3)
Operation: Rotate: B: 521, C: 516, A: 496 (SE +/- 0.67, N = 3)
Operation: Sharpen: A: 102, C: 101, B: 101 (SE +/- 0.88, N = 3)
Operation: Enhanced: C: 160, B: 160, A: 159 (SE +/- 1.20, N = 3)
Operation: Resizing: C: 712, B: 712, A: 689 (SE +/- 3.18, N = 3)
Operation: Noise-Gaussian: C: 191, B: 189, A: 188 (SE +/- 0.33, N = 3)
Operation: HWB Color Space: B: 617, C: 605, A: 565 (SE +/- 0.88, N = 3)
1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
Mobile Neural Network

MNN (Mobile Neural Network) is a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile builds the OpenMP / CPU-threaded version for processor benchmarking, not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.
Mobile Neural Network 2.1 (ms, Fewer Is Better):
Model: nasnet: B: 16.97 (MIN 16.65 / MAX 22.65), C: 17.30 (MIN 17.04 / MAX 32.64), A: 17.57 (MIN 17.11 / MAX 32.86) (SE +/- 0.09, N = 3)
Model: mobilenetV3: C: 2.523 (MIN 2.46 / MAX 3.45), B: 2.559 (MIN 2.48 / MAX 5.23), A: 2.627 (MIN 2.51 / MAX 4.18) (SE +/- 0.027, N = 3)
Model: squeezenetv1.1: C: 4.786 (MIN 4.66 / MAX 6.44), B: 4.873 (MIN 4.73 / MAX 6.22), A: 5.104 (MIN 4.83 / MAX 17.44) (SE +/- 0.085, N = 3)
Model: resnet-v2-50: B: 41.91 (MIN 41.13 / MAX 89.56), C: 42.39 (MIN 41.67 / MAX 57.54), A: 43.68 (MIN 42.3 / MAX 59.3) (SE +/- 0.35, N = 3)
Model: SqueezeNetV1.0: B: 10.26 (MIN 9.92 / MAX 11.57), C: 10.28 (MIN 9.81 / MAX 16.54), A: 10.58 (MIN 10.02 / MAX 25.16) (SE +/- 0.11, N = 3)
Model: MobileNetV2_224: C: 5.519 (MIN 5.31 / MAX 6.72), B: 5.574 (MIN 5.39 / MAX 11.74), A: 5.743 (MIN 5.48 / MAX 21.23) (SE +/- 0.031, N = 3)
Model: mobilenet-v1-1.0: B: 4.735 (MIN 4.55 / MAX 19.53), C: 4.756 (MIN 4.56 / MAX 5.55), A: 4.837 (MIN 4.62 / MAX 5.59) (SE +/- 0.027, N = 3)
Model: inception-v3: C: 54.14 (MIN 53.36 / MAX 68.97), B: 54.76 (MIN 53.74 / MAX 114.32), A: 55.86 (MIN 54.66 / MAX 164.93) (SE +/- 0.22, N = 3)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
NCNN

NCNN is a high-performance neural network inference framework optimized for mobile and other platforms, developed by Tencent. Learn more via the OpenBenchmarking.org test page.
NCNN 20220729 (ms, Fewer Is Better):
Target: CPU - Model: mobilenet: C: 32.79 (MIN 31.99 / MAX 34.11), A: 32.91 (MIN 32.07 / MAX 48.87), B: 33.81 (MIN 33.19 / MAX 34.81) (SE +/- 0.06, N = 3)
Target: CPU-v2-v2 - Model: mobilenet-v2: C: 10.53 (MIN 10.03 / MAX 12.39), B: 10.56 (MIN 9.99 / MAX 13.06), A: 10.58 (MIN 9.93 / MAX 11.96) (SE +/- 0.01, N = 3)
Target: CPU-v3-v3 - Model: mobilenet-v3: A: 8.33 (MIN 7.91 / MAX 12.1), B: 8.43 (MIN 7.91 / MAX 9.75), C: 8.48 (MIN 8.01 / MAX 9.92) (SE +/- 0.00, N = 3)
Target: CPU - Model: shufflenet-v2: B: 5.64 (MIN 5.4 / MAX 6.54), C: 5.71 (MIN 5.41 / MAX 6.71), A: 5.72 (MIN 5.34 / MAX 6.86) (SE +/- 0.01, N = 3)
Target: CPU - Model: mnasnet: C: 6.90 (MIN 6.7 / MAX 8.29), B: 6.93 (MIN 6.68 / MAX 8.21), A: 6.98 (MIN 6.64 / MAX 8.35) (SE +/- 0.01, N = 3)
Target: CPU - Model: efficientnet-b0: C: 13.57 (MIN 12.93 / MAX 15.25), A: 13.67 (MIN 12.95 / MAX 15.62), B: 13.67 (MIN 12.95 / MAX 15.45) (SE +/- 0.00, N = 3)
Target: CPU - Model: blazeface: A: 2.40 (MIN 2.31 / MAX 3.34), B: 2.40 (MIN 2.32 / MAX 3.22), C: 2.43 (MIN 2.35 / MAX 3.11) (SE +/- 0.00, N = 3)
Target: CPU - Model: googlenet: B: 30.35 (MIN 29.67 / MAX 31.69), C: 30.36 (MIN 29.66 / MAX 31.93), A: 30.39 (MIN 29.53 / MAX 63.12) (SE +/- 0.05, N = 3)
Target: CPU - Model: vgg16: A: 123.21 (MIN 122.18 / MAX 128.35), B: 123.30 (MIN 122.27 / MAX 131.77), C: 123.54 (MIN 122.7 / MAX 165.33) (SE +/- 0.10, N = 3)
Target: CPU - Model: resnet18: C: 24.39 (MIN 24.07 / MAX 26.05), A: 24.42 (MIN 23.93 / MAX 47.11), B: 24.47 (MIN 24.12 / MAX 25.52) (SE +/- 0.09, N = 3)
Target: CPU - Model: alexnet: A: 16.82 (MIN 16.41 / MAX 18.04), C: 16.84 (MIN 16.47 / MAX 17.4), B: 16.90 (MIN 16.54 / MAX 17.43) (SE +/- 0.05, N = 3)
Target: CPU - Model: resnet50: A: 49.04 (MIN 48.32 / MAX 50.3), C: 49.11 (MIN 48.61 / MAX 49.99), B: 50.29 (MIN 49.25 / MAX 92.13) (SE +/- 0.10, N = 3)
Target: CPU - Model: yolov4-tiny: C: 53.68 (MIN 52.89 / MAX 54.49), A: 53.93 (MIN 52.76 / MAX 59.52), B: 56.11 (MIN 55.41 / MAX 67.95) (SE +/- 0.24, N = 3)
Target: CPU - Model: squeezenet_ssd: C: 40.25 (MIN 39.04 / MAX 41.52), A: 40.60 (MIN 39.28 / MAX 100.58), B: 40.68 (MIN 39.81 / MAX 41.84) (SE +/- 0.16, N = 3)
Target: CPU - Model: regnety_400m: C: 15.85 (MIN 15.49 / MAX 17.17), A: 15.88 (MIN 15.38 / MAX 17.56), B: 15.90 (MIN 15.44 / MAX 25.7) (SE +/- 0.03, N = 3)
Target: CPU - Model: vision_transformer: C: 273.34 (MIN 270.15 / MAX 281.46), A: 277.23 (MIN 272.75 / MAX 341.78), B: 278.88 (MIN 276.41 / MAX 290.32) (SE +/- 0.84, N = 3)
Target: CPU - Model: FastestDet: B: 6.87 (MIN 6.69 / MAX 7.26), C: 6.93 (MIN 6.72 / MAX 10.54), A: 6.99 (MIN 6.62 / MAX 7.84) (SE +/- 0.07, N = 3)
Target: Vulkan GPU - Model: mobilenet: B: 14.54 (MIN 13.65 / MAX 16.99), C: 14.77 (MIN 13.84 / MAX 30.44), A: 15.69 (MIN 13.68 / MAX 34.77) (SE +/- 0.89, N = 12)
Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2: A: 5.90 (MIN 4.78 / MAX 7.26), B: 6.04 (MIN 5.43 / MAX 6.83), C: 6.06 (MIN 5.47 / MAX 7.02) (SE +/- 0.06, N = 12)
Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3: B: 6.30 (MIN 5.78 / MAX 7.35), A: 6.41 (MIN 5.42 / MAX 7.73), C: 6.60 (MIN 5.59 / MAX 7.35) (SE +/- 0.04, N = 12)
Target: Vulkan GPU - Model: shufflenet-v2: B: 4.45 (MIN 3.68 / MAX 5.47), A: 4.52 (MIN 3.47 / MAX 5.95), C: 4.73 (MIN 3.71 / MAX 5.77) (SE +/- 0.07, N = 12)
Target: Vulkan GPU - Model: mnasnet: A: 5.89 (MIN 4.8 / MAX 7.01), B: 5.90 (MIN 5.24 / MAX 6.51), C: 5.98 (MIN 5.06 / MAX 6.96) (SE +/- 0.02, N = 12)
Target: Vulkan GPU - Model: efficientnet-b0: B: 13.03 (MIN 12.1 / MAX 13.9), C: 13.03 (MIN 11.94 / MAX 13.96), A: 13.11 (MIN 11.9 / MAX 14.23) (SE +/- 0.03, N = 12)
Target: Vulkan GPU - Model: blazeface: B: 1.55 (MIN 1.49 / MAX 2.35), C: 1.55 (MIN 1.5 / MAX 2.11), A: 1.56 (MIN 1.49 / MAX 2.32) (SE +/- 0.00, N = 12)
Target: Vulkan GPU - Model: googlenet: C: 11.42 (MIN 10.68 / MAX 12.44), A: 11.50 (MIN 10.33 / MAX 12.79), B: 11.52 (MIN 10.69 / MAX 12.47) (SE +/- 0.05, N = 12)
Target: Vulkan GPU - Model: vgg16: B: 40.63 (MIN 39.96 / MAX 41.46), A: 40.67 (MIN 39.9 / MAX 42.18), C: 40.72 (MIN 40.3 / MAX 41.82) (SE +/- 0.02, N = 12)
Target: Vulkan GPU - Model: resnet18: A: 9.16 (MIN 8.38 / MAX 10.82), B: 9.26 (MIN 8.36 / MAX 10.21), C: 9.37 (MIN 8.44 / MAX 10.4) (SE +/- 0.03, N = 12)
Target: Vulkan GPU - Model: alexnet: A: 11.33 (MIN 10.56 / MAX 12.69), C: 11.36 (MIN 10.62 / MAX 12.35), B: 11.42 (MIN 10.6 / MAX 12.34) (SE +/- 0.02, N = 12)
Target: Vulkan GPU - Model: resnet50: A: 18.16 (MIN 17.15 / MAX 19.52), B: 18.23 (MIN 17.32 / MAX 19.43), C: 18.29 (MIN 17.37 / MAX 19.04) (SE +/- 0.04, N = 12)
Target: Vulkan GPU - Model: yolov4-tiny: B: 19.91 (MIN 18.96 / MAX 29.39), A: 21.99 (MIN 18.94 / MAX 43.4), C: 22.26 (MIN 18.95 / MAX 35.32) (SE +/- 0.62, N = 12)
Target: Vulkan GPU - Model: squeezenet_ssd: B: 11.46 (MIN 10.74 / MAX 22.86), A: 11.47 (MIN 10.38 / MAX 28.65), C: 11.49 (MIN 10.67 / MAX 14.52) (SE +/- 0.02, N = 12)
Target: Vulkan GPU - Model: regnety_400m: A: 7.74 (MIN 6.16 / MAX 8.79), C: 7.83 (MIN 6.68 / MAX 8.65), B: 7.97 (MIN 7.32 / MAX 8.8) (SE +/- 0.05, N = 12)
Target: Vulkan GPU - Model: vision_transformer: C: 485.92 (MIN 459.84 / MAX 511.86), A: 495.39 (MIN 454.64 / MAX 937.46), B: 496.55 (MIN 469.85 / MAX 519.55) (SE +/- 1.50, N = 12)
Target: Vulkan GPU - Model: FastestDet: B: 5.13 (MIN 3.75 / MAX 6.05), A: 5.16 (MIN 3.72 / MAX 6.4), C: 5.61 (MIN 4.66 / MAX 6.65) (SE +/- 0.07, N = 12)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
oneDNN

oneDNN 2.7 (ms, Fewer Is Better):
Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU: B: 14.19 (MIN 13.89), A: 14.28 (MIN 13.95), C: 14.28 (MIN 13.99) (SE +/- 0.01, N = 3)
Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU: C: 19.69 (MIN 18.44), B: 20.38 (MIN 18.83), A: 20.90 (MIN 19.26) (SE +/- 0.02, N = 3)
Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU: C: 3.23184 (MIN 3.04), B: 3.24690 (MIN 3.03), A: 3.31774 (MIN 3.06) (SE +/- 0.00316, N = 3)
Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU: A: 4.80714 (MIN 4.69), B: 4.81270 (MIN 4.71), C: 4.81470 (MIN 4.7) (SE +/- 0.00172, N = 3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
A: The test run did not produce a result.
B: The test run did not produce a result.
C: The test run did not produce a result.
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
A: The test run did not produce a result.
B: The test run did not produce a result.
C: The test run did not produce a result.
oneDNN 2.7 (ms, Fewer Is Better):
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU: A: 51.34 (MIN 50.69), C: 51.35 (MIN 50.16), B: 51.36 (MIN 50.7) (SE +/- 0.01, N = 3)
Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU: A: 10.76 (MIN 7.8), C: 10.80 (MIN 7.93), B: 11.46 (MIN 7.85) (SE +/- 0.08, N = 12)
Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU: B: 9.54653 (MIN 9.11), A: 9.63379 (MIN 9.05), C: 9.63536 (MIN 9.21) (SE +/- 0.02024, N = 3)
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU: C: 48.42 (MIN 48.01), B: 48.52 (MIN 48.04), A: 48.59 (MIN 47.93) (SE +/- 0.08, N = 3)
Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU: C: 4.63135 (MIN 4.24), B: 4.64717 (MIN 4.31), A: 4.71385 (MIN 4.27) (SE +/- 0.01121, N = 3)
Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU: C: 5.79261 (MIN 5.3), B: 5.83337 (MIN 5.16), A: 5.92046 (MIN 5.22) (SE +/- 0.01821, N = 3)
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU: B: 8629.97 (MIN 8579.92), C: 8640.73 (MIN 8601.17), A: 8686.14 (MIN 8636.96) (SE +/- 7.23, N = 3)
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU: C: 7944.16 (MIN 7909.95), B: 7975.32 (MIN 7952.97), A: 8051.80 (MIN 8018.11) (SE +/- 6.19, N = 3)
Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU: C: 8614.66 (MIN 8579.22), A: 8637.51 (MIN 8594.99), B: 8686.17 (MIN 8651.11) (SE +/- 5.30, N = 3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
A: The test run did not produce a result.
B: The test run did not produce a result.
C: The test run did not produce a result.
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
A: The test run did not produce a result.
B: The test run did not produce a result.
C: The test run did not produce a result.
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
A: The test run did not produce a result.
B: The test run did not produce a result.
C: The test run did not produce a result.
oneDNN 2.7 (ms, Fewer Is Better):
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU: C: 7884.07 (MIN 7871.32), B: 7918.33 (MIN 7896.96), A: 8025.41 (MIN 7981.52) (SE +/- 10.30, N = 3)
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU: A: 9.70453 (MIN 9.54), B: 9.71677 (MIN 9.57), C: 9.72365 (MIN 9.57) (SE +/- 0.00123, N = 3)
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU: C: 8586.80 (MIN 8549.18), B: 8592.08 (MIN 8562.35), A: 8630.92 (MIN 8567.27) (SE +/- 22.45, N = 3)
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU: C: 7857.71 (MIN 7839.69), B: 7887.60 (MIN 7865.46), A: 8063.85 (MIN 8026.67) (SE +/- 10.88, N = 3)
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU: B: 6.78025 (MIN 6.4), A: 6.79016 (MIN 6.34), C: 6.79224 (MIN 6.32) (SE +/- 0.00466, N = 3)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
A: The test run did not produce a result.
B: The test run did not produce a result.
C: The test run did not produce a result.
OpenFOAM

OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
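The results below report the meshing and solver stages separately. A minimal sketch of that split is shown here, assuming a prepared motorBike case directory and the standard OpenFOAM utilities (blockMesh, snappyHexMesh, simpleFoam); the actual test profile drives the case through its own scripts.

import subprocess, time

def timed(cmd, cwd):
    # Run one OpenFOAM stage and return its wall-clock time in seconds.
    start = time.time()
    subprocess.run(cmd, cwd=cwd, check=True)
    return time.time() - start

case = "motorBike"  # hypothetical, pre-configured case directory

# "Mesh Time" covers the meshing utilities; "Execution Time" covers the solver run.
mesh_time = timed(["blockMesh"], case) + timed(["snappyHexMesh", "-overwrite"], case)
exec_time = timed(["simpleFoam"], case)
print(f"Mesh Time: {mesh_time:.2f} s, Execution Time: {exec_time:.2f} s")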
OpenFOAM 10 (Seconds, Fewer Is Better):
Input: motorBike - Mesh Time: C: 80.67, A: 81.71, B: 82.13
Input: motorBike - Execution Time: A: 460.35, C: 461.84, B: 462.05
Input: drivaerFastback, Small Mesh Size - Mesh Time: B: 106.19, A: 106.19, C: 106.78
Input: drivaerFastback, Small Mesh Size - Execution Time: A: 1434.16, C: 1436.57, B: 1440.02
1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -ldynamicMesh -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenVINO

This is a test of Intel OpenVINO, a toolkit for neural network inference, using its built-in benchmarking support to analyze throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
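The throughput (FPS) and latency (ms) pairs below come from OpenVINO's bundled benchmark_app tool. A minimal sketch of invoking it from Python is shown here; the model path is a placeholder and the exact arguments used by the test profile are not reproduced.

import subprocess

# Hypothetical model path; benchmark_app reports both throughput (FPS) and
# latency (ms), which is how the paired FPS / ms results above are produced.
MODEL = "face-detection-model/FP16/model.xml"

subprocess.run([
    "benchmark_app",
    "-m", MODEL,      # IR model to benchmark
    "-d", "CPU",      # run on the CPU plugin, as in these results
    "-t", "60",       # benchmark duration in seconds
], check=True)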
OpenVINO 2022.2.dev:
Model: Face Detection FP16 - Device: CPU (FPS, More Is Better): A: 1.33, C: 1.32, B: 1.32 (SE +/- 0.00, N = 3)
Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better): A: 3002.50 (MIN 2859.06 / MAX 3143.06), B: 3014.86 (MIN 2920.66 / MAX 3180.59), C: 3015.78 (MIN 2912.64 / MAX 3118.77) (SE +/- 1.33, N = 3)
Model: Person Detection FP16 - Device: CPU (FPS, More Is Better): C: 0.87, A: 0.87, B: 0.86 (SE +/- 0.00, N = 3)
Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better): C: 4489.79 (MIN 3799.84 / MAX 5229.44), A: 4534.61 (MIN 3707.14 / MAX 5216.88), B: 4573.79 (MIN 3896.02 / MAX 5195.82) (SE +/- 23.10, N = 3)
Model: Person Detection FP32 - Device: CPU (FPS, More Is Better): C: 0.87, A: 0.86, B: 0.85 (SE +/- 0.00, N = 3)
Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better): C: 4537.87 (MIN 3774.07 / MAX 5226.26), A: 4596.84 (MIN 3807.31 / MAX 5161.95), B: 4638.17 (MIN 3951.69 / MAX 5196.3) (SE +/- 13.58, N = 3)
Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better): C: 74.06, A: 73.97, B: 72.24 (SE +/- 0.23, N = 3)
Model: Vehicle Detection FP16 - Device: CPU (ms, Fewer Is Better): C: 53.90 (MIN 30.6 / MAX 87.57), A: 53.97 (MIN 26.23 / MAX 79.93), B: 55.27 (MIN 33.76 / MAX 82.8) (SE +/- 0.17, N = 3)
Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better): C: 2.11, B: 2.10, A: 2.09 (SE +/- 0.00, N = 3)
Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): C: 1890.83 (MIN 1836.96 / MAX 1913.63), B: 1902.89 (MIN 1854.25 / MAX 1946.21), A: 1912.86 (MIN 1858.14 / MAX 1948.83) (SE +/- 2.42, N = 3)
Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better): C: 145.10, B: 144.27, A: 143.31 (SE +/- 0.44, N = 3)
Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): C: 27.52 (MIN 23.18 / MAX 58.55), B: 27.68 (MIN 22.72 / MAX 56.8), A: 27.87 (MIN 23.08 / MAX 56.78) (SE +/- 0.08, N = 3)
Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better): B: 159.45, C: 158.87, A: 157.66 (SE +/- 0.18, N = 3)
Model: Weld Porosity Detection FP16 - Device: CPU (ms, Fewer Is Better): B: 25.05 (MIN 22.53 / MAX 48.72), C: 25.14 (MIN 23.44 / MAX 48.41), A: 25.33 (MIN 20.28 / MAX 48.91) (SE +/- 0.03, N = 3)
Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better): A: 13.53, C: 13.35, B: 13.25 (SE +/- 0.05, N = 3)
Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better): A: 295.20 (MIN 206.56 / MAX 367), C: 299.12 (MIN 209.85 / MAX 325.38), B: 301.38 (MIN 213.65 / MAX 322.07) (SE +/- 1.05, N = 3)
Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better): B: 210.50, C: 210.23, A: 207.54 (SE +/- 0.12, N = 3)
Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): B: 37.94 (MIN 29.14 / MAX 75.36), C: 38.00 (MIN 30.09 / MAX 74.11), A: 38.50 (MIN 31.49 / MAX 77.17) (SE +/- 0.02, N = 3)
Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better): B: 170.12, C: 166.95, A: 164.98 (SE +/- 1.36, N = 3)
Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better): B: 23.47 (MIN 19.71 / MAX 47.02), C: 23.91 (MIN 17.22 / MAX 43.65), A: 24.20 (MIN 16.28 / MAX 46.34) (SE +/- 0.20, N = 3)
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better): B: 2749.47, C: 2747.24, A: 2727.54 (SE +/- 4.92, N = 3)
Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better): B: 2.85 (MIN 1.94 / MAX 7.85), C: 2.86 (MIN 1.79 / MAX 15.09), A: 2.88 (MIN 1.82 / MAX 29.13) (SE +/- 0.01, N = 3)
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better): B: 3684.90, A: 3641.73, C: 3637.99 (SE +/- 17.35, N = 3)
Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better): B: 2.10 (MIN 1.26 / MAX 4.29), A: 2.12 (MIN 1.06 / MAX 26.87), C: 2.12 (MIN 1.09 / MAX 5.35) (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
QuadRay

VectorChief's QuadRay is a real-time ray-tracing engine written to support SIMD across ARM, MIPS, PPC, and x86/x86_64 processors. QuadRay supports SSE/SSE2/SSE4 and AVX/AVX2/AVX-512 usage on Intel/AMD CPUs. Learn more via the OpenBenchmarking.org test page.
QuadRay 2022.05.25 (FPS, More Is Better):
Scene: 1 - Resolution: 4K: C: 4.47, B: 4.47, A: 4.44 (SE +/- 0.01, N = 3)
Scene: 2 - Resolution: 4K: C: 1.36, B: 1.36, A: 1.34 (SE +/- 0.00, N = 3)
Scene: 3 - Resolution: 4K: C: 1.20, B: 1.19, A: 1.19 (SE +/- 0.00, N = 3)
Scene: 5 - Resolution: 4K: C: 0.33, B: 0.33, A: 0.32 (SE +/- 0.00, N = 3)
Scene: 1 - Resolution: 1080p: B: 16.81, C: 13.32, A: 13.16 (SE +/- 0.10, N = 3)
Scene: 2 - Resolution: 1080p: B: 5.27, A: 5.07, C: 5.06 (SE +/- 0.05, N = 15)
Scene: 3 - Resolution: 1080p: B: 4.69, C: 4.68, A: 4.37 (SE +/- 0.05, N = 3)
Scene: 5 - Resolution: 1080p: C: 1.31, B: 1.30, A: 1.27 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -O3 -pthread -lm -lstdc++ -lX11 -lXext -lpthread
spaCy

spaCy is an open-source Python library for advanced natural language processing (NLP). This test profile times the spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
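A minimal sketch of how a tokens-per-second figure can be measured with spaCy is shown below; the input text is a made-up placeholder rather than the corpus the test profile actually processes.

import time
import spacy

# Hypothetical input text; the actual test profile uses its own corpus.
text = "OpenBenchmarking results are generated with the Phoronix Test Suite. " * 200

nlp = spacy.load("en_core_web_lg")   # the model used in the result below
start = time.time()
doc = nlp(text)
elapsed = time.time() - start
print(f"{len(doc) / elapsed:.0f} tokens/sec")   # throughput metric reported by the test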
spaCy 3.4.1, Model: en_core_web_lg (tokens/sec, More Is Better): A: 10169, B: 10042, C: 10023 (SE +/- 60.05, N = 3)
srsRAN

srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
srsRAN 22.04.1:
Test: OFDM_Test (Samples / Second, More Is Better): A: 128073333, C: 121700000, B: 119400000 (SE +/- 1325194.21, N = 15)
Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (eNb Mb/s, More Is Better): A: 274.2, B: 268.5, C: 268.3 (SE +/- 1.48, N = 3)
Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM (UE Mb/s, More Is Better): B: 106.2, A: 106.2, C: 105.5 (SE +/- 0.22, N = 3)
Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (eNb Mb/s, More Is Better): C: 348.5, A: 346.7, B: 338.3 (SE +/- 0.85, N = 3)
Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM (UE Mb/s, More Is Better): C: 151.7, A: 151.5, B: 148.2 (SE +/- 0.35, N = 3)
Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (eNb Mb/s, More Is Better): B: 301.7, A: 301.7, C: 300.0 (SE +/- 0.66, N = 3)
Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM (UE Mb/s, More Is Better): C: 114.4, B: 114.1, A: 114.0 (SE +/- 0.21, N = 3)
Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (eNb Mb/s, More Is Better): B: 382.2, A: 375.5, C: 372.8 (SE +/- 2.89, N = 3)
Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM (UE Mb/s, More Is Better): B: 162.9, A: 160.4, C: 159.1 (SE +/- 1.39, N = 3)
Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (eNb Mb/s, More Is Better): A: 98.4, B: 97.5, C: 96.7 (SE +/- 0.45, N = 3)
Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM (UE Mb/s, More Is Better): A: 54.7, B: 53.8, C: 53.5 (SE +/- 0.10, N = 3)
1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
SVT-AV1

SVT-AV1 1.2 (Frames Per Second, More Is Better):
Encoder Mode: Preset 4 - Input: Bosphorus 4K: C: 0.980, B: 0.978, A: 0.972 (SE +/- 0.002, N = 3)
Encoder Mode: Preset 8 - Input: Bosphorus 4K: A: 18.48, C: 18.34, B: 18.25 (SE +/- 0.19, N = 3)
Encoder Mode: Preset 10 - Input: Bosphorus 4K: A: 33.66, C: 30.13, B: 29.81 (SE +/- 0.13, N = 3)
Encoder Mode: Preset 12 - Input: Bosphorus 4K: C: 45.12, B: 45.10, A: 44.00 (SE +/- 0.07, N = 3)
Encoder Mode: Preset 4 - Input: Bosphorus 1080p: C: 3.145, B: 3.125, A: 3.106 (SE +/- 0.008, N = 3)
Encoder Mode: Preset 8 - Input: Bosphorus 1080p: C: 51.11, B: 50.38, A: 50.24 (SE +/- 0.13, N = 3)
Encoder Mode: Preset 10 - Input: Bosphorus 1080p: B: 110.72, C: 110.41, A: 108.08 (SE +/- 0.25, N = 3)
Encoder Mode: Preset 12 - Input: Bosphorus 1080p: C: 171.42, B: 169.27, A: 165.90 (SE +/- 0.65, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
TensorFlow

This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note that with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries. Learn more via the OpenBenchmarking.org test page.
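The images/sec metric is throughput over timed forward passes. The sketch below illustrates that calculation with a stock tf.keras model; it is not the tf_cnn_benchmarks.py harness, and ResNet50 stands in for AlexNet only because it ships with tf.keras.

import time
import numpy as np
import tensorflow as tf

# Rough sketch of how an images/sec figure is derived: time forward passes
# over synthetic data. ResNet50 is a stand-in model, not the benchmarked AlexNet.
model = tf.keras.applications.ResNet50(weights=None)
batch = np.random.rand(16, 224, 224, 3).astype("float32")   # batch size 16, as in the result below

model.predict(batch, verbose=0)            # warm-up run
start = time.time()
num_batches = 10
for _ in range(num_batches):
    model.predict(batch, verbose=0)
elapsed = time.time() - start
print(f"{16 * num_batches / elapsed:.2f} images/sec")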
TensorFlow 2.10, Device: CPU - Batch Size: 16 - Model: AlexNet (images/sec, More Is Better): C: 23.84, B: 23.83, A: 23.71 (SE +/- 0.02, N = 3)
Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
Timed Wasmer Compilation 2.3, Time To Compile (Seconds, Fewer Is Better): C: 125.92, A: 126.24, B: 126.41 (SE +/- 0.22, N = 3)
1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
Unvanquished

Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.
Unvanquished 0.53, Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second, More Is Better): C: 143.8, A: 139.1, B: 137.8 (SE +/- 1.33, N = 15)
WebP Image Encode 1.2.4, Encode Settings: Quality 100, Highest Compression (MP/s, More Is Better): B: 3.23, C: 3.22, A: 3.22 (SE +/- 0.03, N = 3)
1. (CC) gcc options: -fvisibility=hidden -O2 -lm
WebP2 Image Encode

This is a test of Google's libwebp2 library with the WebP2 image encode utility, using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficient lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.
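The MP/s unit is megapixels processed per second of encode time. A small worked example for the 6000x4000 input is shown below; the encode time is a hypothetical value chosen only to line up with result A.

# How the MP/s (megapixels per second) figure relates to the 6000x4000 input:
# an illustrative calculation, not the test profile's own bookkeeping.
width, height = 6000, 4000
megapixels = width * height / 1e6          # 24.0 MP per encode
encode_time_s = 4.97                       # hypothetical wall-clock time for one encode
print(f"{megapixels / encode_time_s:.2f} MP/s")   # ~4.83 MP/s, as in result A below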
WebP2 Image Encode 20220823, Encode Settings: Default (MP/s, More Is Better): A: 4.83, C: 4.77, B: 4.74 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
A: Testing initiated at 9 October 2022 18:53 by user phoronix. Kernel, compiler, processor, graphics, Python, and security notes are identical to the A, B, C configuration listed at the top of this report.
B: Testing initiated at 10 October 2022 08:48 by user phoronix. Notes identical to the configuration listed at the top of this report.
C: Testing initiated at 10 October 2022 13:35 by user phoronix. Hardware, software, and notes identical to the configuration listed at the top of this report.