june: AMD Ryzen 9 3900XT 12-Core testing with an MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2206054-PTS-JUNE759444&sor&gru .
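For readers who want to rerun this comparison on their own hardware, the short Python sketch below (not part of the original export) simply shells out to the Phoronix Test Suite; it assumes the phoronix-test-suite CLI is installed and that passing a public OpenBenchmarking.org result ID to its benchmark command fetches that result and reruns the same test selection for a side-by-side comparison.

# Hedged sketch, not part of the exported result data.
# Assumes phoronix-test-suite is installed and on PATH.
import subprocess

RESULT_ID = "2206054-PTS-JUNE759444"  # ID taken from the URL above


def compare_against(result_id: str) -> int:
    """Invoke the Phoronix Test Suite against a public result ID."""
    return subprocess.call(["phoronix-test-suite", "benchmark", result_id])


if __name__ == "__main__":
    raise SystemExit(compare_against(RESULT_ID))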
june - four test runs (A, B, C, D) on the following system:

Processor: AMD Ryzen 9 3900XT 12-Core @ 3.80GHz (12 Cores / 24 Threads)
Motherboard: MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS)
Chipset: AMD Starship/Matisse
Memory: 16GB
Disk: 500GB Seagate FireCuda 520 SSD ZP500GM30002
Graphics: AMD Radeon RX 56/64 8GB (1630/945MHz)
Audio: AMD Vega 10 HDMI Audio
Monitor: ASUS MG28U
Network: Realtek Device 2600 + Realtek Killer E3000 2.5GbE + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 5.15.0-22-generic (x86_64)
Desktop: GNOME Shell 41.3
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 21.3.5 (LLVM 12.0.1) / 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.42) (differs between runs)
Vulkan: 1.2.195 / 1.3.204 (differs between runs)
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details - Transparent Huge Pages: madvise
Compiler Details - A: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XWYfV6/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XWYfV6/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Compiler Details - B, C, D: identical configure flags to A, except the --enable-offload-targets paths point at /build/gcc-11-gBFGDP/gcc-11-11.2.0/ rather than /build/gcc-11-XWYfV6/gcc-11-11.2.0/.
Processor Details - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8701021
Graphics Details - BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: 113-D0500100-102
Java Details - OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Details - A: Python 3.10.2 - B, C, D: Python 3.10.4
Security Details - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
[Result overview grid omitted: the flattened summary table listing every test and its A/B/C/D values duplicates the per-test results presented below, covering Stress-NG, GravityMark, SVT-AV1, SVT-HEVC, SVT-VP9, x264, simdjson, perf-bench, ONNX Runtime, Nettle, Etcpak, GROMACS, Java JMH, InfluxDB, TensorFlow Lite, Renaissance, oneDNN, Glibc Benchmarks, libavif avifenc, and WebP2 Image Encode.]
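As a worked example of how the spreads in the per-test results below can be quantified, the illustrative Python snippet that follows (not part of the exported data) normalizes the Stress-NG NUMA values against the best run.

# Illustrative sketch using values copied from the Stress-NG NUMA result below
# (Bogo Ops/s, higher is better); not part of the exported data.
numa_bogo_ops = {"A": 211.02, "B": 328.20, "C": 326.62, "D": 331.54}


def relative_to_best(results, higher_is_better=True):
    """Express each run's result as a percentage of the best-performing run."""
    best = max(results.values()) if higher_is_better else min(results.values())
    return {run: 100.0 * value / best for run, value in results.items()}


for run, pct in sorted(relative_to_best(numa_bogo_ops).items()):
    print(f"{run}: {pct:.1f}% of the best run")
# Run A lands at roughly 64% of run D here, i.e. about a 36% shortfall on NUMA.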
Stress-NG 0.14 - Bogo Ops/s, More Is Better. Compiler notes: (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
Test: MMAP - B: 300.58, C: 300.57, D: 298.14, A: 296.27
Test: NUMA - D: 331.54, B: 328.20, C: 326.62, A: 211.02
Test: Futex - D: 3025044.82, B: 2871422.77, C: 2787436.17, A: 2786611.95
Test: MEMFD - B: 871.88, C: 869.61, D: 867.30, A: 850.42
Test: Atomic - C: 580078.48, B: 577614.81, D: 576825.66, A: 576128.96
Test: Crypto - C: 22493.23, B: 22488.17, D: 22466.90, A: 21561.58
Test: Malloc - B: 16996416.51, C: 16871709.82, D: 16827898.02, A: 16082432.37
Test: Forking - C: 54932.41, D: 54797.96, B: 54543.49, A: 52768.88
Test: IO_uring - A: 35619.31, D: 35188.01, B: 34951.82, C: 34815.95
Test: SENDFILE - C: 246261.31, A: 219949.89, B: 216302.59, D: 192604.94
Test: CPU Cache - C: 158.82, A: 156.96, D: 156.92, B: 152.09
Test: CPU Stress - B: 30044.54, C: 29333.09, A: 29249.94, D: 29002.22
Test: Semaphores - C: 2485080.45, A: 2483545.87, D: 2481553.24, B: 2481083.26
Test: Matrix Math - C: 61156.12, D: 61125.26, B: 60985.00, A: 58365.05
Test: Vector Math - C: 90779.38, B: 90749.35, D: 90727.65, A: 86846.17
Test: Memory Copying - D: 5048.06, C: 5039.59, B: 5038.57, A: 4844.13
Test: Socket Activity - A: 9558.26, D: 9193.24, B: 9191.93, C: 9185.34
Test: Context Switching - C: 4842631.56, B: 4833096.84, D: 4808874.54, A: 4520622.35
Test: Glibc C String Functions - D: 2087104.71, C: 2080090.43, B: 2026796.06, A: 1991624.53
Test: Glibc Qsort Data Sorting - C: 197.10, B: 196.50, D: 195.83, A: 188.30
Test: System V Message Passing - B: 8003229.07, C: 7995516.42, D: 7971710.38, A: 7796050.70
GravityMark 1.53 - Frames Per Second, More Is Better
Resolution: 1920 x 1080 - Renderer: OpenGL - A: 107.2, C: 106.5, B: 106.5, D: 106.3
Resolution: 1920 x 1080 - Renderer: Vulkan - C: 105.6, D: 105.5, B: 105.3, A: 105.3
Resolution: 2560 x 1440 - Renderer: OpenGL - C: 91.7, D: 91.2, A: 91.1, B: 90.4
Resolution: 2560 x 1440 - Renderer: Vulkan - D: 90.7, A: 90.6, C: 90.5, B: 90.4
Resolution: 3840 x 2160 - Renderer: OpenGL - D: 65.7, A: 65.7, B: 65.3, C: 65.2
Resolution: 3840 x 2160 - Renderer: Vulkan - D: 65.0, A: 65.0, B: 64.5, C: 64.4
SVT-AV1 1.0 - Frames Per Second, More Is Better. Compiler notes: (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie
Encoder Mode: Preset 4 - Input: Bosphorus 4K - B: 2.563, D: 2.561, C: 2.556, A: 2.467
Encoder Mode: Preset 8 - Input: Bosphorus 4K - D: 36.59, B: 36.48, C: 36.35, A: 30.78
Encoder Mode: Preset 10 - Input: Bosphorus 4K - D: 79.79, C: 78.41, B: 76.45, A: 72.34
Encoder Mode: Preset 12 - Input: Bosphorus 4K - D: 99.57, C: 98.22, B: 98.21, A: 95.65
Encoder Mode: Preset 4 - Input: Bosphorus 1080p - B: 6.440, C: 6.435, D: 6.428, A: 6.229
Encoder Mode: Preset 8 - Input: Bosphorus 1080p - D: 119.13, B: 117.45, C: 117.33, A: 116.76
Encoder Mode: Preset 10 - Input: Bosphorus 1080p - B: 232.83, C: 230.17, D: 226.79, A: 223.22
Encoder Mode: Preset 12 - Input: Bosphorus 1080p - D: 372.28, A: 352.21, C: 351.99, B: 350.93
SVT-HEVC 1.5.0 - Frames Per Second, More Is Better. Compiler notes: (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
Tuning: 1 - Input: Bosphorus 4K - C: 2.96, D: 2.95, B: 2.95, A: 2.82
Tuning: 7 - Input: Bosphorus 4K - D: 50.62, C: 50.44, B: 50.39, A: 42.20
Tuning: 10 - Input: Bosphorus 4K - C: 79.86, D: 79.60, B: 78.87, A: 75.56
Tuning: 1 - Input: Bosphorus 1080p - C: 11.63, B: 11.63, D: 11.61, A: 11.00
Tuning: 7 - Input: Bosphorus 1080p - B: 159.49, D: 158.69, C: 158.60, A: 151.75
Tuning: 10 - Input: Bosphorus 1080p - B: 290.42, D: 288.05, C: 287.22, A: 274.85
SVT-VP9 0.3 - Frames Per Second, More Is Better. Compiler notes: (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
Tuning: VMAF Optimized - Input: Bosphorus 4K - B: 45.21, D: 45.20, C: 45.12, A: 44.54
Tuning: VMAF Optimized - Input: Bosphorus 1080p - D: 185.11, B: 184.60, C: 183.53, A: 179.51
Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K - B: 46.88, D: 46.83, C: 46.52, A: 43.96
Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p - B: 187.87, D: 187.34, C: 187.19, A: 180.00
Tuning: Visual Quality Optimized - Input: Bosphorus 4K - D: 49.86, C: 49.84, B: 49.51, A: 45.49
Tuning: Visual Quality Optimized - Input: Bosphorus 1080p - D: 180.36, B: 176.38, C: 176.32, A: 168.00
x264 2022-02-22 - Frames Per Second, More Is Better. Compiler notes: (CC) gcc options: -ldl -m64 -lm -lpthread -O3 -flto
Video Input: Bosphorus 4K - B: 35.64, A: 35.58, C: 35.36, D: 34.85
Video Input: Bosphorus 1080p - B: 141.25, D: 141.00, C: 140.99, A: 137.76
simdjson 2.0 - GB/s, More Is Better. Compiler notes: (CXX) g++ options: -O3
Throughput Test: Kostya - D: 2.98, A: 2.93, B: 2.92, C: 2.89
Throughput Test: TopTweet - A: 4.68, D: 4.66, C: 4.61, B: 4.61
Throughput Test: LargeRandom - D: 1.05, C: 1.04, B: 1.04, A: 1.01
Throughput Test: PartialTweets - D: 4.00, C: 4.00, B: 3.99, A: 3.83
Throughput Test: DistinctUserID - D: 4.72, B: 4.72, A: 4.59, C: 4.55
perf-bench - GB/sec, More Is Better. Compiler notes: (CC) gcc options: -pthread -shared -Xlinker -O6 -ggdb3 -funwind-tables -std=gnu99 -lnuma
Benchmark: Memcpy 1MB - B: 16.24, D: 14.85, C: 14.77, A: 14.62
Benchmark: Memset 1MB - C: 73.96, D: 72.41, B: 72.01, A: 69.02
ONNX Runtime 1.11 - Inferences Per Minute, More Is Better. Compiler notes: (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
Model: GPT-2 - Device: CPU - Executor: Parallel - C: 4859, B: 4858, D: 4795, A: 4580
Model: GPT-2 - Device: CPU - Executor: Standard - C: 6445, B: 6058, D: 5962, A: 5775
Model: yolov4 - Device: CPU - Executor: Parallel - D: 257, C: 257, B: 256, A: 244
Model: yolov4 - Device: CPU - Executor: Standard - D: 453, A: 365, C: 337, B: 323
Model: bertsquad-12 - Device: CPU - Executor: Parallel - D: 451, C: 449, B: 449, A: 432
Model: bertsquad-12 - Device: CPU - Executor: Standard - B: 571, D: 570, C: 567, A: 547
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel - D: 68, C: 68, B: 68, A: 60
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard - D: 88, B: 78, C: 60, A: 59
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel - D: 1165, B: 1165, C: 1163, A: 1117
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard - B: 1660, C: 1183, D: 1154, A: 1119
Model: super-resolution-10 - Device: CPU - Executor: Parallel - D: 4730, B: 4619, C: 4586, A: 4541
Model: super-resolution-10 - Device: CPU - Executor: Standard - C: 4511, A: 4435, D: 4423, B: 4324
Nettle 3.8 - Mbyte/s, More Is Better. Compiler notes: (CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto
Test: aes256 - C: 6541.11 (MIN: 4692.01 / MAX: 10192.6), D: 6515.50 (MIN: 4701.43 / MAX: 10170.91), B: 6131.07 (MIN: 4421.4 / MAX: 9534.58), A: 6059.02 (MIN: 4394.53 / MAX: 9359.29)
Test: chacha - B: 1185.58 (MIN: 574.66 / MAX: 3420.69), C: 1151.97 (MIN: 558.73 / MAX: 3316.95), D: 1095.00 (MIN: 529.55 / MAX: 3168.56), A: 1068.17 (MIN: 514.62 / MAX: 3098.54)
Test: sha512 - B: 658.67, C: 637.71, D: 637.39, A: 637.04
Test: poly1305-aes - D: 3234.76, C: 3217.70, A: 3216.07, B: 3212.21
Etcpak 1.0 - Mpx/s, More Is Better. Compiler notes: (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
Benchmark: Multi-Threaded - Configuration: DXT1 - C: 3169.25, D: 3162.38, B: 3156.58, A: 3072.05
Benchmark: Multi-Threaded - Configuration: ETC2 - D: 3154.65, B: 3146.37, C: 3144.30, A: 3061.54
Benchmark: Single-Threaded - Configuration: DXT1 - C: 249.28, D: 238.63, B: 237.86, A: 236.90
Benchmark: Single-Threaded - Configuration: ETC2 - D: 247.11, C: 246.81, B: 246.14, A: 232.17
GROMACS 2022.1 - Ns Per Day, More Is Better. Compiler notes: (CXX) g++ options: -O3
Implementation: MPI CPU - Input: water_GMX50_bare - D: 1.155, C: 1.140, B: 1.138, A: 1.047
Java JMH - Ops/s, More Is Better
Throughput - C: 24266478961.70, B: 24255951266.37, D: 24234601965.18, A: 23214280399.04
perf-bench - ops/sec, More Is Better. Compiler notes: (CC) gcc options: -pthread -shared -Xlinker -O6 -ggdb3 -funwind-tables -std=gnu99 -lnuma
Benchmark: Epoll Wait - B: 45429, D: 45400, C: 45264, A: 42469
Benchmark: Futex Hash - C: 4910773, D: 4910651, B: 4904887, A: 4693851
Benchmark: Sched Pipe - C: 351971, D: 344956, B: 340408, A: 326839
Benchmark: Futex Lock-Pi - B: 669, C: 663, D: 649, A: 542
Benchmark: Syscall Basic - B: 19410883, D: 19384879, A: 17962305, C: 17850063
InfluxDB 1.8.2 - val/sec, More Is Better
Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - A: 909621.0, B: 852735.1, C: 847482.0, D: 816036.0
Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - B: 1240324.6, C: 1237035.8, A: 1236274.6, D: 1195550.6
Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - A: 1293571.6, B: 1291849.9, C: 1291195.0, D: 1240380.9
TensorFlow Lite 2022-05-18 - Microseconds, Fewer Is Better
Model: SqueezeNet - D: 2888.78, C: 2894.31, B: 2898.87, A: 206917.00
Model: Inception V4 - B: 42001.0, D: 42166.7, C: 42840.4, A: 2445920.0
Model: NASNet Mobile - D: 15326.0, B: 15355.6, C: 15359.5, A: 2271630.0
Model: Mobilenet Float - D: 2401.33, B: 2411.06, C: 2435.60, A: 240371.00
Model: Mobilenet Quant - B: 3096.7, D: 3113.9, C: 3146.6, A: 44820.6
Model: Inception ResNet V2 - D: 41956.7, B: 42072.8, C: 42165.8, A: 2330410.0
Renaissance 0.14 - ms, Fewer Is Better
Test: Scala Dotty - C: 729.0 (MIN: 618.39 / MAX: 1281.64), A: 778.7 (MIN: 646.98 / MAX: 1439.03), D: 818.3 (MIN: 618.81 / MAX: 1331.73), B: 833.9 (MIN: 636.78 / MAX: 1324.63)
Test: Random Forest - B: 721.2 (MIN: 609.48 / MAX: 834.11), C: 730.9 (MIN: 607.63 / MAX: 901.62), A: 743.4 (MIN: 665.12 / MAX: 887.13), D: 751.7 (MIN: 622.63 / MAX: 907.43)
Test: ALS Movie Lens - B: 12616.7 (MIN: 12616.69 / MAX: 13750.37), D: 12638.5 (MAX: 13822.5), C: 12677.3 (MIN: 12677.26 / MAX: 13923.11), A: 12757.9 (MIN: 12757.86 / MAX: 14048.24)
Test: Apache Spark ALS - B: 3133.9 (MIN: 3035.49 / MAX: 3252.39), C: 3137.4 (MIN: 3005.95 / MAX: 3263.19), D: 3162.4 (MIN: 3046.43 / MAX: 3270.26), A: 3211.3 (MIN: 3059.26 / MAX: 3335.58)
Test: Apache Spark Bayes - B: 2104.4 (MIN: 1617.62 / MAX: 2358.18), D: 2113.8 (MIN: 1613.04), A: 2126.2 (MIN: 1633.81 / MAX: 2126.21), C: 2149.7 (MIN: 1657.38)
Test: Savina Reactors.IO - C: 7858.7 (MAX: 11840.34), B: 7969.9 (MIN: 7969.86 / MAX: 11204.55), A: 8321.1 (MAX: 12754.85), D: 8459.7 (MAX: 12201.65)
Test: Apache Spark PageRank - C: 3116.0 (MIN: 2658.74 / MAX: 3170.73), D: 3117.0 (MIN: 2793.69 / MAX: 3226.06), B: 3153.2 (MIN: 2764.94 / MAX: 3198.77), A: 3174.2 (MIN: 2810.59 / MAX: 3285.06)
Test: Finagle HTTP Requests - C: 3529.2 (MIN: 3302.49 / MAX: 3811.65), B: 3571.9 (MIN: 3345.25 / MAX: 3795.91), D: 3587.1 (MIN: 3364.31 / MAX: 3763.07), A: 3814.1 (MIN: 3534.91 / MAX: 3897.61)
Test: In-Memory Database Shootout - A: 3614.9 (MIN: 3226.39 / MAX: 3786.41), B: 3646.0 (MIN: 3383.09 / MAX: 4041.23), D: 3732.7 (MIN: 3478.56 / MAX: 4080.38), C: 3824.8 (MIN: 3519.07 / MAX: 4246.27)
Test: Akka Unbalanced Cobwebbed Tree - A: 12754.4 (MIN: 10167.69 / MAX: 12754.41), C: 12759.6 (MIN: 10128.47), B: 12943.4 (MIN: 10264.93 / MAX: 12943.41), D: 12947.6 (MIN: 10307.6 / MAX: 12947.63)
Test: Genetic Algorithm Using Jenetics + Futures - A: 2666.5 (MIN: 2472.42 / MAX: 2807.84), C: 2871.2 (MIN: 2838.52 / MAX: 2898.82), B: 2888.2 (MIN: 2858.54 / MAX: 2927.17), D: 2903.9 (MIN: 2862.13 / MAX: 2943.57)
oneDNN 2.6 - ms, Fewer Is Better. Compiler notes: (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU - B: 4.70466, D: 4.70911, C: 4.71020, A: 73.17950 - MIN: 4.61
Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU - D: 11.38 (MIN: 11.24), C: 11.98 (MIN: 11.88), B: 11.98 (MIN: 11.88), A: 73.36 (MIN: 49.92)
Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU - C: 1.79634, B: 1.80376, D: 1.81319, A: 59.13020 - MIN: 1.76
Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU - D: 0.924820, C: 0.930888, B: 0.933945, A: 36.708900 - MIN: 0.87
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU - D: 22.50 (MIN: 21.97), C: 22.59 (MIN: 22.18), B: 22.59 (MIN: 22.13), A: 36.96 (MIN: 22.25)
Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU - B: 5.65439, D: 7.65826, C: 8.23900, A: 92.13430 - MIN: 5.12
Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU - B: 5.25702 (MIN: 5.18), C: 5.26468 (MIN: 5.18), D: 5.27803 (MIN: 5.19), A: 23.21980 (MIN: 5.62)
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU - D: 24.85 (MIN: 24.57), B: 24.91 (MIN: 24.56), C: 24.96 (MIN: 24.62), A: 41.19 (MIN: 27.53)
Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU - D: 2.43000, B: 2.44129, C: 2.44352, A: 47.94820 - MIN: 12.07
Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU - C: 3.38355 (MIN: 3.26), D: 3.40609 (MIN: 3.29), B: 3.40615 (MIN: 3.27), A: 12.03100 (MIN: 3.34)
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU - B: 4089.48, D: 4126.65, C: 4133.63, A: 32226.30 - MIN: 17963.3
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - C: 2458.19, D: 2458.93, B: 2485.34, A: 25968.50 - MIN: 12631
Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU - C: 4092.33, B: 4101.51, D: 4110.74, A: 33601.70 - MIN: 21286.5
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU - C: 2485.86, D: 2495.97, B: 2496.51, A: 30995.10 - MIN: 14054.2
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU - C: 1.16856, B: 1.24528, D: 1.29107, A: 30.02690 - MIN: 2.16
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU - D: 4100.65, B: 4105.68, C: 4117.24, A: 35077.90 - MIN: 21314.5
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU - D: 2473.22, B: 2483.66, C: 2492.88, A: 28223.80 - MIN: 12718
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU - B: 1.85271, D: 1.99646, C: 2.25790, A: 36.17280 - MIN: 2.27
Glibc Benchmarks - ns, Fewer Is Better. Compiler notes: (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s
Benchmark: cos - B: 68.60, D: 68.63, C: 70.38, A: 73.92
Benchmark: exp - C: 16.15, A: 16.19, D: 16.38, B: 16.44
Benchmark: ffs - B: 5.68026, D: 5.69216, C: 6.08636, A: 6.09947
Benchmark: sin - D: 60.51, B: 62.17, C: 64.73, A: 65.33
Benchmark: log2 - B: 19.46, D: 19.50, C: 20.73, A: 21.53
Benchmark: modf - B: 6.53862, D: 6.54065, C: 6.71077, A: 7.09614
Benchmark: sinh - B: 24.82, C: 26.85, D: 26.98, A: 26.98
Benchmark: sqrt - B: 7.27802, D: 7.28112, C: 7.46367, A: 7.86042
Benchmark: tanh - B: 35.61, C: 38.06, D: 38.33, A: 38.37
Benchmark: asinh - D: 29.43, C: 29.44, A: 30.64, B: 31.75
Benchmark: atanh - C: 35.84, D: 35.87, B: 37.46, A: 38.34
Benchmark: ffsll - D: 6.46463, B: 6.46721, C: 6.93142, A: 7.04916
Benchmark: sincos - C: 41.74, B: 41.74, D: 42.89, A: 45.98
Benchmark: pthread_once - C: 6.02658, B: 6.05998, D: 6.09718, A: 6.23097
libavif avifenc 0.10 - Seconds, Fewer Is Better. Compiler notes: (CXX) g++ options: -O3 -fPIC -lm
Encoder Speed: 0 - D: 128.40, C: 128.78, B: 129.53, A: 134.16
Encoder Speed: 2 - D: 62.15, B: 62.50, C: 63.47, A: 66.03
Encoder Speed: 6 - D: 7.619, B: 7.654, C: 7.718, A: 8.089
Encoder Speed: 6, Lossless - C: 10.76, B: 10.88, D: 10.98, A: 11.43
Encoder Speed: 10, Lossless - C: 5.601, D: 5.644, B: 5.648, A: 5.950
WebP2 Image Encode 20220422 - Seconds, Fewer Is Better. Compiler notes: (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
Encode Settings: Default - B: 2.886, D: 2.896, A: 2.972, C: 3.003
Encode Settings: Quality 75, Compression Effort 7 - B: 146.32, D: 147.24, C: 148.84, A: 151.78
Encode Settings: Quality 95, Compression Effort 7 - C: 308.71, B: 309.66, D: 310.50, A: 323.97
Encode Settings: Quality 100, Compression Effort 5 - B: 4.110, C: 4.140, D: 4.140, A: 4.242
Encode Settings: Quality 100, Lossless Compression - D: 721.40, B: 721.68, C: 722.89, A: 742.93
Phoronix Test Suite v10.8.5