june: AMD Ryzen 9 3900XT 12-Core testing with an MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2206054-PTS-JUNE759444&rdt&gru
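The same comparison can be re-run locally with the Phoronix Test Suite, whose benchmark subcommand accepts an OpenBenchmarking.org result ID and runs the same tests against the published numbers. A minimal sketch (not part of the original export), assuming the phoronix-test-suite CLI is installed and on the PATH, invoked here from Python:

# Sketch: reproduce this comparison with the Phoronix Test Suite CLI.
# Assumes `phoronix-test-suite` is installed and on PATH.
import subprocess

RESULT_ID = "2206054-PTS-JUNE759444"  # result ID from the URL above
subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)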
System Details (runs A, B, C, D; identical hardware, software differences noted per run):
  Processor: AMD Ryzen 9 3900XT 12-Core @ 3.80GHz (12 Cores / 24 Threads)
  Motherboard: MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS)
  Chipset: AMD Starship/Matisse
  Memory: 16GB
  Disk: 500GB Seagate FireCuda 520 SSD ZP500GM30002
  Graphics: AMD Radeon RX 56/64 8GB (1630/945MHz)
  Audio: AMD Vega 10 HDMI Audio
  Monitor: ASUS MG28U
  Network: Realtek Device 2600 + Realtek Killer E3000 2.5GbE + Intel Wi-Fi 6 AX200
  OS: Ubuntu 22.04
  Kernel: 5.15.0-22-generic (x86_64)
  Desktop: GNOME Shell 41.3
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 21.3.5 (LLVM 12.0.1) [A]; 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.42) [B, C, D]
  Vulkan: 1.2.195 [A]; 1.3.204 [B, C, D]
  Compiler: GCC 11.2.0
  File-System: ext4
  Screen Resolution: 3840x2160

Kernel Details - Transparent Huge Pages: madvise
Compiler Details - GCC configure flags are identical across all four runs apart from the Debian build directory (gcc-11-XWYfV6 for A, gcc-11-gBFGDP for B, C, D): --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XWYfV6/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XWYfV6/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8701021
Graphics Details - BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: 113-D0500100-102
Java Details - OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)
Python Details - A: Python 3.10.2 - B: Python 3.10.4 - C: Python 3.10.4 - D: Python 3.10.4
Security Details - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Full AMD retpoline IBPB: conditional STIBP: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Results Summary - tests run on configurations A, B, C, D (full per-test results follow below): Stress-NG 0.14, GravityMark 1.53, SVT-AV1 1.0, SVT-HEVC 1.5.0, SVT-VP9 0.3, x264 (2022-02-22), simdjson 2.0, perf-bench, ONNX Runtime 1.11, Nettle 3.8, Etcpak 1.0, GROMACS 2022.1, Java JMH, InfluxDB 1.8.2, TensorFlow Lite (2022-05-18), Renaissance 0.14, oneDNN 2.6, Glibc Benchmarks, libavif avifenc 0.10, and WebP2 Image Encode (20220422).
Stress-NG 0.14 - Bogo Ops/s, More Is Better (A / B / C / D):
  Test: MMAP - 296.27 / 300.58 / 300.57 / 298.14
  Test: NUMA - 211.02 / 328.20 / 326.62 / 331.54
  Test: Futex - 2786611.95 / 2871422.77 / 2787436.17 / 3025044.82
  Test: MEMFD - 850.42 / 871.88 / 869.61 / 867.30
  Test: Atomic - 576128.96 / 577614.81 / 580078.48 / 576825.66
  Test: Crypto - 21561.58 / 22488.17 / 22493.23 / 22466.90
  Test: Malloc - 16082432.37 / 16996416.51 / 16871709.82 / 16827898.02
  Test: Forking - 52768.88 / 54543.49 / 54932.41 / 54797.96
  Test: IO_uring - 35619.31 / 34951.82 / 34815.95 / 35188.01
  Test: SENDFILE - 219949.89 / 216302.59 / 246261.31 / 192604.94
  Test: CPU Cache - 156.96 / 152.09 / 158.82 / 156.92
  Test: CPU Stress - 29249.94 / 30044.54 / 29333.09 / 29002.22
  Test: Semaphores - 2483545.87 / 2481083.26 / 2485080.45 / 2481553.24
  Test: Matrix Math - 58365.05 / 60985.00 / 61156.12 / 61125.26
  Test: Vector Math - 86846.17 / 90749.35 / 90779.38 / 90727.65
  Test: Memory Copying - 4844.13 / 5038.57 / 5039.59 / 5048.06
  Test: Socket Activity - 9558.26 / 9191.93 / 9185.34 / 9193.24
  Test: Context Switching - 4520622.35 / 4833096.84 / 4842631.56 / 4808874.54
  Test: Glibc C String Functions - 1991624.53 / 2026796.06 / 2080090.43 / 2087104.71
  Test: Glibc Qsort Data Sorting - 188.30 / 196.50 / 197.10 / 195.83
  Test: System V Message Passing - 7796050.70 / 8003229.07 / 7995516.42 / 7971710.38
  1. (CC) gcc options: -O2 -std=gnu99 -lm -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lz -pthread
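A quick way to read a "more is better" table like the Stress-NG results above is as a percentage change relative to run A. A minimal Python sketch (not part of the original export) of that calculation, using the NUMA and SENDFILE rows from the table above as sample input:

# Percentage change of runs B, C, D versus baseline run A for
# "more is better" results; sample values come from the Stress-NG table above.
results = {
    "NUMA":     {"A": 211.02,    "B": 328.20,    "C": 326.62,    "D": 331.54},
    "SENDFILE": {"A": 219949.89, "B": 216302.59, "C": 246261.31, "D": 192604.94},
}

for test, runs in results.items():
    baseline = runs["A"]
    deltas = ", ".join(f"{run}: {(value - baseline) / baseline * 100.0:+.1f}%"
                       for run, value in runs.items() if run != "A")
    print(f"{test}: {deltas}")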
GravityMark 1.53 - Frames Per Second, More Is Better (A / B / C / D):
  Resolution: 1920 x 1080 - Renderer: OpenGL - 107.2 / 106.5 / 106.5 / 106.3
  Resolution: 1920 x 1080 - Renderer: Vulkan - 105.3 / 105.3 / 105.6 / 105.5
  Resolution: 2560 x 1440 - Renderer: OpenGL - 91.1 / 90.4 / 91.7 / 91.2
  Resolution: 2560 x 1440 - Renderer: Vulkan - 90.6 / 90.4 / 90.5 / 90.7
  Resolution: 3840 x 2160 - Renderer: OpenGL - 65.7 / 65.3 / 65.2 / 65.7
  Resolution: 3840 x 2160 - Renderer: Vulkan - 65.0 / 64.5 / 64.4 / 65.0
SVT-AV1 1.0 - Frames Per Second, More Is Better (A / B / C / D):
  Encoder Mode: Preset 4 - Input: Bosphorus 4K - 2.467 / 2.563 / 2.556 / 2.561
  Encoder Mode: Preset 8 - Input: Bosphorus 4K - 30.78 / 36.48 / 36.35 / 36.59
  Encoder Mode: Preset 10 - Input: Bosphorus 4K - 72.34 / 76.45 / 78.41 / 79.79
  Encoder Mode: Preset 12 - Input: Bosphorus 4K - 95.65 / 98.21 / 98.22 / 99.57
  Encoder Mode: Preset 4 - Input: Bosphorus 1080p - 6.229 / 6.440 / 6.435 / 6.428
  Encoder Mode: Preset 8 - Input: Bosphorus 1080p - 116.76 / 117.45 / 117.33 / 119.13
  Encoder Mode: Preset 10 - Input: Bosphorus 1080p - 223.22 / 232.83 / 230.17 / 226.79
  Encoder Mode: Preset 12 - Input: Bosphorus 1080p - 352.21 / 350.93 / 351.99 / 372.28
  1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq -pie
SVT-HEVC 1.5.0 - Frames Per Second, More Is Better (A / B / C / D):
  Tuning: 1 - Input: Bosphorus 4K - 2.82 / 2.95 / 2.96 / 2.95
  Tuning: 7 - Input: Bosphorus 4K - 42.20 / 50.39 / 50.44 / 50.62
  Tuning: 10 - Input: Bosphorus 4K - 75.56 / 78.87 / 79.86 / 79.60
  Tuning: 1 - Input: Bosphorus 1080p - 11.00 / 11.63 / 11.63 / 11.61
  Tuning: 7 - Input: Bosphorus 1080p - 151.75 / 159.49 / 158.60 / 158.69
  Tuning: 10 - Input: Bosphorus 1080p - 274.85 / 290.42 / 287.22 / 288.05
  1. (CC) gcc options: -fPIE -fPIC -O3 -O2 -pie -rdynamic -lpthread -lrt
SVT-VP9 0.3 - Frames Per Second, More Is Better (A / B / C / D):
  Tuning: VMAF Optimized - Input: Bosphorus 4K - 44.54 / 45.21 / 45.12 / 45.20
  Tuning: VMAF Optimized - Input: Bosphorus 1080p - 179.51 / 184.60 / 183.53 / 185.11
  Tuning: PSNR/SSIM Optimized - Input: Bosphorus 4K - 43.96 / 46.88 / 46.52 / 46.83
  Tuning: PSNR/SSIM Optimized - Input: Bosphorus 1080p - 180.00 / 187.87 / 187.19 / 187.34
  Tuning: Visual Quality Optimized - Input: Bosphorus 4K - 45.49 / 49.51 / 49.84 / 49.86
  Tuning: Visual Quality Optimized - Input: Bosphorus 1080p - 168.00 / 176.38 / 176.32 / 180.36
  1. (CC) gcc options: -O3 -fcommon -fPIE -fPIC -fvisibility=hidden -pie -rdynamic -lpthread -lrt -lm
x264 2022-02-22 - Frames Per Second, More Is Better (A / B / C / D):
  Video Input: Bosphorus 4K - 35.58 / 35.64 / 35.36 / 34.85
  Video Input: Bosphorus 1080p - 137.76 / 141.25 / 140.99 / 141.00
  1. (CC) gcc options: -ldl -m64 -lm -lpthread -O3 -flto
simdjson 2.0 - GB/s, More Is Better (A / B / C / D):
  Throughput Test: Kostya - 2.93 / 2.92 / 2.89 / 2.98
  Throughput Test: TopTweet - 4.68 / 4.61 / 4.61 / 4.66
  Throughput Test: LargeRandom - 1.01 / 1.04 / 1.04 / 1.05
  Throughput Test: PartialTweets - 3.83 / 3.99 / 4.00 / 4.00
  Throughput Test: DistinctUserID - 4.59 / 4.72 / 4.55 / 4.72
  1. (CXX) g++ options: -O3
perf-bench - GB/sec, More Is Better (A / B / C / D):
  Benchmark: Memcpy 1MB - 14.62 / 16.24 / 14.77 / 14.85
  Benchmark: Memset 1MB - 69.02 / 72.01 / 73.96 / 72.41
  1. (CC) gcc options: -pthread -shared -Xlinker -O6 -ggdb3 -funwind-tables -std=gnu99 -lnuma
ONNX Runtime 1.11 - Inferences Per Minute, More Is Better (A / B / C / D):
  Model: GPT-2 - Device: CPU - Executor: Parallel - 4580 / 4858 / 4859 / 4795
  Model: GPT-2 - Device: CPU - Executor: Standard - 5775 / 6058 / 6445 / 5962
  Model: yolov4 - Device: CPU - Executor: Parallel - 244 / 256 / 257 / 257
  Model: yolov4 - Device: CPU - Executor: Standard - 365 / 323 / 337 / 453
  Model: bertsquad-12 - Device: CPU - Executor: Parallel - 432 / 449 / 449 / 451
  Model: bertsquad-12 - Device: CPU - Executor: Standard - 547 / 571 / 567 / 570
  Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel - 60 / 68 / 68 / 68
  Model: fcn-resnet101-11 - Device: CPU - Executor: Standard - 59 / 78 / 60 / 88
  Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel - 1117 / 1165 / 1163 / 1165
  Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard - 1119 / 1660 / 1183 / 1154
  Model: super-resolution-10 - Device: CPU - Executor: Parallel - 4541 / 4619 / 4586 / 4730
  Model: super-resolution-10 - Device: CPU - Executor: Standard - 4435 / 4324 / 4511 / 4423
  1. (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
Nettle 3.8 - Mbyte/s, More Is Better (A / B / C / D):
  Test: aes256 - 6059.02 / 6131.07 / 6541.11 / 6515.50 (MIN/MAX - A: 4394.53/9359.29, B: 4421.4/9534.58, C: 4692.01/10192.6, D: 4701.43/10170.91)
  Test: chacha - 1068.17 / 1185.58 / 1151.97 / 1095.00 (MIN/MAX - A: 514.62/3098.54, B: 574.66/3420.69, C: 558.73/3316.95, D: 529.55/3168.56)
  Test: sha512 - 637.04 / 658.67 / 637.71 / 637.39
  Test: poly1305-aes - 3216.07 / 3212.21 / 3217.70 / 3234.76
  1. (CC) gcc options: -O2 -ggdb3 -lnettle -lm -lcrypto
Etcpak 1.0 - Mpx/s, More Is Better (A / B / C / D):
  Benchmark: Multi-Threaded - Configuration: DXT1 - 3072.05 / 3156.58 / 3169.25 / 3162.38
  Benchmark: Multi-Threaded - Configuration: ETC2 - 3061.54 / 3146.37 / 3144.30 / 3154.65
  Benchmark: Single-Threaded - Configuration: DXT1 - 236.90 / 237.86 / 249.28 / 238.63
  Benchmark: Single-Threaded - Configuration: ETC2 - 232.17 / 246.14 / 246.81 / 247.11
  1. (CXX) g++ options: -O3 -march=native -std=c++11 -lpthread
GROMACS 2022.1 - Ns Per Day, More Is Better (A / B / C / D):
  Implementation: MPI CPU - Input: water_GMX50_bare - 1.047 / 1.138 / 1.140 / 1.155
  1. (CXX) g++ options: -O3
Java JMH - Ops/s, More Is Better (A / B / C / D):
  Throughput - 23214280399.04 / 24255951266.37 / 24266478961.70 / 24234601965.18
perf-bench - ops/sec, More Is Better (A / B / C / D):
  Benchmark: Epoll Wait - 42469 / 45429 / 45264 / 45400
  Benchmark: Futex Hash - 4693851 / 4904887 / 4910773 / 4910651
  Benchmark: Sched Pipe - 326839 / 340408 / 351971 / 344956
  Benchmark: Futex Lock-Pi - 542 / 669 / 663 / 649
  Benchmark: Syscall Basic - 17962305 / 19410883 / 17850063 / 19384879
  1. (CC) gcc options: -pthread -shared -Xlinker -O6 -ggdb3 -funwind-tables -std=gnu99 -lnuma
InfluxDB 1.8.2 - val/sec, More Is Better (A / B / C / D):
  Concurrent Streams: 4 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - 909621.0 / 852735.1 / 847482.0 / 816036.0
  Concurrent Streams: 64 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - 1236274.6 / 1240324.6 / 1237035.8 / 1195550.6
  Concurrent Streams: 1024 - Batch Size: 10000 - Tags: 2,5000,1 - Points Per Series: 10000 - 1293571.6 / 1291849.9 / 1291195.0 / 1240380.9
TensorFlow Lite 2022-05-18 - Microseconds, Fewer Is Better (A / B / C / D):
  Model: SqueezeNet - 206917.00 / 2898.87 / 2894.31 / 2888.78
  Model: Inception V4 - 2445920.0 / 42001.0 / 42840.4 / 42166.7
  Model: NASNet Mobile - 2271630.0 / 15355.6 / 15359.5 / 15326.0
  Model: Mobilenet Float - 240371.00 / 2411.06 / 2435.60 / 2401.33
  Model: Mobilenet Quant - 44820.6 / 3096.7 / 3146.6 / 3113.9
  Model: Inception ResNet V2 - 2330410.0 / 42072.8 / 42165.8 / 41956.7
Renaissance 0.14 - ms, Fewer Is Better (A / B / C / D):
  Test: Scala Dotty - 778.7 / 833.9 / 729.0 / 818.3 (MIN/MAX - A: 646.98/1439.03, B: 636.78/1324.63, C: 618.39/1281.64, D: 618.81/1331.73)
  Test: Random Forest - 743.4 / 721.2 / 730.9 / 751.7 (MIN/MAX - A: 665.12/887.13, B: 609.48/834.11, C: 607.63/901.62, D: 622.63/907.43)
  Test: ALS Movie Lens - 12757.9 / 12616.7 / 12677.3 / 12638.5 (MIN/MAX - A: 12757.86/14048.24, B: 12616.69/13750.37, C: 12677.26/13923.11, D MAX: 13822.5)
  Test: Apache Spark ALS - 3211.3 / 3133.9 / 3137.4 / 3162.4 (MIN/MAX - A: 3059.26/3335.58, B: 3035.49/3252.39, C: 3005.95/3263.19, D: 3046.43/3270.26)
  Test: Apache Spark Bayes - 2126.2 / 2104.4 / 2149.7 / 2113.8 (MIN/MAX - A: 1633.81/2126.21, B: 1617.62/2358.18, C MIN: 1657.38, D MIN: 1613.04)
  Test: Savina Reactors.IO - 8321.1 / 7969.9 / 7858.7 / 8459.7 (A MAX: 12754.85, B MIN/MAX: 7969.86/11204.55, C MAX: 11840.34, D MAX: 12201.65)
  Test: Apache Spark PageRank - 3174.2 / 3153.2 / 3116.0 / 3117.0 (MIN/MAX - A: 2810.59/3285.06, B: 2764.94/3198.77, C: 2658.74/3170.73, D: 2793.69/3226.06)
  Test: Finagle HTTP Requests - 3814.1 / 3571.9 / 3529.2 / 3587.1 (MIN/MAX - A: 3534.91/3897.61, B: 3345.25/3795.91, C: 3302.49/3811.65, D: 3364.31/3763.07)
  Test: In-Memory Database Shootout - 3614.9 / 3646.0 / 3824.8 / 3732.7 (MIN/MAX - A: 3226.39/3786.41, B: 3383.09/4041.23, C: 3519.07/4246.27, D: 3478.56/4080.38)
  Test: Akka Unbalanced Cobwebbed Tree - 12754.4 / 12943.4 / 12759.6 / 12947.6 (MIN/MAX - A: 10167.69/12754.41, B: 10264.93/12943.41, C MIN: 10128.47, D: 10307.6/12947.63)
  Test: Genetic Algorithm Using Jenetics + Futures - 2666.5 / 2888.2 / 2871.2 / 2903.9 (MIN/MAX - A: 2472.42/2807.84, B: 2858.54/2927.17, C: 2838.52/2898.82, D: 2862.13/2943.57)
oneDNN 2.6 - ms, Fewer Is Better (A / B / C / D):
  Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU - 73.17950 / 4.70466 / 4.71020 / 4.70911 (MIN: 4.61)
  Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU - 73.36 / 11.98 / 11.98 / 11.38 (MIN - A: 49.92, B: 11.88, C: 11.88, D: 11.24)
  Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU - 59.13020 / 1.80376 / 1.79634 / 1.81319 (MIN: 1.76)
  Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU - 36.708900 / 0.933945 / 0.930888 / 0.924820 (MIN: 0.87)
  Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU - 36.96 / 22.59 / 22.59 / 22.50 (MIN - A: 22.25, B: 22.13, C: 22.18, D: 21.97)
  Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU - 92.13430 / 5.65439 / 8.23900 / 7.65826 (MIN: 5.12)
  Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU - 23.21980 / 5.25702 / 5.26468 / 5.27803 (MIN - A: 5.62, B: 5.18, C: 5.18, D: 5.19)
  Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU - 41.19 / 24.91 / 24.96 / 24.85 (MIN - A: 27.53, B: 24.56, C: 24.62, D: 24.57)
  Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU - 47.94820 / 2.44129 / 2.44352 / 2.43000 (MIN: 12.07)
  Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU - 12.03100 / 3.40615 / 3.38355 / 3.40609 (MIN - A: 3.34, B: 3.27, C: 3.26, D: 3.29)
  Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU - 32226.30 / 4089.48 / 4133.63 / 4126.65 (MIN: 17963.3)
  Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU - 25968.50 / 2485.34 / 2458.19 / 2458.93 (MIN: 12631)
  Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU - 33601.70 / 4101.51 / 4092.33 / 4110.74 (MIN: 21286.5)
  Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU - 30995.10 / 2496.51 / 2485.86 / 2495.97 (MIN: 14054.2)
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU - 30.02690 / 1.24528 / 1.16856 / 1.29107 (MIN: 2.16)
  Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU - 35077.90 / 4105.68 / 4117.24 / 4100.65 (MIN: 21314.5)
  Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU - 28223.80 / 2483.66 / 2492.88 / 2473.22 (MIN: 12718)
  Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU - 36.17280 / 1.85271 / 2.25790 / 1.99646 (MIN: 2.27)
  1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl
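For "fewer is better" results like the oneDNN table above, it can be easier to read each run as a speedup relative to run A. A minimal Python sketch (not part of the original export), using the Recurrent Neural Network Training - f32 row above as sample input:

# Convert "fewer is better" latencies into speedups relative to run A
# (baseline / run); values above 1.0 are faster than A. Sample values
# are the oneDNN "Recurrent Neural Network Training - f32" row above.
latencies_ms = {"A": 32226.30, "B": 4089.48, "C": 4133.63, "D": 4126.65}

baseline = latencies_ms["A"]
for run, ms in latencies_ms.items():
    print(f"{run}: {baseline / ms:.2f}x relative to A")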
Glibc Benchmarks - ns, Fewer Is Better (A / B / C / D):
  Benchmark: cos - 73.92 / 68.60 / 70.38 / 68.63
  Benchmark: exp - 16.19 / 16.44 / 16.15 / 16.38
  Benchmark: ffs - 6.09947 / 5.68026 / 6.08636 / 5.69216
  Benchmark: sin - 65.33 / 62.17 / 64.73 / 60.51
  Benchmark: log2 - 21.53 / 19.46 / 20.73 / 19.50
  Benchmark: modf - 7.09614 / 6.53862 / 6.71077 / 6.54065
  Benchmark: sinh - 26.98 / 24.82 / 26.85 / 26.98
  Benchmark: sqrt - 7.86042 / 7.27802 / 7.46367 / 7.28112
  Benchmark: tanh - 38.37 / 35.61 / 38.06 / 38.33
  Benchmark: asinh - 30.64 / 31.75 / 29.44 / 29.43
  Benchmark: atanh - 38.34 / 37.46 / 35.84 / 35.87
  Benchmark: ffsll - 7.04916 / 6.46721 / 6.93142 / 6.46463
  Benchmark: sincos - 45.98 / 41.74 / 41.74 / 42.89
  Benchmark: pthread_once - 6.23097 / 6.05998 / 6.02658 / 6.09718
  1. (CC) gcc options: -pie -nostdlib -nostartfiles -lgcc -lgcc_s
libavif avifenc 0.10 - Seconds, Fewer Is Better (A / B / C / D):
  Encoder Speed: 0 - 134.16 / 129.53 / 128.78 / 128.40
  Encoder Speed: 2 - 66.03 / 62.50 / 63.47 / 62.15
  Encoder Speed: 6 - 8.089 / 7.654 / 7.718 / 7.619
  Encoder Speed: 6, Lossless - 11.43 / 10.88 / 10.76 / 10.98
  Encoder Speed: 10, Lossless - 5.950 / 5.648 / 5.601 / 5.644
  1. (CXX) g++ options: -O3 -fPIC -lm
WebP2 Image Encode 20220422 - Seconds, Fewer Is Better (A / B / C / D):
  Encode Settings: Default - 2.972 / 2.886 / 3.003 / 2.896
  Encode Settings: Quality 75, Compression Effort 7 - 151.78 / 146.32 / 148.84 / 147.24
  Encode Settings: Quality 95, Compression Effort 7 - 323.97 / 309.66 / 308.71 / 310.50
  Encode Settings: Quality 100, Compression Effort 5 - 4.242 / 4.110 / 4.140 / 4.140
  Encode Settings: Quality 100, Lossless Compression - 742.93 / 721.68 / 722.89 / 721.40
  1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
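To condense a mixed set of results like the ones above (some "more is better", some "fewer is better") into a single figure per run, one common approach is to normalize each result to run A, invert the lower-is-better ratios, and take a geometric mean. A minimal Python sketch (not part of the original export), using the x264 Bosphorus 1080p and avifenc Speed 0 rows above as sample input:

# Geometric mean of per-test ratios versus run A, with "fewer is better"
# results inverted so that larger is always better. Sample rows:
# x264 Bosphorus 1080p (FPS, more is better) and avifenc Encoder Speed 0
# (seconds, fewer is better), both from the tables above.
from math import prod

results = [
    (True,  {"A": 137.76, "B": 141.25, "C": 140.99, "D": 141.00}),  # more is better
    (False, {"A": 134.16, "B": 129.53, "C": 128.78, "D": 128.40}),  # fewer is better
]

for run in ["A", "B", "C", "D"]:
    ratios = [(values[run] / values["A"]) if higher else (values["A"] / values[run])
              for higher, values in results]
    print(f"{run}: {prod(ratios) ** (1.0 / len(ratios)):.3f}")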
Phoronix Test Suite v10.8.5