eke: Tests for a future article. Intel Core i7-1280P testing with an MSI Prestige 14Evo A12M MS-14C6 (E14C6IMS.115 BIOS) and MSI Intel ADL GT2 8GB graphics on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from https://openbenchmarking.org/result/2409223-NE-EKE61814072&grw&rdt.
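Since the data was gathered via the Phoronix Test Suite, the same test selection can be re-run locally and compared against this uploaded result by passing the OpenBenchmarking.org result ID to the benchmark command. The snippet below is a minimal sketch, assuming phoronix-test-suite is installed and on PATH; the ID is taken from the URL above, and the exact prompts PTS presents may vary by version.

```python
# Minimal sketch: re-run the tests from this result and merge a local run in
# for comparison. Assumes the Phoronix Test Suite ("phoronix-test-suite") is
# installed and on PATH; the result ID comes from the OpenBenchmarking.org
# URL above.
import subprocess

RESULT_ID = "2409223-NE-EKE61814072"

# "phoronix-test-suite benchmark <result-id>" installs the same tests and
# compares the new numbers against the uploaded ones.
subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)
```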
System Details (identical for runs a and b)

Processor:         Intel Core i7-1280P @ 4.70GHz (14 Cores / 20 Threads)
Motherboard:       MSI Prestige 14Evo A12M MS-14C6 (E14C6IMS.115 BIOS)
Chipset:           Intel Alder Lake PCH
Memory:            8 x 2GB LPDDR4-4267MT/s SK Hynix H9HCNNNCPMMLXR-
Disk:              1024GB Micron_3400_MTFDKBA1T0TFH
Graphics:          MSI Intel ADL GT2 8GB
Audio:             Realtek ALC274
Network:           Intel Alder Lake-P PCH CNVi WiFi
OS:                Ubuntu 24.04
Kernel:            6.10.0-061000rc4daily20240621-generic (x86_64)
Desktop:           GNOME Shell 46.0
Display Server:    X Server + Wayland
OpenGL:            4.6 Mesa 24.2~git2407050600.9a3172~oibaf~n (git-9a3172e 2024-07-05 noble-oibaf-ppa)
Compiler:          GCC 13.2.0
File-System:       ext4
Screen Resolution: 1920x1080

Kernel Details:    Transparent Huge Pages: madvise
Compiler Details:  --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x433 - Thermald 2.5.6
Security Details:  gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Mitigation of Clear Register File + retbleed: Not affected + spec_rstack_overflow: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; RSB filling; PBRSB-eIBRS: SW sequence; BHI: BHI_DIS_S + srbds: Not affected + tsx_async_abort: Not affected
Java Details:      b: OpenJDK Runtime Environment (build 21.0.4+7-Ubuntu-1ubuntu224.04)
Results Overview (runs a and b; Seconds/ms/us are fewer-is-better, all other units are more-is-better)

Test                                        a              b
encode-opus: WAV To Opus Encode (sec)       21.329         21.321
etcpak: Multi-Threaded - ETC2 (Mpx/s)       203.996        203.018
whisperfile: Tiny (sec)                     99.42848       97.19358
whisperfile: Small (sec)                    509.19053      507.17441
mnn: nasnet (ms)                            17.5           17.834
mnn: mobilenetV3 (ms)                       2.415          1.532
mnn: squeezenetv1.1 (ms)                    4.535          3.702
mnn: resnet-v2-50 (ms)                      30.095         28.717
mnn: SqueezeNetV1.0 (ms)                    7.894          7.51
mnn: MobileNetV2_224 (ms)                   3.144          3.217
mnn: mobilenet-v1-1.0 (ms)                  5.596          5.25
mnn: inception-v3 (ms)                      50.285         50.581
xnnpack: FP32MobileNetV2 (us)               3395           2964
xnnpack: FP32MobileNetV3Large (us)          3137           3141
xnnpack: FP32MobileNetV3Small (us)          1380           1280
xnnpack: FP16MobileNetV2 (us)               3590           3738
xnnpack: FP16MobileNetV3Large (us)          3411           3581
xnnpack: FP16MobileNetV3Small (us)          1403           1464
xnnpack: QU8MobileNetV2 (us)                2604           2739
xnnpack: QU8MobileNetV3Large (us)           2632           2712
xnnpack: QU8MobileNetV3Small (us)           1376           1260
svt-av1: Preset 3 - Bosphorus 4K (FPS)      1.915          1.914
svt-av1: Preset 5 - Bosphorus 4K (FPS)      7.497          7.535
svt-av1: Preset 8 - Bosphorus 4K (FPS)      16.627         16.697
y-cruncher: 1B (sec)                        84.134         83.298
y-cruncher: 500M (sec)                      35.456         34.872
stockfish: Chess Benchmark (nodes/s)        6479687        9304937
svt-av1: Preset 13 - Bosphorus 4K (FPS)     72.663         73.078
svt-av1: Preset 3 - Bosphorus 1080p (FPS)   6.938          6.926
svt-av1: Preset 5 - Bosphorus 1080p (FPS)   25.35          25.373
svt-av1: Preset 8 - Bosphorus 1080p (FPS)   58.602         58.649
svt-av1: Preset 13 - Bosphorus 1080p (FPS)  406.504        408.507
svt-av1: Preset 3 - Beauty 4K 10-bit (FPS)  0.366          0.367
svt-av1: Preset 5 - Beauty 4K 10-bit (FPS)  1.732          1.726
svt-av1: Preset 8 - Beauty 4K 10-bit (FPS)  2.462          2.454
svt-av1: Preset 13 - Beauty 4K 10-bit (FPS) 5.156          5.158
build2: Time To Compile (sec)               355.308        355.825
valkey: GET - 50 (req/s)                    1233846.62     639567.12
valkey: SET - 50 (req/s)                    284911.09      286367
valkey: GET - 500 (req/s)                   483202.47      479421
valkey: GET - 800 (req/s)                   471725.41      633384.94
valkey: SET - 500 (req/s)                   277074.75      277556.88
valkey: SET - 800 (req/s)                   277457.94      274828.56
cassandra: Writes (op/s)                    83738          77291
simdjson: Kostya (GB/s)                     4.34           4.29
simdjson: TopTweet (GB/s)                   7.09           7.08
simdjson: LargeRand (GB/s)                  1.56           1.58
simdjson: PartialTweets (GB/s)              6.14           6.71
simdjson: DistinctUserID (GB/s)             6.53           7.12
byte: Pipe (LPS)                            19538264.1     19506864.5
byte: Dhrystone 2 (LPS)                     443801958.8    446018931.8
byte: System Call (LPS)                     21494117.9     21479957.2
byte: Whetstone Double (MWIPS)              106770.1       110127.3
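The overview mixes lower-is-better units (seconds, ms, us) with higher-is-better ones (Mpx/s, FPS, requests per second, GB/s, and so on), so raw deltas are easy to misread. The sketch below is one way to turn each a/b pair into a signed "b versus a" percentage; the sample values are transcribed from the table above and the direction flags follow the Fewer/More Is Better annotations on the individual results further down. It is illustrative only, not part of the exported data.

```python
# Sketch: express each a/b pair as "how much better is run b than run a".
# Values are transcribed from the overview table; higher_is_better follows
# the "More Is Better" / "Fewer Is Better" labels on the individual results.
results = {
    # test name: (a, b, higher_is_better)
    "whisperfile: Small (sec)":        (509.19053, 507.17441, False),
    "mnn: mobilenetV3 (ms)":           (2.415, 1.532, False),
    "stockfish: Chess Benchmark":      (6479687, 9304937, True),
    "valkey: GET - 50 (req/s)":        (1233846.62, 639567.12, True),
    "simdjson: DistinctUserID (GB/s)": (6.53, 7.12, True),
}

for name, (a, b, higher_is_better) in results.items():
    # Orient the ratio so that > 1.0 always means run b did better.
    ratio = (b / a) if higher_is_better else (a / b)
    print(f"{name:36s} b vs a: {(ratio - 1) * 100:+6.1f}%")
```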
Opus Codec Encoding 1.5.2 - WAV To Opus Encode (Seconds, Fewer Is Better)
a: 21.33    b: 21.32
1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm
Etcpak 2.0 - Benchmark: Multi-Threaded, Configuration: ETC2 (Mpx/s, More Is Better)
a: 204.00    b: 203.02
1. (CXX) g++ options: -flto -pthread
Whisperfile 20Aug24 (Seconds, Fewer Is Better)
Model Size: Tiny     a: 99.43     b: 97.19
Model Size: Small    a: 509.19    b: 507.17
Mobile Neural Network 2.9.b11b7037d (ms, Fewer Is Better)
Model: nasnet              a: 17.50 (MIN: 17.27 / MAX: 32.57)    b: 17.83 (MIN: 17.38 / MAX: 35.2)
Model: mobilenetV3         a: 2.415 (MIN: 2.17 / MAX: 6.21)      b: 1.532 (MIN: 1.46 / MAX: 3.9)
Model: squeezenetv1.1      a: 4.535 (MIN: 4.43 / MAX: 19.11)     b: 3.702 (MIN: 2.82 / MAX: 9.52)
Model: resnet-v2-50        a: 30.10 (MIN: 29.42 / MAX: 44.5)     b: 28.72 (MIN: 27.75 / MAX: 43.25)
Model: SqueezeNetV1.0      a: 7.894 (MIN: 7.61 / MAX: 13.4)      b: 7.510 (MIN: 7.27 / MAX: 21.28)
Model: MobileNetV2_224     a: 3.144 (MIN: 3.04 / MAX: 8.78)      b: 3.217 (MIN: 3.15 / MAX: 4.28)
Model: mobilenet-v1-1.0    a: 5.596 (MIN: 5.44 / MAX: 19.51)     b: 5.250 (MIN: 5.02 / MAX: 16.15)
Model: inception-v3        a: 50.29 (MIN: 48.87 / MAX: 81.66)    b: 50.58 (MIN: 48.86 / MAX: 65.55)
1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl
XNNPACK 2cd86b (us, Fewer Is Better)
Model: FP32MobileNetV2         a: 3395    b: 2964
Model: FP32MobileNetV3Large    a: 3137    b: 3141
Model: FP32MobileNetV3Small    a: 1380    b: 1280
Model: FP16MobileNetV2         a: 3590    b: 3738
Model: FP16MobileNetV3Large    a: 3411    b: 3581
Model: FP16MobileNetV3Small    a: 1403    b: 1464
Model: QU8MobileNetV2          a: 2604    b: 2739
Model: QU8MobileNetV3Large     a: 2632    b: 2712
Model: QU8MobileNetV3Small     a: 1376    b: 1260
1. (CXX) g++ options: -O3 -lrt -lm
SVT-AV1 2.2 (Frames Per Second, More Is Better)
Encoder Mode: Preset 3 - Input: Bosphorus 4K    a: 1.915    b: 1.914
Encoder Mode: Preset 5 - Input: Bosphorus 4K    a: 7.497    b: 7.535
Encoder Mode: Preset 8 - Input: Bosphorus 4K    a: 16.63    b: 16.70
1. (CXX) g++ options: -march=native -mno-avx
Y-Cruncher 0.8.5 (Seconds, Fewer Is Better)
Pi Digits To Calculate: 1B      a: 84.13    b: 83.30
Pi Digits To Calculate: 500M    a: 35.46    b: 34.87
Stockfish 17 - Chess Benchmark (Nodes Per Second, More Is Better)
a: 6479687    b: 9304937
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver
SVT-AV1 2.2 (Frames Per Second, More Is Better)
Encoder Mode: Preset 13 - Input: Bosphorus 4K        a: 72.66     b: 73.08
Encoder Mode: Preset 3 - Input: Bosphorus 1080p      a: 6.938     b: 6.926
Encoder Mode: Preset 5 - Input: Bosphorus 1080p      a: 25.35     b: 25.37
Encoder Mode: Preset 8 - Input: Bosphorus 1080p      a: 58.60     b: 58.65
Encoder Mode: Preset 13 - Input: Bosphorus 1080p     a: 406.50    b: 408.51
Encoder Mode: Preset 3 - Input: Beauty 4K 10-bit     a: 0.366     b: 0.367
Encoder Mode: Preset 5 - Input: Beauty 4K 10-bit     a: 1.732     b: 1.726
Encoder Mode: Preset 8 - Input: Beauty 4K 10-bit     a: 2.462     b: 2.454
Encoder Mode: Preset 13 - Input: Beauty 4K 10-bit    a: 5.156     b: 5.158
1. (CXX) g++ options: -march=native -mno-avx
Build2 0.17 - Time To Compile (Seconds, Fewer Is Better)
a: 355.31    b: 355.83
Valkey 8.0 (Requests Per Second, More Is Better)
Test: GET - Parallel Connections: 50     a: 1233846.62    b: 639567.12
Test: SET - Parallel Connections: 50     a: 284911.09     b: 286367.00
Test: GET - Parallel Connections: 500    a: 483202.47     b: 479421.00
Test: GET - Parallel Connections: 800    a: 471725.41     b: 633384.94
Test: SET - Parallel Connections: 500    a: 277074.75     b: 277556.88
Test: SET - Parallel Connections: 800    a: 277457.94     b: 274828.56
1. (CC) gcc options: -O3 -flto=auto -ggdb -rdynamic -lm -ldl -pthread -lrt -lsystemd
Apache Cassandra 5.0 - Test: Writes (Op/s, More Is Better)
a: 83738    b: 77291
simdjson 3.10 (GB/s, More Is Better)
Throughput Test: Kostya            a: 4.34    b: 4.29
Throughput Test: TopTweet          a: 7.09    b: 7.08
Throughput Test: LargeRandom       a: 1.56    b: 1.58
Throughput Test: PartialTweets     a: 6.14    b: 6.71
Throughput Test: DistinctUserID    a: 6.53    b: 7.12
1. (CXX) g++ options: -O3 -lrt
BYTE Unix Benchmark 5.1.3-git
Computational Test: Pipe (LPS, More Is Better)                  a: 19538264.1     b: 19506864.5
Computational Test: Dhrystone 2 (LPS, More Is Better)           a: 443801958.8    b: 446018931.8
Computational Test: System Call (LPS, More Is Better)           a: 21494117.9     b: 21479957.2
Computational Test: Whetstone Double (MWIPS, More Is Better)    a: 106770.1       b: 110127.3
1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm
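Not part of the exported result, but for the future article a single-number summary of run b relative to run a can be had by taking a geometric mean of the per-test ratios, after orienting lower-is-better metrics so that a value above 1.0 always favors b. A minimal sketch, with three pairs transcribed from the results above as placeholders for the full set:

```python
# Sketch: summarize run b relative to run a with a geometric mean of
# per-test ratios. Each ratio is oriented so that > 1.0 favors run b.
# The three sample entries are transcribed from the results above; extend
# the list with the remaining tests for a full summary.
import math

# (a, b, higher_is_better) triples
pairs = [
    (21.329, 21.321, False),      # encode-opus: WAV To Opus Encode (sec)
    (6479687, 9304937, True),     # stockfish: Chess Benchmark (nodes/s)
    (106770.1, 110127.3, True),   # byte: Whetstone Double (MWIPS)
]

ratios = [(b / a) if hib else (a / b) for a, b, hib in pairs]
geo_mean = math.exp(sum(map(math.log, ratios)) / len(ratios))
print(f"Geometric mean of b-vs-a ratios over {len(ratios)} tests: {geo_mean:.3f}")
```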
Phoronix Test Suite v10.8.5