aug11 AMD Ryzen Threadripper 3970X 32-Core testing with a ASUS ROG ZENITH II EXTREME (1802 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 22.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2408118-NE-AUG11334824&grs.
aug11 system details (identical for configurations a and b):
Processor: AMD Ryzen Threadripper 3970X 32-Core @ 3.70GHz (32 Cores / 64 Threads)
Motherboard: ASUS ROG ZENITH II EXTREME (1802 BIOS)
Chipset: AMD Starship/Matisse
Memory: 4 x 16GB DDR4-3600MT/s Corsair CMT64GX4M4Z3600C16
Disk: Samsung SSD 980 PRO 500GB
Graphics: AMD Radeon RX 5700 8GB (1750/875MHz)
Audio: AMD Navi 10 HDMI Audio
Monitor: ASUS VP28U
Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04
Kernel: 6.5.0-35-generic (x86_64)
Desktop: GNOME Shell 42.9
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.54)
Vulkan: 1.2.204
Compiler: GCC 11.4.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled); CPU Microcode: 0x830107a
Python Details: Python 3.10.12
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected
aug11 result overview: side-by-side values for configurations a and b across all tests; the same results are listed per test below. Notes from the overview table: compress-7zip appears in the test list without recorded values, and the Apache Siege run at 100 concurrent users only has a value for configuration b.
XNNPACK 2cd86b, Model: FP16MobileNetV3Small (us, fewer is better): a: 2157, b: 1837 [(CXX) g++ options: -O3 -lrt -lm]
Mobile Neural Network 2.9.b11b7037d, Model: SqueezeNetV1.0 (ms, fewer is better): a: 4.398 (min 4.2 / max 5.09), b: 4.994 (min 4.45 / max 6.85) [(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl]
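For orientation on what the Mobile Neural Network timings in this comparison measure, here is a minimal sketch of loading a converted .mnn model and timing one CPU forward pass through MNN's C++ Interpreter/Session API. The model file name and thread count are illustrative assumptions, not values taken from this result file, and this is not the Phoronix test harness itself.

    // Minimal MNN inference timing sketch (model path and thread count are assumptions).
    // Build (typical): g++ -O2 mnn_time.cpp -lMNN
    #include <MNN/Interpreter.hpp>
    #include <MNN/MNNForwardType.h>
    #include <chrono>
    #include <cstdio>
    #include <memory>

    int main() {
        // Load a converted model file (path is an assumption for illustration).
        std::shared_ptr<MNN::Interpreter> net(
            MNN::Interpreter::createFromFile("squeezenet_v1.0.mnn"));
        if (!net) return 1;

        MNN::ScheduleConfig config;
        config.type = MNN_FORWARD_CPU;  // run on the CPU, as in this result file
        config.numThread = 4;           // illustrative thread count
        MNN::Session* session = net->createSession(config);

        auto start = std::chrono::steady_clock::now();
        net->runSession(session);       // one forward pass
        auto stop = std::chrono::steady_clock::now();

        std::printf("inference: %.3f ms\n",
                    std::chrono::duration<double, std::milli>(stop - start).count());
        return 0;
    }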
Mobile Neural Network 2.9.b11b7037d, Model: MobileNetV2_224 (ms, fewer is better): a: 3.325 (min 3.17 / max 3.76), b: 3.702 (min 3.38 / max 4.32) [(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl]
PyPerformance 1.11, Benchmark: asyncio_tcp_ssl (milliseconds, fewer is better): a: 2.58, b: 2.35
OSPRay 3.2, Benchmark: particle_volume/pathtracer/real_time (items per second, more is better): a: 156.57, b: 147.58
Mobile Neural Network 2.9.b11b7037d, Model: mobilenet-v1-1.0 (ms, fewer is better): a: 2.777 (min 2.56 / max 3.07), b: 2.943 (min 2.61 / max 4.02) [(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl]
XNNPACK 2cd86b, Model: QU8MobileNetV3Large (us, fewer is better): a: 2864, b: 2726 [(CXX) g++ options: -O3 -lrt -lm]
Z3 Theorem Prover 4.12.1, SMT File: 1.smt2 (seconds, fewer is better): a: 29.95, b: 31.40 [(CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC]
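The Z3 results time how long the solver needs to decide a fixed SMT-LIB2 problem. The sketch below is a plausible minimal driver using Z3's C++ API (z3++.h): it parses the assertions from a file named after the test label and checks satisfiability. It is an assumption that the test simply feeds the file to the solver; the code is for illustration, not the benchmark's own harness.

    // Load an SMT-LIB2 problem and check it with Z3's C++ API (sketch; file name assumed from the label above).
    // Build (typical): g++ -std=c++17 -O2 smt_check.cpp -lz3
    #include <z3++.h>
    #include <iostream>

    int main() {
        z3::context ctx;
        // Parse the assertions from the benchmark's input file.
        z3::expr_vector assertions = ctx.parse_file("1.smt2");

        z3::solver solver(ctx);
        for (unsigned i = 0; i < assertions.size(); ++i)
            solver.add(assertions[i]);

        switch (solver.check()) {
            case z3::sat:   std::cout << "sat\n";     break;
            case z3::unsat: std::cout << "unsat\n";   break;
            default:        std::cout << "unknown\n"; break;
        }
        return 0;
    }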
Apache Siege 2.4.62, Concurrent Users: 500 (transactions per second, more is better): a: 58816.75, b: 61173.08 [(CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto -lz]
PyPerformance 1.11, Benchmark: async_tree_io (milliseconds, fewer is better): a: 1.61, b: 1.55
Z3 Theorem Prover 4.12.1, SMT File: 2.smt2 (seconds, fewer is better): a: 75.39, b: 78.27 [(CXX) g++ options: -lpthread -std=c++17 -fvisibility=hidden -mfpmath=sse -msse -msse2 -O3 -fPIC]
LeelaChessZero 0.31.1, Backend: Eigen (nodes per second, more is better): a: 95, b: 92 [(CXX) g++ options: -flto -pthread]
Mobile Neural Network 2.9.b11b7037d, Model: nasnet (ms, fewer is better): a: 11.93 (min 11.79 / max 13.67), b: 12.31 (min 11.95 / max 14.06) [(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl]
GraphicsMagick, Operation: HWB Color Space (iterations per minute, more is better): a: 112, b: 109 [GraphicsMagick 1.3.38 2022-03-26 Q16, http://www.GraphicsMagick.org/]
XNNPACK 2cd86b, Model: QU8MobileNetV2 (us, fewer is better): a: 1864, b: 1819 [(CXX) g++ options: -O3 -lrt -lm]
Mobile Neural Network 2.9.b11b7037d, Model: inception-v3 (ms, fewer is better): a: 21.02 (min 20.73 / max 24.14), b: 20.53 (min 20.07 / max 24.29) [(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl]
Stockfish, Chess Benchmark (nodes per second, more is better): a: 74641956, b: 76258524 [Stockfish 14.1 by the Stockfish developers (see AUTHORS file)]
LeelaChessZero 0.31.1, Backend: BLAS (nodes per second, more is better): a: 108, b: 106 [(CXX) g++ options: -flto -pthread]
simdjson 3.10, Throughput Test: TopTweet (GB/s, more is better): a: 4.51, b: 4.43 [(CXX) g++ options: -O3 -lrt]
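The simdjson throughput tests parse fixed JSON documents (TopTweet, PartialTweets, DistinctUserID, Kostya, LargeRandom) and report parsing bandwidth. Below is a minimal On-Demand parsing sketch; the twitter.json file name and the fields accessed are assumptions meant to resemble the kind of work the TopTweet-style tests do, not the benchmark's exact code.

    // Minimal simdjson On-Demand parse (sketch; input file and fields are assumptions).
    // Build (typical): compile together with the simdjson amalgamation or link -lsimdjson.
    #include "simdjson.h"
    #include <cstdint>
    #include <string_view>

    int main() {
        simdjson::ondemand::parser parser;
        simdjson::padded_string json = simdjson::padded_string::load("twitter.json");
        simdjson::ondemand::document doc = parser.iterate(json);

        // Walk the tweets and touch a couple of fields, roughly the kind of
        // field access a TopTweet/PartialTweets style test exercises.
        for (auto tweet : doc["statuses"]) {
            std::string_view text = tweet["text"];
            uint64_t retweets = tweet["retweet_count"];
            (void)text; (void)retweets;
        }
        return 0;
    }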
GraphicsMagick, Operation: Swirl (iterations per minute, more is better): a: 234, b: 230 [GraphicsMagick 1.3.38 2022-03-26 Q16, http://www.GraphicsMagick.org/]
Mobile Neural Network 2.9.b11b7037d, Model: mobilenetV3 (ms, fewer is better): a: 2.069 (min 1.89 / max 2.54), b: 2.036 (min 1.91 / max 2.55) [(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl]
OSPRay 3.2, Benchmark: gravity_spheres_volume/dim_512/ao/real_time (items per second, more is better): a: 4.83815, b: 4.90771
LZ4 Compression 1.10, Compression Level: 12 - Compression Speed (MB/s, more is better): a: 13.70, b: 13.51 [(CC) gcc options: -O3 -pthread]
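The LZ4 results compare compression levels 1 through 12, where level 12 maps to the slowest, highest-ratio HC path. The sketch below compresses a buffer once with the fast default path and once with LZ4-HC at its maximum level via liblz4; the input buffer is a placeholder, whereas the real test compresses a large reference file.

    // Compress one buffer with LZ4's fast path and with LZ4-HC at maximum level (sketch).
    // Build (typical): g++ -O2 lz4_levels.cpp -llz4
    #include <lz4.h>
    #include <lz4hc.h>
    #include <cstdio>
    #include <string>
    #include <vector>

    int main() {
        // Illustrative input; the benchmark uses a large reference file, not a toy string.
        std::string src(1 << 20, 'a');
        const int srcSize = static_cast<int>(src.size());
        std::vector<char> dst(LZ4_compressBound(srcSize));

        // Fast mode (roughly what the low compression levels exercise).
        int fastBytes = LZ4_compress_default(src.data(), dst.data(), srcSize,
                                             static_cast<int>(dst.size()));

        // High-compression mode at the maximum level (level 12 in the results above).
        int hcBytes = LZ4_compress_HC(src.data(), dst.data(), srcSize,
                                      static_cast<int>(dst.size()), LZ4HC_CLEVEL_MAX);

        std::printf("fast: %d bytes, HC level %d: %d bytes\n",
                    fastBytes, LZ4HC_CLEVEL_MAX, hcBytes);
        return 0;
    }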
XNNPACK 2cd86b, Model: FP32MobileNetV3Small (us, fewer is better): a: 2075, b: 2048 [(CXX) g++ options: -O3 -lrt -lm]
GraphicsMagick, Operation: Rotate (iterations per minute, more is better): a: 78, b: 77 [GraphicsMagick 1.3.38 2022-03-26 Q16, http://www.GraphicsMagick.org/]
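The GraphicsMagick operations in this comparison (Rotate, Resizing, Swirl, Sharpen, and so on) are driven through the gm command-line tool by the test profile. As a rough illustration of the underlying work, the sketch below rotates one image through GraphicsMagick's core C API; the input file name is an assumption and error handling is kept minimal.

    // Rotate one image via the GraphicsMagick core C API (sketch; file name assumed).
    // Build (typical): g++ rotate.cpp $(GraphicsMagick-config --cppflags --ldflags --libs)
    #include <magick/api.h>
    #include <cstring>

    int main() {
        InitializeMagick(nullptr);

        ExceptionInfo exception;
        GetExceptionInfo(&exception);

        ImageInfo *info = CloneImageInfo(nullptr);
        std::strcpy(info->filename, "input.png");  // assumed input image

        Image *image = ReadImage(info, &exception);
        if (image != nullptr) {
            Image *rotated = RotateImage(image, 90.0, &exception);
            if (rotated != nullptr)
                DestroyImage(rotated);
            DestroyImage(image);
        }

        DestroyImageInfo(info);
        DestroyExceptionInfo(&exception);
        DestroyMagick();
        return 0;
    }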
GraphicsMagick, Operation: Resizing (iterations per minute, more is better): a: 239, b: 236 [GraphicsMagick 1.3.38 2022-03-26 Q16, http://www.GraphicsMagick.org/]
GraphicsMagick, Operation: Noise-Gaussian (iterations per minute, more is better): a: 82, b: 81 [GraphicsMagick 1.3.38 2022-03-26 Q16, http://www.GraphicsMagick.org/]
Apache Siege 2.4.62, Concurrent Users: 1000 (transactions per second, more is better): a: 60498.22, b: 61217.14 [(CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto -lz]
Apache Siege 2.4.62, Concurrent Users: 200 (transactions per second, more is better): a: 63795.85, b: 64516.13 [(CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto -lz]
XNNPACK 2cd86b, Model: FP32MobileNetV3Large (us, fewer is better): a: 3917, b: 3874 [(CXX) g++ options: -O3 -lrt -lm]
Mobile Neural Network 2.9.b11b7037d, Model: resnet-v2-50 (ms, fewer is better): a: 19.60 (min 18.89 / max 21.14), b: 19.81 (min 19.13 / max 21.93) [(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl]
simdjson 3.10, Throughput Test: LargeRandom (GB/s, more is better): a: 0.99, b: 1.00 [(CXX) g++ options: -O3 -lrt]
XNNPACK 2cd86b, Model: FP32MobileNetV2 (us, fewer is better): a: 2951, b: 2926 [(CXX) g++ options: -O3 -lrt -lm]
PyPerformance 1.11, Benchmark: nbody (milliseconds, fewer is better): a: 120, b: 119
GraphicsMagick, Operation: Enhanced (iterations per minute, more is better): a: 124, b: 123 [GraphicsMagick 1.3.38 2022-03-26 Q16, http://www.GraphicsMagick.org/]
XNNPACK 2cd86b, Model: FP16MobileNetV2 (us, fewer is better): a: 2129, b: 2112 [(CXX) g++ options: -O3 -lrt -lm]
PyPerformance 1.11, Benchmark: raytrace (milliseconds, fewer is better): a: 418, b: 421
simdjson 3.10, Throughput Test: PartialTweets (GB/s, more is better): a: 4.38, b: 4.35 [(CXX) g++ options: -O3 -lrt]
XNNPACK 2cd86b, Model: FP16MobileNetV3Large (us, fewer is better): a: 2971, b: 2951 [(CXX) g++ options: -O3 -lrt -lm]
PyPerformance 1.11, Benchmark: python_startup (milliseconds, fewer is better): a: 16.4, b: 16.5
OSPRay 3.2, Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (items per second, more is better): a: 4.62827, b: 4.65541
LZ4 Compression 1.10, Compression Level: 9 - Compression Speed (MB/s, more is better): a: 41.24, b: 41.48 [(CC) gcc options: -O3 -pthread]
POV-Ray, Trace Time (seconds, fewer is better): a: 14.59, b: 14.51 [POV-Ray 3.7.0.10.unofficial]
PyPerformance 1.11, Benchmark: gc_collect (milliseconds, fewer is better): a: 937, b: 932
PyPerformance 1.11, Benchmark: django_template (milliseconds, fewer is better): a: 38.2, b: 38.4
PyPerformance 1.11, Benchmark: go (milliseconds, fewer is better): a: 204, b: 205
LZ4 Compression 1.10, Compression Level: 1 - Compression Speed (MB/s, more is better): a: 725.70, b: 729.25 [(CC) gcc options: -O3 -pthread]
Blender 4.2, Blend File: BMW27 - Compute: CPU-Only (seconds, fewer is better): a: 43.18, b: 42.99
NAMD 3.0b6, Input: ATPase with 327,506 Atoms (ns/day, more is better): a: 2.03075, b: 2.02197
PyPerformance 1.11, Benchmark: json_loads (milliseconds, fewer is better): a: 23.7, b: 23.6
Build2 0.17, Time To Compile (seconds, fewer is better): a: 84.73, b: 85.08
PyPerformance 1.11, Benchmark: asyncio_websockets (milliseconds, fewer is better): a: 511, b: 509
Blender 4.2, Blend File: Pabellon Barcelona - Compute: CPU-Only (seconds, fewer is better): a: 136.4, b: 136.9
PyPerformance 1.11, Benchmark: xml_etree (milliseconds, fewer is better): a: 65.5, b: 65.7
LZ4 Compression 1.10, Compression Level: 3 - Decompression Speed (MB/s, more is better): a: 4324.8, b: 4337.5 [(CC) gcc options: -O3 -pthread]
LZ4 Compression 1.10, Compression Level: 12 - Decompression Speed (MB/s, more is better): a: 4608.3, b: 4621.3 [(CC) gcc options: -O3 -pthread]
LZ4 Compression 1.10, Compression Level: 2 - Decompression Speed (MB/s, more is better): a: 4119.6, b: 4130.8 [(CC) gcc options: -O3 -pthread]
LZ4 Compression 1.10, Compression Level: 2 - Compression Speed (MB/s, more is better): a: 332.39, b: 331.54 [(CC) gcc options: -O3 -pthread]
simdjson 3.10, Throughput Test: DistinctUserID (GB/s, more is better): a: 4.37, b: 4.38 [(CXX) g++ options: -O3 -lrt]
x265, Video Input: Bosphorus 1080p (frames per second, more is better): a: 44.79, b: 44.69 [HEVC encoder version 3.5+1-f0c1022b6]
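The x265 results come from encoding the Bosphorus sample clips. The sketch below shows how an encoder for a 1080p stream might be opened through libx265's C API; the preset, resolution, frame rate, and color space are illustrative assumptions rather than the exact options the test profile passes.

    // Open an x265 encoder for a 1080p stream via the libx265 C API (sketch; settings assumed).
    // Build (typical): g++ -O2 x265_open.cpp -lx265
    #include <x265.h>
    #include <cstdio>

    int main() {
        x265_param *param = x265_param_alloc();
        x265_param_default_preset(param, "medium", nullptr);  // preset is an assumption
        param->sourceWidth  = 1920;
        param->sourceHeight = 1080;
        param->fpsNum       = 60;
        param->fpsDenom     = 1;
        param->internalCsp  = X265_CSP_I420;

        x265_encoder *encoder = x265_encoder_open(param);
        if (encoder == nullptr) {
            std::fprintf(stderr, "failed to open encoder\n");
            x265_param_free(param);
            return 1;
        }

        x265_picture *pic = x265_picture_alloc();
        x265_picture_init(param, pic);
        // Feed planar YUV frames through pic->planes[]/pic->stride[] and call
        // x265_encoder_encode(encoder, &nals, &nalCount, pic, nullptr) per frame here.

        x265_picture_free(pic);
        x265_encoder_close(encoder);
        x265_param_free(param);
        return 0;
    }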
OSPRay 3.2, Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (items per second, more is better): a: 7.20073, b: 7.21615
LZ4 Compression 1.10, Compression Level: 1 - Decompression Speed (MB/s, more is better): a: 4748.2, b: 4757.3 [(CC) gcc options: -O3 -pthread]
Blender 4.2, Blend File: Junkshop - Compute: CPU-Only (seconds, fewer is better): a: 62.72, b: 62.62
Blender 4.2, Blend File: Classroom - Compute: CPU-Only (seconds, fewer is better): a: 124.70, b: 124.89
Y-Cruncher 0.8.5, Pi Digits To Calculate: 1B (seconds, fewer is better): a: 15.93, b: 15.91
Gcrypt Library 1.10.3 (seconds, fewer is better): a: 210.32, b: 210.60 [(CC) gcc options: -O2 -fvisibility=hidden]
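The Gcrypt Library result times libgcrypt's bundled benchmark program. As a small orientation example rather than a reproduction of that benchmark, the sketch below computes a one-shot SHA-256 digest with libgcrypt; the algorithm and input data are illustrative assumptions.

    // One-shot SHA-256 of a buffer with libgcrypt (sketch; algorithm and data are illustrative).
    // Build (typical): g++ -O2 gcrypt_hash.cpp -lgcrypt
    #include <gcrypt.h>
    #include <cstdio>
    #include <vector>

    int main() {
        // Basic library initialization as recommended by the libgcrypt manual.
        if (!gcry_check_version(GCRYPT_VERSION)) {
            std::fprintf(stderr, "libgcrypt version mismatch\n");
            return 1;
        }
        gcry_control(GCRYCTL_INITIALIZATION_FINISHED, 0);

        const char data[] = "hello, benchmark";
        const size_t digestLen = gcry_md_get_algo_dlen(GCRY_MD_SHA256);
        std::vector<unsigned char> digest(digestLen);

        gcry_md_hash_buffer(GCRY_MD_SHA256, digest.data(), data, sizeof(data) - 1);

        for (unsigned char byte : digest)
            std::printf("%02x", byte);
        std::printf("\n");
        return 0;
    }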
NAMD 3.0b6, Input: STMV with 1,066,628 Atoms (ns/day, more is better): a: 0.57070, b: 0.56994
LZ4 Compression 1.10, Compression Level: 9 - Decompression Speed (MB/s, more is better): a: 4549.0, b: 4543.8 [(CC) gcc options: -O3 -pthread]
PyPerformance 1.11, Benchmark: float (milliseconds, fewer is better): a: 95.7, b: 95.6
Mobile Neural Network 2.9.b11b7037d, Model: squeezenetv1.1 (ms, fewer is better): a: 3.148 (min 2.96 / max 3.56), b: 3.151 (min 3.01 / max 3.71) [(CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -pthread -ldl]
OSPRay 3.2, Benchmark: particle_volume/ao/real_time (items per second, more is better): a: 9.78723, b: 9.79525
OSPRay 3.2, Benchmark: particle_volume/scivis/real_time (items per second, more is better): a: 9.68535, b: 9.68053
LZ4 Compression 1.10, Compression Level: 3 - Compression Speed (MB/s, more is better): a: 109.87, b: 109.92 [(CC) gcc options: -O3 -pthread]
XNNPACK 2cd86b, Model: QU8MobileNetV3Small (us, fewer is better): a: 2240, b: 2239 [(CXX) g++ options: -O3 -lrt -lm]
x265, Video Input: Bosphorus 4K (frames per second, more is better): a: 23.39, b: 23.40 [HEVC encoder version 3.5+1-f0c1022b6]
ACES DGEMM 1.0, Sustained Floating-Point Rate (GFLOP/s, more is better): a: 1357.66, b: 1357.41 [(CC) gcc options: -ffast-math -mavx2 -O3 -fopenmp -lopenblas]
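ACES DGEMM reports sustained double-precision matrix-multiply throughput, and this build links against OpenBLAS (-lopenblas). The sketch below times a single cblas_dgemm call and converts it to GFLOP/s using the usual 2*N^3 operation count; the matrix size is an illustrative assumption and the real benchmark runs many iterations.

    // Time one DGEMM call through CBLAS/OpenBLAS and report GFLOP/s (sketch; N is assumed).
    // Build (typical): g++ -O2 dgemm_rate.cpp -lopenblas
    #include <cblas.h>
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const int N = 2048;  // illustrative matrix dimension
        std::vector<double> A(static_cast<size_t>(N) * N, 1.0);
        std::vector<double> B(static_cast<size_t>(N) * N, 2.0);
        std::vector<double> C(static_cast<size_t>(N) * N, 0.0);

        auto start = std::chrono::steady_clock::now();
        // C = 1.0 * A * B + 0.0 * C, row-major, no transposition.
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    N, N, N, 1.0, A.data(), N, B.data(), N, 0.0, C.data(), N);
        auto stop = std::chrono::steady_clock::now();

        const double seconds = std::chrono::duration<double>(stop - start).count();
        const double flops = 2.0 * N * N * N;  // multiply-add count for a square DGEMM
        std::printf("%.2f GFLOP/s\n", flops / seconds / 1e9);
        return 0;
    }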
Blender 4.2, Blend File: Fishy Cat - Compute: CPU-Only (seconds, fewer is better): a: 59.22, b: 59.21
Blender 4.2, Blend File: Barbershop - Compute: CPU-Only (seconds, fewer is better): a: 456.26, b: 456.32
Y-Cruncher 0.8.5, Pi Digits To Calculate: 500M (seconds, fewer is better): a: 7.609, b: 7.608
Apache Siege 2.4.62, Concurrent Users: 100 (transactions per second, more is better): b: 76628.35 (no result recorded for configuration a) [(CC) gcc options: -O2 -lpthread -ldl -lssl -lcrypto -lz]
PyPerformance 1.11, Benchmark: pickle_pure_python (milliseconds, fewer is better): a: 392, b: 392
PyPerformance 1.11, Benchmark: regex_compile (milliseconds, fewer is better): a: 154, b: 154
PyPerformance 1.11, Benchmark: crypto_pyaes (milliseconds, fewer is better): a: 107, b: 107
PyPerformance 1.11, Benchmark: pathlib (milliseconds, fewer is better): a: 21.9, b: 21.9
PyPerformance 1.11, Benchmark: chaos (milliseconds, fewer is better): a: 99.2, b: 99.2
Intel Open Image Denoise 2.3, Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (images per second, more is better): a: 0.61, b: 0.61
Intel Open Image Denoise 2.3, Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (images per second, more is better): a: 1.23, b: 1.23
Intel Open Image Denoise 2.3, Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (images per second, more is better): a: 1.23, b: 1.23
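The Open Image Denoise runs denoise fixed HDR/LDR inputs on the CPU device. The sketch below wires up OIDN's generic "RT" filter over host-side float buffers, assuming the shared-image C API; the image dimensions and zero-filled buffers are placeholders, since the benchmark feeds real renderings together with albedo/normal auxiliary channels.

    // Denoise a host-side RGB float image with Open Image Denoise's "RT" filter (sketch).
    // Build (typical): g++ -O2 oidn_denoise.cpp -lOpenImageDenoise
    #include <OpenImageDenoise/oidn.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const size_t width = 1920, height = 1080;            // placeholder dimensions
        std::vector<float> color(width * height * 3, 0.f);   // would hold the noisy render
        std::vector<float> output(width * height * 3, 0.f);

        OIDNDevice device = oidnNewDevice(OIDN_DEVICE_TYPE_CPU);
        oidnCommitDevice(device);

        OIDNFilter filter = oidnNewFilter(device, "RT");      // generic ray-tracing denoiser
        oidnSetSharedFilterImage(filter, "color",  color.data(),
                                 OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
        oidnSetSharedFilterImage(filter, "output", output.data(),
                                 OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
        oidnCommitFilter(filter);
        oidnExecuteFilter(filter);

        const char *message;
        if (oidnGetDeviceError(device, &message) != OIDN_ERROR_NONE)
            std::fprintf(stderr, "OIDN error: %s\n", message);

        oidnReleaseFilter(filter);
        oidnReleaseDevice(device);
        return 0;
    }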
GraphicsMagick, Operation: Sharpen (iterations per minute, more is better): a: 46, b: 46 [GraphicsMagick 1.3.38 2022-03-26 Q16, http://www.GraphicsMagick.org/]
simdjson 3.10, Throughput Test: Kostya (GB/s, more is better): a: 2.95, b: 2.95 [(CXX) g++ options: -O3 -lrt]
Phoronix Test Suite v10.8.5