AMD Ryzen 9 3900XT 12-Core testing with a MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 22.04 via the Phoronix Test Suite.
a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vDisk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8701021Graphics Notes: BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: 113-D0500100-102Java Notes: OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04)Python Notes: Python 3.10.12Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
b Processor: AMD Ryzen 9 3900XT 12-Core @ 3.80GHz (12 Cores / 24 Threads), Motherboard: MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS), Chipset: AMD Starship/Matisse, Memory: 16GB, Disk: 500GB Seagate FireCuda 520 SSD ZP500GM30002, Graphics: AMD Radeon RX 56/64 8GB (1630/945MHz), Audio: AMD Vega 10 HDMI Audio, Monitor: ASUS MG28U, Network: Realtek Device 2600 + Realtek Killer E3000 2.5GbE + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04, Kernel: 6.2.0-35-generic (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.49), Vulkan: 1.3.204, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 3840x2160
okt OpenBenchmarking.org Phoronix Test Suite AMD Ryzen 9 3900XT 12-Core @ 3.80GHz (12 Cores / 24 Threads) MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS) AMD Starship/Matisse 16GB 500GB Seagate FireCuda 520 SSD ZP500GM30002 AMD Radeon RX 56/64 8GB (1630/945MHz) AMD Vega 10 HDMI Audio ASUS MG28U Realtek Device 2600 + Realtek Killer E3000 2.5GbE + Intel Wi-Fi 6 AX200 Ubuntu 22.04 6.2.0-35-generic (x86_64) GNOME Shell 42.2 X Server + Wayland 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.49) 1.3.204 GCC 11.4.0 ext4 3840x2160 Processor Motherboard Chipset Memory Disk Graphics Audio Monitor Network OS Kernel Desktop Display Server OpenGL Vulkan Compiler File-System Screen Resolution Okt Benchmarks System Logs - Transparent Huge Pages: madvise - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v - NONE / errors=remount-ro,relatime,rw / Block Size: 4096 - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8701021 - BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: 113-D0500100-102 - OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04) - Python 3.10.12 - gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
a vs. b Comparison Phoronix Test Suite Baseline +57.9% +57.9% +115.8% +115.8% +173.7% +173.7% +231.6% +231.6% 10.5% 10.4% 5.8% 4.3% 4.2% 4.2% 4% 3.9% 3.7% 3.3% 3.1% 2.8% 2.8% 2.7% 2.3% 2.2% 8 231.7% 4 112.2% 2 65.3% 800 - 100 - 500 - 100 800 - 100 - 500 - 100 800 - 100 - 800 - 100 6.3% 800 - 100 - 800 - 100 6.3% Unkeyed Algorithms 1 - 256 - 32 5.5% CPU Stress 5.4% IP Shapes 3D - f32 - CPU 5.3% RTLightmap.hdr.4096x4096 - CPU-Only 4.5% IP Shapes 1D - u8s8f32 - CPU IP Shapes 3D - u8s8f32 - CPU 4.3% 100 - 1000 - Read Only - Average Latency 100 - 1000 - Read Only Garlicoin 800 - 1 - 200 - 400 800 - 100 - 800 - 400 3.7% 800 - 100 - 800 - 400 3.7% Time To Compile H4_ae MEMFD D.P.B 3% IO_uring 2.9% Ringcoin 1 - 256 - 57 2.8% LiH_ae_MSD 800 - 1 - 200 - 400 800 - 1 - 800 - 400 2.5% G.C.S.F 2.4% CPU - blazeface Skeincoin 2.3% RT.hdr_alb_nrm.3840x2160 - CPU-Only 2.2% 800 - 1 - 500 - 100 800 - 100 - 500 - 400 2.1% 800 - 1 - 800 - 400 2% SQLite SQLite SQLite Apache IoTDB Apache IoTDB Apache IoTDB Apache IoTDB Crypto++ Liquid-DSP Stress-NG oneDNN Intel Open Image Denoise oneDNN oneDNN PostgreSQL PostgreSQL Cpuminer-Opt Apache IoTDB Apache IoTDB Apache IoTDB Build2 QMCPACK Stress-NG srsRAN Project Stress-NG Cpuminer-Opt Liquid-DSP QMCPACK Apache IoTDB Apache IoTDB Stress-NG NCNN Cpuminer-Opt Intel Open Image Denoise Apache IoTDB Apache IoTDB Apache IoTDB a b
okt sqlite: 1 sqlite: 2 sqlite: 4 sqlite: 8 3dmark: 1920 x 1080 quantlib: Multi-Threaded quantlib: Single-Threaded cryptopp: Unkeyed Algorithms hpcg: 104 104 104 - 60 cloverleaf: clover_bm cloverleaf: clover_bm64_short cp2k: H20-64 cp2k: Fayalite-FIST libxsmm: 128 libxsmm: 32 libxsmm: 64 palabos: 100 qmcpack: H4_ae qmcpack: Li2_STO_ae qmcpack: LiH_ae_MSD qmcpack: simple-H2O qmcpack: O_ae_pyscf_UHF qmcpack: FeCO6_b3lyp_gms openradioss: Bumper Beam openradioss: Chrysler Neon 1M openradioss: Cell Phone Drop Test openradioss: Bird Strike on Windshield openradioss: Rubber O-Ring Seal Installation openradioss: INIVOL and Fluid Structure Interaction Drop Container z3: 1.smt2 z3: 2.smt2 nekrs: Kershaw nekrs: TurboPipe Periodic srsran: Downlink Processor Benchmark srsran: PUSCH Processor Benchmark, Throughput Total srsran: PUSCH Processor Benchmark, Throughput Thread easywave: e2Asean Grid + BengkuluSept2007 Source - 240 easywave: e2Asean Grid + BengkuluSept2007 Source - 1200 dav1d: Chimera 1080p dav1d: Summer Nature 4K dav1d: Summer Nature 1080p dav1d: Chimera 1080p 10-bit embree: Pathtracer - Crown embree: Pathtracer ISPC - Crown embree: Pathtracer - Asian Dragon embree: Pathtracer - Asian Dragon Obj embree: Pathtracer ISPC - Asian Dragon embree: Pathtracer ISPC - Asian Dragon Obj svt-av1: Preset 4 - Bosphorus 4K svt-av1: Preset 8 - Bosphorus 4K svt-av1: Preset 12 - Bosphorus 4K svt-av1: Preset 13 - Bosphorus 4K svt-av1: Preset 4 - Bosphorus 1080p svt-av1: Preset 8 - Bosphorus 1080p svt-av1: Preset 12 - Bosphorus 1080p svt-av1: Preset 13 - Bosphorus 1080p vvenc: Bosphorus 4K - Fast vvenc: Bosphorus 4K - Faster vvenc: Bosphorus 1080p - Fast vvenc: Bosphorus 1080p - Faster oidn: RT.hdr_alb_nrm.3840x2160 - CPU-Only oidn: RT.ldr_alb_nrm.3840x2160 - CPU-Only oidn: RTLightmap.hdr.4096x4096 - CPU-Only openvkl: vklBenchmarkCPU ISPC openvkl: vklBenchmarkCPU Scalar ospray: particle_volume/ao/real_time ospray: particle_volume/scivis/real_time ospray: particle_volume/pathtracer/real_time ospray: gravity_spheres_volume/dim_512/ao/real_time ospray: gravity_spheres_volume/dim_512/scivis/real_time ospray: gravity_spheres_volume/dim_512/pathtracer/real_time avifenc: 0 avifenc: 2 avifenc: 6 avifenc: 6, Lossless avifenc: 10, Lossless build-gcc: Time To Compile build-gem5: Time To Compile build-godot: Time To Compile build-llvm: Ninja build-llvm: Unix Makefiles build-nodejs: Time To Compile build2: Time To Compile onednn: IP Shapes 1D - f32 - CPU onednn: IP Shapes 3D - f32 - CPU onednn: IP Shapes 1D - u8s8f32 - CPU onednn: IP Shapes 3D - u8s8f32 - CPU onednn: Convolution Batch Shapes Auto - f32 - CPU onednn: Deconvolution Batch shapes_1d - f32 - CPU onednn: Deconvolution Batch shapes_3d - f32 - CPU onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU onednn: Recurrent Neural Network Training - f32 - CPU onednn: Recurrent Neural Network Inference - f32 - CPU onednn: Recurrent Neural Network Training - u8s8f32 - CPU onednn: Recurrent Neural Network Inference - u8s8f32 - CPU onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU ospray-studio: 1 - 4K - 1 - Path Tracer - CPU ospray-studio: 2 - 4K - 1 - Path Tracer - CPU ospray-studio: 3 - 4K - 1 - Path Tracer - CPU ospray-studio: 1 - 4K - 16 - Path Tracer - CPU ospray-studio: 1 - 4K - 32 - Path Tracer - CPU ospray-studio: 2 - 4K - 16 - Path Tracer - CPU ospray-studio: 2 - 4K - 32 - Path Tracer - CPU ospray-studio: 3 - 4K - 16 - Path Tracer - CPU ospray-studio: 3 - 4K - 32 - Path Tracer - CPU ospray-studio: 1 - 1080p - 1 - Path Tracer - CPU ospray-studio: 2 - 1080p - 1 - Path Tracer - CPU ospray-studio: 3 - 1080p - 1 - Path Tracer - CPU ospray-studio: 1 - 1080p - 16 - Path Tracer - CPU ospray-studio: 1 - 1080p - 32 - Path Tracer - CPU ospray-studio: 2 - 1080p - 16 - Path Tracer - CPU ospray-studio: 2 - 1080p - 32 - Path Tracer - CPU ospray-studio: 3 - 1080p - 16 - Path Tracer - CPU ospray-studio: 3 - 1080p - 32 - Path Tracer - CPU encode-opus: WAV To Opus Encode cpuminer-opt: Magi cpuminer-opt: scrypt cpuminer-opt: Deepcoin cpuminer-opt: Ringcoin cpuminer-opt: Blake-2 S cpuminer-opt: Garlicoin cpuminer-opt: Skeincoin cpuminer-opt: Myriad-Groestl cpuminer-opt: LBC, LBRY Credits cpuminer-opt: Quad SHA-256, Pyrite cpuminer-opt: Triple SHA-256, Onecoin liquid-dsp: 1 - 256 - 32 liquid-dsp: 1 - 256 - 57 liquid-dsp: 2 - 256 - 32 liquid-dsp: 2 - 256 - 57 liquid-dsp: 4 - 256 - 32 liquid-dsp: 4 - 256 - 57 liquid-dsp: 8 - 256 - 32 liquid-dsp: 8 - 256 - 57 liquid-dsp: 1 - 256 - 512 liquid-dsp: 16 - 256 - 32 liquid-dsp: 16 - 256 - 57 liquid-dsp: 2 - 256 - 512 liquid-dsp: 24 - 256 - 32 liquid-dsp: 24 - 256 - 57 liquid-dsp: 4 - 256 - 512 liquid-dsp: 8 - 256 - 512 liquid-dsp: 16 - 256 - 512 liquid-dsp: 24 - 256 - 512 apache-iotdb: 800 - 1 - 200 - 100 apache-iotdb: 800 - 1 - 200 - 100 apache-iotdb: 800 - 1 - 200 - 400 apache-iotdb: 800 - 1 - 200 - 400 apache-iotdb: 800 - 1 - 500 - 100 apache-iotdb: 800 - 1 - 500 - 100 apache-iotdb: 800 - 1 - 500 - 400 apache-iotdb: 800 - 1 - 500 - 400 apache-iotdb: 800 - 1 - 800 - 100 apache-iotdb: 800 - 1 - 800 - 100 apache-iotdb: 800 - 1 - 800 - 400 apache-iotdb: 800 - 1 - 800 - 400 apache-iotdb: 800 - 100 - 200 - 100 apache-iotdb: 800 - 100 - 200 - 100 apache-iotdb: 800 - 100 - 500 - 100 apache-iotdb: 800 - 100 - 500 - 100 apache-iotdb: 800 - 100 - 500 - 400 apache-iotdb: 800 - 100 - 500 - 400 apache-iotdb: 800 - 100 - 800 - 100 apache-iotdb: 800 - 100 - 800 - 100 apache-iotdb: 800 - 100 - 800 - 400 apache-iotdb: 800 - 100 - 800 - 400 memcached: 1:10 memcached: 1:100 pgbench: 100 - 1000 - Read Only pgbench: 100 - 1000 - Read Only - Average Latency pgbench: 100 - 1000 - Read Write pgbench: 100 - 1000 - Read Write - Average Latency tensorflow: CPU - 16 - ResNet-50 tensorflow: CPU - 32 - ResNet-50 deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Synchronous Single-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Synchronous Single-Stream deepsparse: ResNet-50, Baseline - Synchronous Single-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Synchronous Single-Stream deepsparse: ResNet-50, Sparse INT8 - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream stress-ng: Hash stress-ng: MMAP stress-ng: NUMA stress-ng: Pipe stress-ng: Poll stress-ng: Zlib stress-ng: Futex stress-ng: MEMFD stress-ng: Mutex stress-ng: Atomic stress-ng: Crypto stress-ng: Malloc stress-ng: Cloning stress-ng: Forking stress-ng: Pthread stress-ng: AVL Tree stress-ng: IO_uring stress-ng: SENDFILE stress-ng: CPU Cache stress-ng: CPU Stress stress-ng: Semaphores stress-ng: Matrix Math stress-ng: Vector Math stress-ng: AVX-512 VNNI stress-ng: Function Call stress-ng: x86_64 RdRand stress-ng: Floating Point stress-ng: Matrix 3D Math stress-ng: Memory Copying stress-ng: Vector Shuffle stress-ng: Mixed Scheduler stress-ng: Socket Activity stress-ng: Wide Vector Math stress-ng: Context Switching stress-ng: Fused Multiply-Add stress-ng: Vector Floating Point stress-ng: Glibc C String Functions stress-ng: Glibc Qsort Data Sorting stress-ng: System V Message Passing gpaw: Carbon Nanotube ncnn: CPU - mobilenet ncnn: CPU-v2-v2 - mobilenet-v2 ncnn: CPU-v3-v3 - mobilenet-v3 ncnn: CPU - shufflenet-v2 ncnn: CPU - mnasnet ncnn: CPU - efficientnet-b0 ncnn: CPU - blazeface ncnn: CPU - googlenet ncnn: CPU - vgg16 ncnn: CPU - resnet18 ncnn: CPU - alexnet ncnn: CPU - resnet50 ncnn: CPU - yolov4-tiny ncnn: CPU - squeezenet_ssd ncnn: CPU - regnety_400m ncnn: CPU - vision_transformer ncnn: CPU - FastestDet blender: BMW27 - CPU-Only blender: Classroom - CPU-Only blender: Fishy Cat - CPU-Only blender: Barbershop - CPU-Only blender: Pabellon Barcelona - CPU-Only openvino: Face Detection FP16 - CPU openvino: Face Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP32 - CPU openvino: Person Detection FP32 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Face Detection Retail FP16 - CPU openvino: Face Detection Retail FP16 - CPU openvino: Road Segmentation ADAS FP16 - CPU openvino: Road Segmentation ADAS FP16 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Face Detection Retail FP16-INT8 - CPU openvino: Face Detection Retail FP16-INT8 - CPU openvino: Road Segmentation ADAS FP16-INT8 - CPU openvino: Road Segmentation ADAS FP16-INT8 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Handwritten English Recognition FP16 - CPU openvino: Handwritten English Recognition FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Handwritten English Recognition FP16-INT8 - CPU openvino: Handwritten English Recognition FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU cassandra: Writes hadoop: Create - 20 - 100000 nginx: 100 nginx: 200 nginx: 500 nginx: 1000 whisper-cpp: ggml-base.en - 2016 State of the Union whisper-cpp: ggml-small.en - 2016 State of the Union whisper-cpp: ggml-medium.en - 2016 State of the Union brl-cad: VGR Performance Metric a b 15.155 26.109 31.238 46.349 251.67 42432.3 3064.1 415.494466 5.10918 134.11 391.22 102.991 171.292 231 55 112.8 40.839 24.37 229.86 111.68 26.114 185.19 169.28 131.78 1612.17 98.05 272.77 134.16 600.03 29.481 75.478 1771410000 2989030000 824.7 2003.6 245.5 12.292 382.078 402.46 196.13 793.04 457.63 15.3439 14.034 16.2704 14.6908 15.5772 13.4423 3.15 43.194 92.08 92.635 8.875 80.613 352.036 397.449 4.422 9.018 14.143 28.39 0.46 0.44 0.23 230 125 3.83843 3.79088 111.963 1.94041 1.82533 3.07954 128.999 63.918 6.329 10.593 6.291 1095.023 460.886 281.547 573.501 591.045 123.554 4.76396 10.6365 1.87701 0.893574 22.3492 7.4047 5.39869 23.944 2.47724 3.44831 3936.51 2418.33 3923.99 2410.82 3937.24 2409.49 10796 11044 12790 178923 354104 182398 359981 210828 415946 2712 2773 3222 49808 92700 50479 94954 57406 108963 29.929 370.56 128.03 4267.42 1882.92 73570 1281.81 18940 6257.66 8168.31 28470 40300 47740000 53633000 88956000 102790000 174840000 200240000 344230000 391000000 10723000 640290000 630380000 21283000 890410000 732320000 39977000 78180000 145080000 198510000 870895 21.46 883206 82.72 1598972 29.57 1680328 108.87 1353913 57.1 1946289 151.1 27513199 68.27 22648486 214.04 23078340 783.24 23311141 335.42 19627767 1540.09 1646210.97 1590140.06 492881 2.029 9823 101.804 10.21 10.05 9.9426 603.4131 8.1515 122.6699 237.5691 25.2289 130.7431 7.644 94.1953 63.6788 51.7574 19.314 31.7265 189.0831 19.8169 50.4539 125.0097 47.9742 85.9707 11.6246 737.2404 8.1177 466.2224 2.1424 55.5772 107.8695 46.3632 21.5583 12.676 473.3056 9.3034 107.4805 124.4685 48.1839 85.5166 11.6861 56.4458 106.2711 46.7111 21.4011 86.3167 69.4918 63.8366 15.6588 12.2373 490.268 11.2075 89.2132 126.6146 47.3217 57.339 17.4329 43.7603 137.0885 32.1494 31.0984 9.9597 602.3735 8.1486 122.7143 2849944.32 169.42 134.28 5085408.46 1274815.51 1529.17 2719437.33 167.93 3757618.65 546.82 30063.86 6492227.26 876.55 25858.27 112428.11 116.18 151644.87 127691.03 1265046.6 31748.06 14365053.84 76846.98 87181.78 527354.87 9373.74 6422.15 4282.52 887.15 3721.25 8782.71 8305.74 7209.11 587699.43 2670705.05 13209967.5 35283.23 13058517.39 370.88 8498512.2 310.499 13.21 4.24 3.64 4.62 3.81 6.21 1.76 14.18 52.11 9.75 9.14 18.14 24.32 11.93 10.3 70.77 5.11 106.56 289.11 134.28 1113.55 345.49 3.14 1907.31 24.67 242.85 24.7 242.84 150.52 39.83 4.3 1389.42 674.92 8.87 39.08 153.43 354.56 16.91 302.35 19.82 1065.97 5.62 158.63 37.79 34.38 174.33 424.82 28.23 328.8 18.23 129.11 92.88 9800.16 1.22 134.26 89.28 14175.7 0.84 110053 18804 79156.02 76304.8 72766.53 63534.92 160.65923 475.27406 1447.53988 189733 15.272 43.165 66.299 153.746 253.07 42491.7 3055.5 439.590685 5.09819 135.45 391.60 102.745 171.272 230.3 54.9 112.7 40.6509 23.6 231.53 108.69 26.329 186.52 169.43 131.94 1610.5 97.7 272.42 134.76 599.25 29.326 76.66 1777340000 2989160000 800.9 1999.6 245.1 12.265 382.269 400.81 196.12 793.9 458.1 15.1862 14.1009 16.3297 14.7704 15.649 13.4374 3.109 43.193 93.275 91.617 8.897 79.873 350.351 394.016 4.415 9.051 14.085 28.412 0.45 0.44 0.22 230 126 3.8418 3.79143 110.473 1.95769 1.83527 3.07569 128.798 63.175 6.238 10.513 6.229 1094.513 457.199 281.706 575.033 600.278 468.621 119.192 4.7208 11.1973 1.79957 0.931971 22.3837 7.51062 5.37146 24.1847 2.46296 3.47437 3965.21 2414.21 3959.79 2405.08 3967.68 2412.87 10809 11022 12792 179435 352945 182424 359095 213159 415202 2718 2762 3213 49490 93216 50427 94957 57662 108921 29.64 371.36 128.06 4307.38 1936.29 73540 1333.62 18510 6205.35 8157.18 28390 40150 45239000 52191000 90487000 104190000 176360000 201440000 341300000 395730000 10761000 633870000 631120000 21268000 890730000 721140000 39956000 78962000 143910000 198780000 876234 21.31 907114 79.59 1634118 29.06 1678639 108.49 1345876 57.24 1907446 154.86 27931670 67.75 25003579 193.72 22919674 800.04 21936964 356.7 18927006 1597.52 1656534.13 1598152.08 513618 1.947 9833 101.701 10.29 10.06 9.9514 601.8748 8.1693 122.4024 237.6562 25.2209 131.2231 7.6158 93.8829 63.8902 52.2652 19.126 31.5674 190.0338 19.6988 50.7568 125.1795 47.9091 85.8786 11.637 737.749 8.1122 467.1283 2.1383 55.8768 107.3495 46.4968 21.497 12.6878 472.8641 9.3103 107.4009 124.7254 48.065 85.4024 11.7019 56.3444 106.3662 46.6825 21.415 86.3419 69.4695 64.0296 15.6117 12.3051 487.5642 11.2046 89.2356 126.9044 47.2413 57.5387 17.372 44.0421 136.2151 32.3176 30.9366 9.9696 600.8695 8.1412 122.8253 2861814.33 168.18 134.69 5137270.21 1269594.39 1536.76 2724951.63 173.1 3763560.06 550.8 30023.08 6507116.41 876.19 26129.74 112860.19 116.68 147373.62 127965.64 1276371.48 30131.49 14280745.55 77971.11 87270.26 528272.84 9463.1 6421.87 4272.51 871.12 3725.74 8803.33 8320.26 7240.74 584000.31 2690991.53 13224332.05 35339.57 12751183.72 370.05 8490199.38 310.453 13.2 4.23 3.62 4.6 3.81 6.18 1.72 14.03 52.55 9.82 9.23 17.98 24.27 11.91 10.14 70.42 5.09 107.52 290.28 133.77 1113.73 343.58 3.14 1894.17 24.42 245.33 24.3 246.67 149.62 40.08 4.3 1388.86 678.1 8.83 39.27 152.68 354.06 16.93 303.21 19.76 1067.32 5.61 158.31 37.87 34.87 171.89 425.1 28.21 325.14 18.44 130.15 92.11 9776.02 1.22 134.3 89.31 14139.29 0.84 111134 19128 78192.19 76041.37 72403.86 62870.64 159.8743 471.32178 1472.8675 190422 OpenBenchmarking.org
SQLite This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better SQLite 3.41.2 Threads / Copies: 1 a b 4 8 12 16 20 15.16 15.27 1. (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm
3DMark Wild Life Extreme This test profile only automates the vendor build of 3DMark with its command-line / JSON support. If you do not have a licensed copy of the necessary 3DMark binaries in your Phoronix Test Suite download cache on your system, this test profile will not do anything and simply fail. You must have already obtained the proper licensed binaries from UL for this test profile to work -- this test profile simply automates the firing of the 3DMark benchmark at your desired resolution and capturing the results within the Phoronix Test Suite while you must already have the necessary 3DMark files on your system. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better 3DMark Wild Life Extreme 1.1.2.1 Resolution: 1920 x 1080 a b 60 120 180 240 300 251.67 253.07
QuantLib QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MFLOPS, More Is Better QuantLib 1.32 Configuration: Multi-Threaded a b 9K 18K 27K 36K 45K 42432.3 42491.7 1. (CXX) g++ options: -O3 -march=native -fPIE -pie
OpenBenchmarking.org MFLOPS, More Is Better QuantLib 1.32 Configuration: Single-Threaded a b 700 1400 2100 2800 3500 3064.1 3055.5 1. (CXX) g++ options: -O3 -march=native -fPIE -pie
OpenBenchmarking.org Seconds, Fewer Is Better CloverLeaf 1.3 Input: clover_bm64_short a b 80 160 240 320 400 391.22 391.60 1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp
CP2K Molecular Dynamics CP2K is an open-source molecular dynamics software package focused on quantum chemistry and solid-state physics. More details on the CP2K benchmark test cases and details can be found @ https://www.cp2k.org/performance Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better CP2K Molecular Dynamics 2023.1 Input: H20-64 a b 20 40 60 80 100 102.99 102.75 1. (F9X) gfortran options: -fopenmp -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kdbm -lcp2kgrid -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -lhdf5 -lhdf5_hl -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -lopenblas -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm
Input: H2O-DFT-LS
a: The test quit with a non-zero exit status. E: mpirun noticed that process rank 6 with PID 0 on node phoronix-MS-7C34 exited on signal 9 (Killed).
b: The test quit with a non-zero exit status. E: mpirun noticed that process rank 10 with PID 0 on node phoronix-MS-7C34 exited on signal 9 (Killed).
OpenBenchmarking.org Seconds, Fewer Is Better CP2K Molecular Dynamics 2023.1 Input: Fayalite-FIST a b 40 80 120 160 200 171.29 171.27 1. (F9X) gfortran options: -fopenmp -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kdbm -lcp2kgrid -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -lhdf5 -lhdf5_hl -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -lopenblas -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm
libxsmm Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GFLOPS/s, More Is Better libxsmm 2-1.17-3645 M N K: 128 a b 50 100 150 200 250 231.0 230.3 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -pedantic -O2 -fopenmp -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2
OpenBenchmarking.org GFLOPS/s, More Is Better libxsmm 2-1.17-3645 M N K: 32 a b 12 24 36 48 60 55.0 54.9 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -pedantic -O2 -fopenmp -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2
OpenBenchmarking.org GFLOPS/s, More Is Better libxsmm 2-1.17-3645 M N K: 64 a b 30 60 90 120 150 112.8 112.7 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -pedantic -O2 -fopenmp -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2
Palabos The Palabos library is a framework for general purpose Computational Fluid Dynamics (CFD). Palabos uses a kernel based on the Lattice Boltzmann method. This test profile uses the Palabos MPI-based Cavity3D benchmark. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Mega Site Updates Per Second, More Is Better Palabos 2.3 Grid Size: 100 a b 9 18 27 36 45 40.84 40.65 1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm
QMCPACK QMCPACK is a modern high-performance open-source Quantum Monte Carlo (QMC) simulation code making use of MPI for this benchmark of the H20 example code. QMCPACK is an open-source production level many-body ab initio Quantum Monte Carlo code for computing the electronic structure of atoms, molecules, and solids. QMCPACK is supported by the U.S. Department of Energy. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Total Execution Time - Seconds, Fewer Is Better QMCPACK 3.17.1 Input: H4_ae a b 6 12 18 24 30 24.37 23.60 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
OpenBenchmarking.org Total Execution Time - Seconds, Fewer Is Better QMCPACK 3.17.1 Input: Li2_STO_ae a b 50 100 150 200 250 229.86 231.53 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
OpenBenchmarking.org Total Execution Time - Seconds, Fewer Is Better QMCPACK 3.17.1 Input: LiH_ae_MSD a b 30 60 90 120 150 111.68 108.69 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
OpenBenchmarking.org Total Execution Time - Seconds, Fewer Is Better QMCPACK 3.17.1 Input: simple-H2O a b 6 12 18 24 30 26.11 26.33 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
OpenBenchmarking.org Total Execution Time - Seconds, Fewer Is Better QMCPACK 3.17.1 Input: O_ae_pyscf_UHF a b 40 80 120 160 200 185.19 186.52 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
OpenBenchmarking.org Total Execution Time - Seconds, Fewer Is Better QMCPACK 3.17.1 Input: FeCO6_b3lyp_gms a b 40 80 120 160 200 169.28 169.43 1. (CXX) g++ options: -fopenmp -foffload=disable -finline-limit=1000 -fstrict-aliasing -funroll-all-loops -ffast-math -march=native -O3 -lm -ldl
OpenRadioss OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis OpenRadioss is based on Altair Radioss and open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2023.09.15 Model: Bumper Beam a b 30 60 90 120 150 131.78 131.94
nekRS nekRS is an open-source Navier Stokes solver based on the spectral element method. NekRS supports both CPU and GPU/accelerator support though this test profile is currently configured for CPU execution. NekRS is part of Nek5000 of the Mathematics and Computer Science MCS at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming on smaller systems. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org flops/rank, More Is Better nekRS 23.0 Input: Kershaw a b 400M 800M 1200M 1600M 2000M 1771410000 1777340000 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi
OpenBenchmarking.org flops/rank, More Is Better nekRS 23.0 Input: TurboPipe Periodic a b 600M 1200M 1800M 2400M 3000M 2989030000 2989160000 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -rdynamic -lmpi_cxx -lmpi
srsRAN Project srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Mbps, More Is Better srsRAN Project 23.5 Test: Downlink Processor Benchmark a b 200 400 600 800 1000 824.7 800.9 1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest
OpenBenchmarking.org Mbps, More Is Better srsRAN Project 23.5 Test: PUSCH Processor Benchmark, Throughput Total a b 400 800 1200 1600 2000 2003.6 1999.6 1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest
OpenBenchmarking.org Mbps, More Is Better srsRAN Project 23.5 Test: PUSCH Processor Benchmark, Throughput Thread a b 50 100 150 200 250 245.5 245.1 1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest
easyWave The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. EasyWave supports making use of OpenMP for CPU multi-threading and there are also GPU ports available but not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better easyWave r34 Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 a b 3 6 9 12 15 12.29 12.27 1. (CXX) g++ options: -O3 -fopenmp
OpenBenchmarking.org Seconds, Fewer Is Better easyWave r34 Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 a b 80 160 240 320 400 382.08 382.27 1. (CXX) g++ options: -O3 -fopenmp
Embree OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Crown a b 4 8 12 16 20 15.34 15.19 MIN: 15.22 / MAX: 15.63 MIN: 15.07 / MAX: 15.45
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Crown a b 4 8 12 16 20 14.03 14.10 MIN: 13.92 / MAX: 14.37 MIN: 13.99 / MAX: 14.35
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Asian Dragon a b 4 8 12 16 20 16.27 16.33 MIN: 16.19 / MAX: 16.52 MIN: 16.24 / MAX: 16.6
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Asian Dragon Obj a b 4 8 12 16 20 14.69 14.77 MIN: 14.61 / MAX: 14.96 MIN: 14.69 / MAX: 15.05
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Asian Dragon a b 4 8 12 16 20 15.58 15.65 MIN: 15.5 / MAX: 15.87 MIN: 15.56 / MAX: 15.92
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Asian Dragon Obj a b 3 6 9 12 15 13.44 13.44 MIN: 13.37 / MAX: 13.61 MIN: 13.37 / MAX: 13.62
SVT-AV1 OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 4 - Input: Bosphorus 4K a b 0.7088 1.4176 2.1264 2.8352 3.544 3.150 3.109 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 8 - Input: Bosphorus 4K a b 10 20 30 40 50 43.19 43.19 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 12 - Input: Bosphorus 4K a b 20 40 60 80 100 92.08 93.28 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 13 - Input: Bosphorus 4K a b 20 40 60 80 100 92.64 91.62 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 4 - Input: Bosphorus 1080p a b 2 4 6 8 10 8.875 8.897 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 8 - Input: Bosphorus 1080p a b 20 40 60 80 100 80.61 79.87 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 12 - Input: Bosphorus 1080p a b 80 160 240 320 400 352.04 350.35 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.7 Encoder Mode: Preset 13 - Input: Bosphorus 1080p a b 90 180 270 360 450 397.45 394.02 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
VVenC VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 4K - Video Preset: Fast a b 0.995 1.99 2.985 3.98 4.975 4.422 4.415 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 4K - Video Preset: Faster a b 3 6 9 12 15 9.018 9.051 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 1080p - Video Preset: Fast a b 4 8 12 16 20 14.14 14.09 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 1080p - Video Preset: Faster a b 7 14 21 28 35 28.39 28.41 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OSPRay Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: particle_volume/ao/real_time a b 0.8644 1.7288 2.5932 3.4576 4.322 3.83843 3.84180
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/ao/real_time a b 0.4405 0.881 1.3215 1.762 2.2025 1.94041 1.95769
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/scivis/real_time a b 0.4129 0.8258 1.2387 1.6516 2.0645 1.82533 1.83527
OpenBenchmarking.org Items Per Second, More Is Better OSPRay 2.12 Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time a b 0.6929 1.3858 2.0787 2.7716 3.4645 3.07954 3.07569
oneDNN This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU a b 1.0719 2.1438 3.2157 4.2876 5.3595 4.76396 4.72080 MIN: 4.51 MIN: 4.46 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU a b 3 6 9 12 15 10.64 11.20 MIN: 10.51 MIN: 11.1 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU a b 0.4223 0.8446 1.2669 1.6892 2.1115 1.87701 1.79957 MIN: 1.77 MIN: 1.75 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU a b 0.2097 0.4194 0.6291 0.8388 1.0485 0.893574 0.931971 MIN: 0.85 MIN: 0.89 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU a b 5 10 15 20 25 22.35 22.38 MIN: 21.78 MIN: 21.99 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU a b 2 4 6 8 10 7.40470 7.51062 MIN: 4.55 MIN: 4.54 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU a b 1.2147 2.4294 3.6441 4.8588 6.0735 5.39869 5.37146 MIN: 5.27 MIN: 5.28 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU a b 6 12 18 24 30 23.94 24.18 MIN: 23.62 MIN: 23.75 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU a b 0.5574 1.1148 1.6722 2.2296 2.787 2.47724 2.46296 MIN: 2.41 MIN: 2.4 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU a b 0.7817 1.5634 2.3451 3.1268 3.9085 3.44831 3.47437 MIN: 3.31 MIN: 3.34 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU a b 900 1800 2700 3600 4500 3936.51 3965.21 MIN: 3929.07 MIN: 3958.22 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU a b 500 1000 1500 2000 2500 2418.33 2414.21 MIN: 2409.86 MIN: 2407.75 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU a b 800 1600 2400 3200 4000 3923.99 3959.79 MIN: 3912.85 MIN: 3952.22 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU a b 500 1000 1500 2000 2500 2410.82 2405.08 MIN: 2404.84 MIN: 2398.42 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU a b 900 1800 2700 3600 4500 3937.24 3967.68 MIN: 3929.35 MIN: 3957.76 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU a b 500 1000 1500 2000 2500 2409.49 2412.87 MIN: 2402.26 MIN: 2406.34 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OSPRay Studio Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU a b 2K 4K 6K 8K 10K 10796 10809
Opus Codec Encoding Opus is an open audio codec. Opus is a lossy audio compression format designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus five times. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Opus Codec Encoding 1.4 WAV To Opus Encode a b 7 14 21 28 35 29.93 29.64 1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm
Cpuminer-Opt Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 23.5 Algorithm: Magi a b 80 160 240 320 400 370.56 371.36 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 23.5 Algorithm: Myriad-Groestl a b 1300 2600 3900 5200 6500 6257.66 6205.35 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 32 a b 10M 20M 30M 40M 50M 47740000 45239000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 57 a b 11M 22M 33M 44M 55M 53633000 52191000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 32 a b 20M 40M 60M 80M 100M 88956000 90487000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 57 a b 20M 40M 60M 80M 100M 102790000 104190000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 32 a b 40M 80M 120M 160M 200M 174840000 176360000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 57 a b 40M 80M 120M 160M 200M 200240000 201440000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 32 a b 70M 140M 210M 280M 350M 344230000 341300000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 57 a b 80M 160M 240M 320M 400M 391000000 395730000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 1 - Buffer Length: 256 - Filter Length: 512 a b 2M 4M 6M 8M 10M 10723000 10761000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 32 a b 140M 280M 420M 560M 700M 640290000 633870000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 57 a b 140M 280M 420M 560M 700M 630380000 631120000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 2 - Buffer Length: 256 - Filter Length: 512 a b 5M 10M 15M 20M 25M 21283000 21268000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 24 - Buffer Length: 256 - Filter Length: 32 a b 200M 400M 600M 800M 1000M 890410000 890730000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 24 - Buffer Length: 256 - Filter Length: 57 a b 160M 320M 480M 640M 800M 732320000 721140000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 4 - Buffer Length: 256 - Filter Length: 512 a b 9M 18M 27M 36M 45M 39977000 39956000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 8 - Buffer Length: 256 - Filter Length: 512 a b 20M 40M 60M 80M 100M 78180000 78962000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 512 a b 30M 60M 90M 120M 150M 145080000 143910000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 24 - Buffer Length: 256 - Filter Length: 512 a b 40M 80M 120M 160M 200M 198510000 198780000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 100 a b 5 10 15 20 25 21.46 21.31 MAX: 24134.27 MAX: 24136.07
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400 a b 200K 400K 600K 800K 1000K 883206 907114
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 200 - Client Number: 400 a b 20 40 60 80 100 82.72 79.59 MAX: 27561.24 MAX: 27087.19
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 a b 300K 600K 900K 1200K 1500K 1598972 1634118
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 100 a b 7 14 21 28 35 29.57 29.06 MAX: 24234.87 MAX: 24192.43
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400 a b 400K 800K 1200K 1600K 2000K 1680328 1678639
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 500 - Client Number: 400 a b 20 40 60 80 100 108.87 108.49 MAX: 27415.25 MAX: 27965.48
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 a b 300K 600K 900K 1200K 1500K 1353913 1345876
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 100 a b 13 26 39 52 65 57.10 57.24 MAX: 24233.95 MAX: 24166.47
OpenBenchmarking.org point/sec, More Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400 a b 400K 800K 1200K 1600K 2000K 1946289 1907446
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 1 - Sensor Count: 800 - Client Number: 400 a b 30 60 90 120 150 151.10 154.86 MAX: 27639.32 MAX: 28090.85
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 100 a b 15 30 45 60 75 68.27 67.75 MAX: 24204.26 MAX: 24201.36
Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 200 - Client Number: 400
a: The test quit with a non-zero exit status.
b: The test quit with a non-zero exit status.
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 100 a b 50 100 150 200 250 214.04 193.72 MAX: 24485.71 MAX: 24261.28
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 500 - Client Number: 400 a b 200 400 600 800 1000 783.24 800.04 MAX: 35456.18 MAX: 34221.05
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 100 a b 80 160 240 320 400 335.42 356.70 MAX: 24363.54 MAX: 25191.24
OpenBenchmarking.org Average Latency, Fewer Is Better Apache IoTDB 1.2 Device Count: 800 - Batch Size Per Write: 100 - Sensor Count: 800 - Client Number: 400 a b 300 600 900 1200 1500 1540.09 1597.52 MAX: 32073.99 MAX: 33007.11
Memcached Memcached is a high performance, distributed memory object caching system. This Memcached test profiles makes use of memtier_benchmark for excuting this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Ops/sec, More Is Better Memcached 1.6.19 Set To Get Ratio: 1:10 a b 400K 800K 1200K 1600K 2000K 1646210.97 1656534.13 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Memcached 1.6.19 Set To Get Ratio: 1:100 a b 300K 600K 900K 1200K 1500K 1590140.06 1598152.08 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
DuckDB DuckDB is an in-progress SQL OLAP database management system optimized for analytics and features a vectorized and parallel engine. Learn more via the OpenBenchmarking.org test page.
Benchmark: IMDB
a: The test run did not produce a result.
b: The test run did not produce a result.
Benchmark: TPC-H Parquet
a: The test run did not produce a result.
b: The test run did not produce a result.
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 16 Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency a b 0.4565 0.913 1.3695 1.826 2.2825 2.029 1.947 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 16 Scaling Factor: 100 - Clients: 1000 - Mode: Read Write a b 2K 4K 6K 8K 10K 9823 9833 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 16 Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency a b 20 40 60 80 100 101.80 101.70 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries if desired for complementary metrics. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.12 Device: CPU - Batch Size: 16 - Model: ResNet-50 a b 3 6 9 12 15 10.21 10.29
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: MMAP a b 40 80 120 160 200 169.42 168.18 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: NUMA a b 30 60 90 120 150 134.28 134.69 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Pipe a b 1.1M 2.2M 3.3M 4.4M 5.5M 5085408.46 5137270.21 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Poll a b 300K 600K 900K 1200K 1500K 1274815.51 1269594.39 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Zlib a b 300 600 900 1200 1500 1529.17 1536.76 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Futex a b 600K 1200K 1800K 2400K 3000K 2719437.33 2724951.63 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: MEMFD a b 40 80 120 160 200 167.93 173.10 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Mutex a b 800K 1600K 2400K 3200K 4000K 3757618.65 3763560.06 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Atomic a b 120 240 360 480 600 546.82 550.80 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Crypto a b 6K 12K 18K 24K 30K 30063.86 30023.08 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Malloc a b 1.4M 2.8M 4.2M 5.6M 7M 6492227.26 6507116.41 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Cloning a b 200 400 600 800 1000 876.55 876.19 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Forking a b 6K 12K 18K 24K 30K 25858.27 26129.74 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Pthread a b 20K 40K 60K 80K 100K 112428.11 112860.19 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: AVL Tree a b 30 60 90 120 150 116.18 116.68 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: IO_uring a b 30K 60K 90K 120K 150K 151644.87 147373.62 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: SENDFILE a b 30K 60K 90K 120K 150K 127691.03 127965.64 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: CPU Cache a b 300K 600K 900K 1200K 1500K 1265046.60 1276371.48 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: CPU Stress a b 7K 14K 21K 28K 35K 31748.06 30131.49 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Semaphores a b 3M 6M 9M 12M 15M 14365053.84 14280745.55 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Matrix Math a b 20K 40K 60K 80K 100K 76846.98 77971.11 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Vector Math a b 20K 40K 60K 80K 100K 87181.78 87270.26 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: AVX-512 VNNI a b 110K 220K 330K 440K 550K 527354.87 528272.84 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Function Call a b 2K 4K 6K 8K 10K 9373.74 9463.10 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: x86_64 RdRand a b 1400 2800 4200 5600 7000 6422.15 6421.87 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Floating Point a b 900 1800 2700 3600 4500 4282.52 4272.51 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Matrix 3D Math a b 200 400 600 800 1000 887.15 871.12 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Memory Copying a b 800 1600 2400 3200 4000 3721.25 3725.74 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Vector Shuffle a b 2K 4K 6K 8K 10K 8782.71 8803.33 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Mixed Scheduler a b 2K 4K 6K 8K 10K 8305.74 8320.26 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Socket Activity a b 1600 3200 4800 6400 8000 7209.11 7240.74 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Wide Vector Math a b 130K 260K 390K 520K 650K 587699.43 584000.31 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Context Switching a b 600K 1200K 1800K 2400K 3000K 2670705.05 2690991.53 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Fused Multiply-Add a b 3M 6M 9M 12M 15M 13209967.50 13224332.05 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Vector Floating Point a b 8K 16K 24K 32K 40K 35283.23 35339.57 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Glibc C String Functions a b 3M 6M 9M 12M 15M 13058517.39 12751183.72 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: Glibc Qsort Data Sorting a b 80 160 240 320 400 370.88 370.05 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.16.04 Test: System V Message Passing a b 2M 4M 6M 8M 10M 8498512.20 8490199.38 1. (CXX) g++ options: -lm -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lmpfr -lpthread -lrt -lsctp -lz
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU-v2-v2 - Model: mobilenet-v2 a b 0.954 1.908 2.862 3.816 4.77 4.24 4.23 MIN: 4.18 / MAX: 4.68 MIN: 4.15 / MAX: 6.3 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU-v3-v3 - Model: mobilenet-v3 a b 0.819 1.638 2.457 3.276 4.095 3.64 3.62 MIN: 3.58 / MAX: 4.05 MIN: 3.57 / MAX: 4.05 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: shufflenet-v2 a b 1.0395 2.079 3.1185 4.158 5.1975 4.62 4.60 MIN: 4.57 / MAX: 5.08 MIN: 4.55 / MAX: 8.06 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: mnasnet a b 0.8573 1.7146 2.5719 3.4292 4.2865 3.81 3.81 MIN: 3.75 / MAX: 4.2 MIN: 3.75 / MAX: 4.27 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: efficientnet-b0 a b 2 4 6 8 10 6.21 6.18 MIN: 6.16 / MAX: 6.97 MIN: 6.13 / MAX: 6.73 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: blazeface a b 0.396 0.792 1.188 1.584 1.98 1.76 1.72 MIN: 1.73 / MAX: 2.01 MIN: 1.69 / MAX: 1.82 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: googlenet a b 4 8 12 16 20 14.18 14.03 MIN: 13.17 / MAX: 15.43 MIN: 13.2 / MAX: 15.03 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: vgg16 a b 12 24 36 48 60 52.11 52.55 MIN: 51.29 / MAX: 60.85 MIN: 51.15 / MAX: 145.81 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: resnet18 a b 3 6 9 12 15 9.75 9.82 MIN: 9.52 / MAX: 10.76 MIN: 9.51 / MAX: 10.84 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: alexnet a b 3 6 9 12 15 9.14 9.23 MIN: 8.83 / MAX: 11.56 MIN: 8.84 / MAX: 9.98 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: resnet50 a b 4 8 12 16 20 18.14 17.98 MIN: 17.92 / MAX: 19.11 MIN: 17.79 / MAX: 19.31 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: yolov4-tiny a b 6 12 18 24 30 24.32 24.27 MIN: 24.14 / MAX: 25.12 MIN: 24.09 / MAX: 25.04 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: squeezenet_ssd a b 3 6 9 12 15 11.93 11.91 MIN: 11.68 / MAX: 12.5 MIN: 11.64 / MAX: 12.21 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: regnety_400m a b 3 6 9 12 15 10.30 10.14 MIN: 10.22 / MAX: 10.77 MIN: 10.05 / MAX: 11.23 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: vision_transformer a b 16 32 48 64 80 70.77 70.42 MIN: 70.53 / MAX: 71.58 MIN: 70.1 / MAX: 77.29 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: FastestDet a b 1.1498 2.2996 3.4494 4.5992 5.749 5.11 5.09 MIN: 5.08 / MAX: 5.23 MIN: 5.04 / MAX: 5.14 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenVINO This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection FP16 - Device: CPU a b 0.7065 1.413 2.1195 2.826 3.5325 3.14 3.14 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Face Detection FP16 - Device: CPU a b 400 800 1200 1600 2000 1907.31 1894.17 MIN: 1845.25 / MAX: 1981.52 MIN: 1783.3 / MAX: 1991.75 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Person Detection FP16 - Device: CPU a b 6 12 18 24 30 24.67 24.42 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Person Detection FP16 - Device: CPU a b 50 100 150 200 250 242.85 245.33 MIN: 157.98 / MAX: 275.6 MIN: 164.54 / MAX: 277.99 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Person Detection FP32 - Device: CPU a b 6 12 18 24 30 24.7 24.3 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Person Detection FP32 - Device: CPU a b 50 100 150 200 250 242.84 246.67 MIN: 191.04 / MAX: 270.77 MIN: 191.91 / MAX: 277.67 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Vehicle Detection FP16 - Device: CPU a b 30 60 90 120 150 150.52 149.62 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Vehicle Detection FP16 - Device: CPU a b 9 18 27 36 45 39.83 40.08 MIN: 21.84 / MAX: 48.24 MIN: 30.77 / MAX: 49.12 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection FP16-INT8 - Device: CPU a b 0.9675 1.935 2.9025 3.87 4.8375 4.3 4.3 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Face Detection FP16-INT8 - Device: CPU a b 300 600 900 1200 1500 1389.42 1388.86 MIN: 1339.19 / MAX: 1419.26 MIN: 1363.36 / MAX: 1429.32 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection Retail FP16 - Device: CPU a b 150 300 450 600 750 674.92 678.10 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Face Detection Retail FP16 - Device: CPU a b 2 4 6 8 10 8.87 8.83 MIN: 4.07 / MAX: 24.04 MIN: 4.02 / MAX: 23.9 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16 - Device: CPU a b 9 18 27 36 45 39.08 39.27 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16 - Device: CPU a b 30 60 90 120 150 153.43 152.68 MIN: 59.59 / MAX: 199.32 MIN: 87.43 / MAX: 179.71 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU a b 80 160 240 320 400 354.56 354.06 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU a b 4 8 12 16 20 16.91 16.93 MIN: 9.89 / MAX: 26.56 MIN: 9.9 / MAX: 26.26 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16 - Device: CPU a b 70 140 210 280 350 302.35 303.21 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16 - Device: CPU a b 5 10 15 20 25 19.82 19.76 MIN: 18.83 / MAX: 29.95 MIN: 18.68 / MAX: 39.87 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection Retail FP16-INT8 - Device: CPU a b 200 400 600 800 1000 1065.97 1067.32 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Face Detection Retail FP16-INT8 - Device: CPU a b 1.2645 2.529 3.7935 5.058 6.3225 5.62 5.61 MIN: 4.32 / MAX: 13.99 MIN: 4.02 / MAX: 14.15 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16-INT8 - Device: CPU a b 40 80 120 160 200 158.63 158.31 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16-INT8 - Device: CPU a b 9 18 27 36 45 37.79 37.87 MIN: 32.26 / MAX: 49.53 MIN: 28.43 / MAX: 64.79 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU a b 8 16 24 32 40 34.38 34.87 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU a b 40 80 120 160 200 174.33 171.89 MIN: 158.35 / MAX: 200.36 MIN: 141.31 / MAX: 194.06 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU a b 90 180 270 360 450 424.82 425.10 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU a b 7 14 21 28 35 28.23 28.21 MIN: 22.9 / MAX: 46.18 MIN: 20.68 / MAX: 35.43 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU a b 70 140 210 280 350 328.80 325.14 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU a b 5 10 15 20 25 18.23 18.44 MIN: 16.05 / MAX: 25.61 MIN: 11.58 / MAX: 32.36 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16 - Device: CPU a b 30 60 90 120 150 129.11 130.15 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16 - Device: CPU a b 20 40 60 80 100 92.88 92.11 MIN: 70.91 / MAX: 128.35 MIN: 76.5 / MAX: 126.89 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b 2K 4K 6K 8K 10K 9800.16 9776.02 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b 0.2745 0.549 0.8235 1.098 1.3725 1.22 1.22 MIN: 0.7 / MAX: 10.62 MIN: 0.67 / MAX: 3.57 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16-INT8 - Device: CPU a b 30 60 90 120 150 134.26 134.30 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16-INT8 - Device: CPU a b 20 40 60 80 100 89.28 89.31 MIN: 78.99 / MAX: 107.97 MIN: 78.81 / MAX: 108.46 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b 3K 6K 9K 12K 15K 14175.70 14139.29 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b 0.189 0.378 0.567 0.756 0.945 0.84 0.84 MIN: 0.52 / MAX: 3.12 MIN: 0.52 / MAX: 8.49 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
nginx This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 100 a b 20K 40K 60K 80K 100K 79156.02 78192.19 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 200 a b 16K 32K 48K 64K 80K 76304.80 76041.37 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 500 a b 16K 32K 48K 64K 80K 72766.53 72403.86 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 1000 a b 14K 28K 42K 56K 70K 63534.92 62870.64 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
Apache HTTP Server This is a test of the Apache HTTPD web server. This Apache HTTPD web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients. Learn more via the OpenBenchmarking.org test page.
Concurrent Requests: 100
a: The test quit with a non-zero exit status. E: ./apache: 2: ./wrk-4.2.0/wrk: not found
b: The test quit with a non-zero exit status. E: ./apache: 2: ./wrk-4.2.0/wrk: not found
Concurrent Requests: 200
a: The test quit with a non-zero exit status. E: ./apache: 2: ./wrk-4.2.0/wrk: not found
b: The test quit with a non-zero exit status. E: ./apache: 2: ./wrk-4.2.0/wrk: not found
Concurrent Requests: 500
a: The test quit with a non-zero exit status. E: ./apache: 2: ./wrk-4.2.0/wrk: not found
b: The test quit with a non-zero exit status. E: ./apache: 2: ./wrk-4.2.0/wrk: not found
Concurrent Requests: 1000
a: The test quit with a non-zero exit status. E: ./apache: 2: ./wrk-4.2.0/wrk: not found
b: The test quit with a non-zero exit status. E: ./apache: 2: ./wrk-4.2.0/wrk: not found
Whisper.cpp Whisper.cpp is a port of OpenAI's Whisper model in C/C++. Whisper.cpp is developed by Georgi Gerganov for transcribing WAV audio files to text / speech recognition. Whisper.cpp supports ARM NEON, x86 AVX, and other advanced CPU features. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Whisper.cpp 1.4 Model: ggml-base.en - Input: 2016 State of the Union a b 40 80 120 160 200 160.66 159.87 1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread
OpenBenchmarking.org Seconds, Fewer Is Better Whisper.cpp 1.4 Model: ggml-small.en - Input: 2016 State of the Union a b 100 200 300 400 500 475.27 471.32 1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread
OpenBenchmarking.org Seconds, Fewer Is Better Whisper.cpp 1.4 Model: ggml-medium.en - Input: 2016 State of the Union a b 300 600 900 1200 1500 1447.54 1472.87 1. (CXX) g++ options: -O3 -std=c++11 -fPIC -pthread
BRL-CAD BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.36 VGR Performance Metric a b 40K 80K 120K 160K 200K 189733 190422 1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vDisk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8701021Graphics Notes: BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: 113-D0500100-102Java Notes: OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04)Python Notes: Python 3.10.12Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 30 October 2023 06:46 by user phoronix.
b Processor: AMD Ryzen 9 3900XT 12-Core @ 3.80GHz (12 Cores / 24 Threads), Motherboard: MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS), Chipset: AMD Starship/Matisse, Memory: 16GB, Disk: 500GB Seagate FireCuda 520 SSD ZP500GM30002, Graphics: AMD Radeon RX 56/64 8GB (1630/945MHz), Audio: AMD Vega 10 HDMI Audio, Monitor: ASUS MG28U, Network: Realtek Device 2600 + Realtek Killer E3000 2.5GbE + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04, Kernel: 6.2.0-35-generic (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.49), Vulkan: 1.3.204, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 3840x2160
Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vDisk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8701021Graphics Notes: BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: 113-D0500100-102Java Notes: OpenJDK Runtime Environment (build 11.0.20.1+1-post-Ubuntu-0ubuntu122.04)Python Notes: Python 3.10.12Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 30 October 2023 17:11 by user phoronix.