AMD Ryzen Threadripper 3960X 24-Core testing with a MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS) and Gigabyte AMD Radeon RX 5500 XT 8GB on Ubuntu 22.04 via the Phoronix Test Suite.
a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vDisk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301025Graphics Notes: BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: xxx-xxx-xxxJava Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)Python Notes: Python 3.10.4Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
b Processor: AMD Ryzen Threadripper 3960X 24-Core @ 3.80GHz (24 Cores / 48 Threads), Motherboard: MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS), Chipset: AMD Starship/Matisse, Memory: 32GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: Gigabyte AMD Radeon RX 5500 XT 8GB (1900/875MHz), Audio: AMD Navi 10 HDMI Audio, Monitor: VA2431, Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04, Kernel: 5.19.0-051900rc7-generic (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server, OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47), Vulkan: 1.3.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 1920x1080
threadripper eo 2022 OpenBenchmarking.org Phoronix Test Suite AMD Ryzen Threadripper 3960X 24-Core @ 3.80GHz (24 Cores / 48 Threads) MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS) AMD Starship/Matisse 32GB 1000GB Sabrent Rocket 4.0 1TB Gigabyte AMD Radeon RX 5500 XT 8GB (1900/875MHz) AMD Navi 10 HDMI Audio VA2431 Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200 Ubuntu 22.04 5.19.0-051900rc7-generic (x86_64) GNOME Shell 42.2 X Server 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47) 1.3.204 GCC 11.2.0 ext4 1920x1080 Processor Motherboard Chipset Memory Disk Graphics Audio Monitor Network OS Kernel Desktop Display Server OpenGL Vulkan Compiler File-System Screen Resolution Threadripper Eo 2022 Benchmarks System Logs - Transparent Huge Pages: madvise - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v - NONE / errors=remount-ro,relatime,rw / Block Size: 4096 - Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301025 - BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: xxx-xxx-xxx - OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1) - Python 3.10.4 - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
a vs. b Comparison Phoronix Test Suite Baseline +19.4% +19.4% +38.8% +38.8% +58.2% +58.2% +77.6% +77.6% 37.9% 15.2% 7% 6.9% 5.7% 5.6% 5% 4.4% 4.4% 4.1% 3.3% 2.9% 2.9% 2.9% 2.8% 2.7% 2.7% 2.6% 2.6% 2.5% 2.5% 2.5% 2.4% 2.3% 2.2% 2.1% 100 - 250 - Read Write - Average Latency 77.7% 100 - 250 - Read Write 77.7% 480000 - 512 51.4% 44100 - 512 45.3% M.M.B.S.T - u8s8f32 - CPU IO_uring 30.4% GET - 50 23.1% 100 - 100 - Read Write 20.4% 100 - 100 - Read Write - Average Latency 20.4% IP Shapes 3D - u8s8f32 - CPU 19.2% 1920 x 1080 - Ultra Preset 12 - Bosphorus 4K 13.8% R.N.N.I - f32 - CPU 12% SET - 50 9.8% inception-v3 9.7% 100 - 100 - Read Only - Average Latency 8.5% 100 - 100 - Read Only 8.5% Futex 8.3% mobilenetV3 7.6% CPU - resnet50 7.6% 192000 - 1024 7.4% Default CPU Cache CPU - googlenet 6.8% R.N.N.T - bf16bf16bf16 - CPU 6.4% R.N.N.T - u8s8f32 - CPU 6.3% R.N.N.I - bf16bf16bf16 - CPU 5.9% resnet-v2-50 5.9% R.N.N.I - u8s8f32 - CPU 5.9% 1000000 - 100 - S.5.B.T 5.8% 480000 - 1024 CPU - squeezenet_ssd 5.6% MobileNetV2_224 5.6% 192000 - 512 CPU - alexnet 5.6% R.N.N.T - f32 - CPU 5.5% CPU - resnet18 5.3% Seq Fill 5.2% N.T.C.B.b.u.S - S.S.S 5.1% N.T.C.B.b.u.S - S.S.S 5.1% M.M.B.S.T - f32 - CPU Update Rand 5% 1000000 - 100 - B.I.J.T.T 4.7% 100 - 250 - Read Only - Average Latency 4.7% nasnet 4.5% 1000000 - 100 - R.T.T 4.5% 100 - 250 - Read Only 4.5% CPU - mobilenet 4.5% 96000 - 1024 C.B.S.A - f32 - CPU 4.4% HWB Color Space D.R 4.4% Noise-Gaussian CPU - mnasnet 4% R.R.W.R 3.7% KNN CAD 3.4% CPU Stress CPU - efficientnet-b0 3.2% 4.P.1.P.S.2.Q 3.2% CPU - shufflenet-v2 3.2% D.B.s - u8s8f32 - CPU 3.1% CPU - vision_transformer 3.1% CPU-v2-v2 - mobilenet-v2 3% 4.P.1.P.S.2.Q 3% F.D.F - CPU CPU - yolov4-tiny 2.9% squeezenetv1.1 500 Speed 6 Realtime - Bosphorus 4K 2.8% CPU-v3-v3 - mobilenet-v3 2.8% OFDM_Test N.Q.A.B.b.u.S.1.P - A.M.S 2.7% N.Q.A.B.b.u.S.1.P - S.S.S N.Q.A.B.b.u.S.1.P - S.S.S N.Q.A.B.b.u.S.1.P - A.M.S 2.6% F.D.F - CPU 1.R.W.A.D.F.R.C.C GET - 500 2.6% All D.B.s - f32 - CPU 2.5% IP Shapes 1D - u8s8f32 - CPU 2.5% 1000000 - 100 - I.J.T.T 1920 x 1080 - High IP Shapes 1D - f32 - CPU 2.4% 1 1000000 - 100 - Group By Test Time 2.4% M.T.E.T.D.F - CPU 2.3% Socket Activity M.T.E.T.D.F - CPU 2.3% NUMA 2.3% Speed 8 Realtime - Bosphorus 1080p CPU - blazeface 2.2% Preset 4 - Bosphorus 1080p 2.1% Rand Read Windowed Gaussian 2.1% B.C 2% PostgreSQL PostgreSQL Stargate Digital Audio Workstation Stargate Digital Audio Workstation oneDNN Stress-NG Redis PostgreSQL PostgreSQL oneDNN Unvanquished SVT-AV1 oneDNN Redis Mobile Neural Network PostgreSQL PostgreSQL Stress-NG Mobile Neural Network NCNN Stargate Digital Audio Workstation WebP2 Image Encode Stress-NG NCNN oneDNN oneDNN oneDNN Mobile Neural Network oneDNN Apache Spark Stargate Digital Audio Workstation NCNN Mobile Neural Network Stargate Digital Audio Workstation NCNN oneDNN NCNN Facebook RocksDB Neural Magic DeepSparse Neural Magic DeepSparse oneDNN Facebook RocksDB Apache Spark PostgreSQL Mobile Neural Network Apache Spark PostgreSQL NCNN Stargate Digital Audio Workstation oneDNN GraphicsMagick 7-Zip Compression GraphicsMagick NCNN Facebook RocksDB Numenta Anomaly Benchmark Stress-NG NCNN srsRAN NCNN oneDNN NCNN NCNN srsRAN OpenVINO NCNN Mobile Neural Network nginx AOM AV1 NCNN srsRAN Neural Magic DeepSparse Neural Magic DeepSparse Neural Magic DeepSparse Neural Magic DeepSparse OpenVINO ClickHouse Redis JPEG XL Decoding libjxl oneDNN oneDNN Apache Spark Unvanquished oneDNN JPEG XL Decoding libjxl Apache Spark OpenVINO Stress-NG OpenVINO Stress-NG AOM AV1 NCNN SVT-AV1 Facebook RocksDB Numenta Anomaly Benchmark Numenta Anomaly Benchmark a b
threadripper eo 2022 stress-ng: MMAP stress-ng: NUMA stress-ng: Futex stress-ng: MEMFD stress-ng: Mutex stress-ng: Atomic stress-ng: Crypto stress-ng: Malloc stress-ng: Forking stress-ng: IO_uring stress-ng: SENDFILE stress-ng: CPU Cache stress-ng: CPU Stress stress-ng: Semaphores stress-ng: Matrix Math stress-ng: Vector Math stress-ng: Memory Copying stress-ng: Socket Activity stress-ng: Context Switching stress-ng: Glibc C String Functions stress-ng: Glibc Qsort Data Sorting stress-ng: System V Message Passing srsran: 4G PHY_DL_Test 100 PRB MIMO 64-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 64-QAM srsran: 4G PHY_DL_Test 100 PRB MIMO 256-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 256-QAM srsran: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM nekrs: TurboPipe Periodic ffmpeg: libx264 - Live ffmpeg: libx265 - Live ffmpeg: libx264 - Upload ffmpeg: libx265 - Upload ffmpeg: libx264 - Platform ffmpeg: libx265 - Platform ffmpeg: libx264 - Video On Demand ffmpeg: libx265 - Video On Demand openvino: Face Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP32 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU natron: Spaceship unvanquished: 1920 x 1080 - High unvanquished: 1920 x 1080 - Ultra unvanquished: 1920 x 1080 - Medium aom-av1: Speed 0 Two-Pass - Bosphorus 4K aom-av1: Speed 4 Two-Pass - Bosphorus 4K aom-av1: Speed 6 Realtime - Bosphorus 4K aom-av1: Speed 6 Two-Pass - Bosphorus 4K aom-av1: Speed 8 Realtime - Bosphorus 4K aom-av1: Speed 9 Realtime - Bosphorus 4K aom-av1: Speed 10 Realtime - Bosphorus 4K aom-av1: Speed 0 Two-Pass - Bosphorus 1080p aom-av1: Speed 4 Two-Pass - Bosphorus 1080p aom-av1: Speed 6 Realtime - Bosphorus 1080p aom-av1: Speed 6 Two-Pass - Bosphorus 1080p aom-av1: Speed 8 Realtime - Bosphorus 1080p aom-av1: Speed 9 Realtime - Bosphorus 1080p aom-av1: Speed 10 Realtime - Bosphorus 1080p rav1e: 1 rav1e: 5 rav1e: 6 rav1e: 10 svt-av1: Preset 4 - Bosphorus 4K svt-av1: Preset 8 - Bosphorus 4K svt-av1: Preset 12 - Bosphorus 4K svt-av1: Preset 13 - Bosphorus 4K svt-av1: Preset 4 - Bosphorus 1080p svt-av1: Preset 8 - Bosphorus 1080p svt-av1: Preset 12 - Bosphorus 1080p svt-av1: Preset 13 - Bosphorus 1080p xmrig: Monero - 1M xmrig: Wownero - 1M tensorflow: CPU - 16 - AlexNet tensorflow: CPU - 16 - GoogLeNet openvkl: vklBenchmark ISPC openvkl: vklBenchmark Scalar deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream deepsparse: CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection,YOLOv5s COCO - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream graphics-magick: Swirl graphics-magick: Rotate graphics-magick: Sharpen graphics-magick: Enhanced graphics-magick: Resizing graphics-magick: Noise-Gaussian graphics-magick: HWB Color Space stream: Copy stream: Scale stream: Triad stream: Add smhasher: wyhash smhasher: SHA3-256 smhasher: Spooky32 smhasher: fasthash32 smhasher: FarmHash128 smhasher: t1ha2_atonce smhasher: FarmHash32 x86_64 AVX smhasher: t1ha0_aes_avx2 x86_64 smhasher: MeowHash x86_64 AES-NI compress-7zip: Compression Rating compress-7zip: Decompression Rating jpegxl: PNG - 80 jpegxl: PNG - 90 jpegxl: JPEG - 80 jpegxl: JPEG - 90 jpegxl: PNG - 100 jpegxl: JPEG - 100 jpegxl-decode: 1 jpegxl-decode: All webp: Default webp: Quality 100 webp: Quality 100, Lossless webp: Quality 100, Highest Compression webp: Quality 100, Lossless, Highest Compression webp2: Default webp2: Quality 75, Compression Effort 7 webp2: Quality 95, Compression Effort 7 webp2: Quality 100, Compression Effort 5 astcenc: Fast astcenc: Medium astcenc: Thorough astcenc: Exhaustive rocksdb: Rand Fill rocksdb: Rand Read rocksdb: Update Rand rocksdb: Seq Fill rocksdb: Rand Fill Sync rocksdb: Read While Writing rocksdb: Read Rand Write Rand cockroach: MoVR - 128 cockroach: MoVR - 256 cockroach: MoVR - 512 cockroach: MoVR - 1024 cockroach: KV, 10% Reads - 128 cockroach: KV, 10% Reads - 256 cockroach: KV, 10% Reads - 512 cockroach: KV, 50% Reads - 128 cockroach: KV, 50% Reads - 256 cockroach: KV, 50% Reads - 512 cockroach: KV, 60% Reads - 128 cockroach: KV, 60% Reads - 256 cockroach: KV, 60% Reads - 512 cockroach: KV, 95% Reads - 128 cockroach: KV, 95% Reads - 256 cockroach: KV, 95% Reads - 512 cockroach: KV, 10% Reads - 1024 cockroach: KV, 50% Reads - 1024 cockroach: KV, 60% Reads - 1024 cockroach: KV, 95% Reads - 1024 dragonflydb: 50 - 1:1 dragonflydb: 50 - 1:5 dragonflydb: 50 - 5:1 dragonflydb: 200 - 1:1 dragonflydb: 200 - 1:5 dragonflydb: 200 - 5:1 clickhouse: 100M Rows Web Analytics Dataset, First Run / Cold Cache clickhouse: 100M Rows Web Analytics Dataset, Second Run clickhouse: 100M Rows Web Analytics Dataset, Third Run stargate: 44100 - 512 stargate: 96000 - 512 stargate: 192000 - 512 stargate: 44100 - 1024 stargate: 480000 - 512 stargate: 96000 - 1024 stargate: 192000 - 1024 stargate: 480000 - 1024 redis: GET - 50 redis: SET - 50 redis: GET - 500 redis: SET - 500 nginx: 100 nginx: 200 nginx: 500 nginx: 1000 srsran: OFDM_Test spacy: en_core_web_lg spacy: en_core_web_trf pgbench: 100 - 50 - Read Only pgbench: 100 - 100 - Read Only pgbench: 100 - 250 - Read Only pgbench: 100 - 50 - Read Write pgbench: 100 - 100 - Read Write pgbench: 100 - 250 - Read Write srsran: 4G PHY_DL_Test 100 PRB MIMO 64-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 64-QAM srsran: 4G PHY_DL_Test 100 PRB MIMO 256-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 256-QAM srsran: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM brl-cad: VGR Performance Metric smhasher: wyhash smhasher: SHA3-256 smhasher: Spooky32 smhasher: fasthash32 smhasher: FarmHash128 smhasher: t1ha2_atonce smhasher: FarmHash32 x86_64 AVX smhasher: t1ha0_aes_avx2 x86_64 smhasher: MeowHash x86_64 AES-NI onednn: IP Shapes 1D - f32 - CPU onednn: IP Shapes 3D - f32 - CPU onednn: IP Shapes 1D - u8s8f32 - CPU onednn: IP Shapes 3D - u8s8f32 - CPU onednn: Convolution Batch Shapes Auto - f32 - CPU onednn: Deconvolution Batch shapes_1d - f32 - CPU onednn: Deconvolution Batch shapes_3d - f32 - CPU onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU onednn: Recurrent Neural Network Training - f32 - CPU onednn: Recurrent Neural Network Inference - f32 - CPU onednn: Recurrent Neural Network Training - u8s8f32 - CPU onednn: Recurrent Neural Network Inference - u8s8f32 - CPU onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU onednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU pgbench: 100 - 50 - Read Only - Average Latency pgbench: 100 - 100 - Read Only - Average Latency pgbench: 100 - 250 - Read Only - Average Latency pgbench: 100 - 50 - Read Write - Average Latency pgbench: 100 - 100 - Read Write - Average Latency pgbench: 100 - 250 - Read Write - Average Latency mnn: nasnet mnn: mobilenetV3 mnn: squeezenetv1.1 mnn: resnet-v2-50 mnn: SqueezeNetV1.0 mnn: MobileNetV2_224 mnn: mobilenet-v1-1.0 mnn: inception-v3 ncnn: CPU - mobilenet ncnn: CPU-v2-v2 - mobilenet-v2 ncnn: CPU-v3-v3 - mobilenet-v3 ncnn: CPU - shufflenet-v2 ncnn: CPU - mnasnet ncnn: CPU - efficientnet-b0 ncnn: CPU - blazeface ncnn: CPU - googlenet ncnn: CPU - vgg16 ncnn: CPU - resnet18 ncnn: CPU - alexnet ncnn: CPU - resnet50 ncnn: CPU - yolov4-tiny ncnn: CPU - squeezenet_ssd ncnn: CPU - regnety_400m ncnn: CPU - vision_transformer ncnn: CPU - FastestDet openvino: Face Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP32 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream deepsparse: CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection,YOLOv5s COCO - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream unpack-linux: linux-5.19.tar.xz openfoam: motorBike - Mesh Time openfoam: motorBike - Execution Time openfoam: drivaerFastback, Small Mesh Size - Mesh Time openfoam: drivaerFastback, Small Mesh Size - Execution Time openradioss: Bumper Beam openradioss: Cell Phone Drop Test openradioss: Bird Strike on Windshield openradioss: Rubber O-Ring Seal Installation openradioss: INIVOL and Fluid Structure Interaction Drop Container avifenc: 0 avifenc: 2 avifenc: 6 avifenc: 6, Lossless avifenc: 10, Lossless build-linux-kernel: defconfig build-linux-kernel: allmodconfig build-nodejs: Time To Compile build-php: Time To Compile build-python: Default build-python: Released Build, PGO + LTO Optimized y-cruncher: 1B y-cruncher: 500M build-erlang: Time To Compile build-wasmer: Time To Compile encode-flac: WAV To FLAC ffmpeg: libx264 - Live ffmpeg: libx265 - Live ffmpeg: libx264 - Upload ffmpeg: libx265 - Upload ffmpeg: libx264 - Platform ffmpeg: libx265 - Platform ffmpeg: libx264 - Video On Demand ffmpeg: libx265 - Video On Demand spark: 1000000 - 100 - SHA-512 Benchmark Time spark: 1000000 - 100 - Calculate Pi Benchmark spark: 1000000 - 100 - Calculate Pi Benchmark Using Dataframe spark: 1000000 - 100 - Group By Test Time spark: 1000000 - 100 - Repartition Test Time spark: 1000000 - 100 - Inner Join Test Time spark: 1000000 - 100 - Broadcast Inner Join Test Time blender: BMW27 - CPU-Only blender: Classroom - CPU-Only blender: Fishy Cat - CPU-Only blender: Barbershop - CPU-Only blender: Pabellon Barcelona - CPU-Only numenta-nab: KNN CAD numenta-nab: Relative Entropy numenta-nab: Windowed Gaussian numenta-nab: Earthgecko Skyline numenta-nab: Bayesian Changepoint numenta-nab: Contextual Anomaly Detector OSE encodec: 3 kbps encodec: 6 kbps encodec: 24 kbps encodec: 1.5 kbps scikit-learn: MNIST Dataset scikit-learn: TSNE MNIST Dataset scikit-learn: Sparse Rand Projections, 100 Iterations a b 383.32 604.44 3430545.94 891.86 12785778.73 421253.99 44751.55 50825608.57 53674.14 22416.27 414239.19 174.95 63007.36 4869829.14 140597.25 178859.7 4863.37 15355.92 9618965.78 4187483.04 385.8 9343696.19 376.6 376.6 402.5 413.1 106.8 67648400000 198.58 69.22 12.37 13.73 47.13 28.62 47.17 28.63 6.82 4.3 4.33 380.28 8.46 715.98 661.93 63.47 843.69 686.15 21060.95 22553.27 4.2 257 248.6 260 0.34 7.91 25.31 13.54 31.97 40.37 40.29 0.96 14.13 43.08 34.6 60.72 72.59 72.52 0.78 2.741 3.656 8.018 3.5 56.689 147.341 145.752 8.046 111.927 400.811 359.56 15342.8 21153.3 62.47 44.45 288 184 19.6428 13.3862 69.4812 26.7006 107.4771 59.8698 222.1611 112.6594 154.8854 83.6707 78.7202 43.1635 19.5601 13.3472 1178 638 376 568 2129 514 1043 54649.6 33038 36787.1 36849.4 23068.13 146.7 14254.7 6591.27 15823.93 15569.87 27249.95 66642.16 37533.1 167681 177640 8.95 8.89 8.59 8.51 0.68 0.66 44.6 245.68 17.87 11.10 1.50 3.44 0.61 9.6 0.30 0.15 8.39 366.3281 120.3254 14.4059 1.5739 811216 98049490 684330 922415 25136 4506140 2683538 477.5 476.5 476.6 474.9 50479.2 61211.1 61112 70558.3 77546.7 75643.3 77055.3 82286.1 80024 103073.5 100761.5 98031.9 59301.4 71952.1 76564 94157.8 3341824.71 3519377.56 3193072.56 3388128.93 3609156.11 3273370.07 196.74 236.78 240.17 3.892921 1.911816 1.244804 5.223813 4.122029 3.681535 2.436915 4.917282 2481064.75 1746939.25 2271735.25 1784408 148017.08 148446.23 133262.41 125263.22 140100000 12454 1465 671452 653213 646832 29212 40535 45407 150.6 162.9 160.2 175.5 60.2 405453 26.329 2649.376 51.661 37.175 63.803 34.763 43.284 34.721 57.723 1.50644 5.89944 1.43493 0.652318 9.02473 6.06539 2.63187 9.16751 1.68952 2.06629 2614.46 1295.49 2559.09 1366.14 4.48456 2604.63 1361.72 6.54191 0.074 0.153 0.386 1.712 2.467 5.506 14.311 2.282 4.311 21.564 6.491 4.361 3.054 23.254 15.69 6.92 6.44 7.89 6.29 9.26 3.67 18.44 33.71 12.14 8.95 21.43 25.55 21.24 26.61 133.48 8.91 1735.28 2734.68 2750.06 31.53 1410.67 16.75 18.11 188.92 28.43 17.47 1.13 1.06 608.4782 74.6969 172.6456 37.4433 111.3907 16.6914 53.9562 8.8688 77.4472 11.9444 152.2314 23.1605 611.8922 74.9151 7.006 40.3729 71.6006 26.871733 121.45626 98.86 54.86 173.48 78.18 287.95 91.039 47.559 4.012 7.607 5.153 43.944 476.667 232.345 43.61 15.429 257.57 22.825 11.143 90.928 46.857 17.802 25.43 72.95 204.11375323 183.908496366 160.71 264.66 160.60 264.63 3.40 68.960273121 4.20 4.67 1.76 1.67 1.28 53.91 148.83 69.44 583.81 176.4 129.147 13.21 6.199 77.22 20.148 38.983 38.187 37.604 42.075 36.013 103.896 31.586 162.422 382.19 591.09 3166174.89 887.76 12810594.01 421495.46 44653.67 50473178.15 52953.36 17193.98 411184.4 186.95 65071.9 4868774.73 138081.21 178430.01 4839.31 15707.68 9774697.61 4118591.13 385.67 9276368.69 375.1 375.9 403.8 401.2 107.1 67381400000 200.46 69.15 12.30 13.89 46.96 28.68 47.04 28.74 7.02 4.34 4.31 376.7 8.48 712.18 653.16 62.04 844.82 692.14 20837.59 22417.03 4.2 263.3 286.5 262.8 0.34 7.88 24.61 13.57 32.16 40.93 40.76 0.97 14.23 43.34 34.15 62.08 73.28 73.76 0.778 2.748 3.641 7.997 3.521 56.487 129.515 148.194 7.877 113.858 403.432 359.18 15334.8 21018.6 63.15 44.39 287 183 19.5842 13.3177 67.6453 27.4093 106.8673 59.5355 221.8827 113.5482 154.7593 83.5943 78.4402 41.0795 19.5237 13.326 1193 649 376 570 2132 535 1089 55314 33173.5 37021.8 37116 22791.83 146.47 14206.73 6553.7 15708.02 15465.65 26861.97 65731.67 37302 168680 170171 9.02 8.99 8.7 8.58 0.68 0.66 45.67 251.91 17.78 11.04 1.52 3.41 0.61 10.27 0.30 0.15 8.43 364.3208 120.0841 14.3759 1.5693 800273 100100766 651892 876834 25297 4452658 2589021 483.1 480.7 481.1 480.5 50108.8 61278.6 60710 69948.7 76768.6 75289.1 77363.2 82467 79220.4 103111.7 100166.8 97822.4 59175.4 72222 76335.4 93822.1 3320752.26 3515736.22 3164997.08 3388656.73 3593257.69 3263145.89 201.86 234.64 235.82 2.678564 1.947376 1.314658 5.307245 2.722708 3.845343 2.268748 5.196953 2015000.12 1590308.75 2214977.25 1797429.5 145523.99 148101.85 137072.73 126633 144000000 12319 1463 663640 602288 618774 29318 33658 25548 150.1 162.2 160.7 170 60.2 403757 26.387 2649.302 51.627 37.264 63.864 35.019 43.29 35.155 57.928 1.54322 5.91961 1.47047 0.7778 9.42618 6.16553 2.69706 9.15733 1.74252 2.09863 2759.2 1451.46 2719.77 1446.12 4.27034 2772.01 1441.78 4.7447 0.075 0.166 0.404 1.705 2.971 9.786 14.962 2.455 4.191 22.828 6.547 4.607 3.021 25.51 16.39 7.13 6.62 8.14 6.54 9.56 3.75 19.69 34.14 12.78 9.45 23.05 26.29 22.44 26.65 137.59 8.9 1690.53 2714.24 2734.21 31.82 1408.6 16.84 18.36 193.23 28.4 17.32 1.14 1.06 610.7512 75.0804 177.2167 36.475 112.0146 16.7848 54.0192 8.7988 77.5099 11.9559 152.9447 24.336 610.9332 75.0343 7.096 40.5873 71.6125 27.275142 123.71063 100.6 55.19 174.41 79.08 290.2 90.561 47.639 4.008 7.571 5.078 44.003 478.887 233.722 43.699 15.41 257.945 22.916 11.222 90.249 46.955 17.963 25.19 73.03 205.24 181.78 161.32 264.16 161.04 263.53 3.59872799 69.93 4.21 4.78 1.84 1.63 1.34 53.95 149.1 69.59 586 177.27 133.475 13.106 6.327 77.408 20.56 39.16 38.433 37.903 42.307 36.372 104.931 31.926 162.588 OpenBenchmarking.org
Stress-NG Stress-NG is a Linux stress tool developed by Colin King of Canonical. Learn more via the OpenBenchmarking.org test page.
Test: x86_64 RdRand
a: The test run did not produce a result. E: stress-ng: error: [3063617] No stress workers invoked (one or more were unsupported)
b: The test run did not produce a result. E: stress-ng: error: [1562950] No stress workers invoked (one or more were unsupported)
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: MMAP a b 80 160 240 320 400 383.32 382.19 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: NUMA a b 130 260 390 520 650 604.44 591.09 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Futex a b 700K 1400K 2100K 2800K 3500K 3430545.94 3166174.89 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: MEMFD a b 200 400 600 800 1000 891.86 887.76 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Mutex b a 3M 6M 9M 12M 15M 12810594.01 12785778.73 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Atomic b a 90K 180K 270K 360K 450K 421495.46 421253.99 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Crypto a b 10K 20K 30K 40K 50K 44751.55 44653.67 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Malloc a b 11M 22M 33M 44M 55M 50825608.57 50473178.15 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Forking a b 11K 22K 33K 44K 55K 53674.14 52953.36 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: IO_uring a b 5K 10K 15K 20K 25K 22416.27 17193.98 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: SENDFILE a b 90K 180K 270K 360K 450K 414239.19 411184.40 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: CPU Cache b a 40 80 120 160 200 186.95 174.95 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: CPU Stress b a 14K 28K 42K 56K 70K 65071.90 63007.36 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Semaphores a b 1000K 2000K 3000K 4000K 5000K 4869829.14 4868774.73 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Matrix Math a b 30K 60K 90K 120K 150K 140597.25 138081.21 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Vector Math a b 40K 80K 120K 160K 200K 178859.70 178430.01 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Memory Copying a b 1000 2000 3000 4000 5000 4863.37 4839.31 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Socket Activity b a 3K 6K 9K 12K 15K 15707.68 15355.92 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Context Switching b a 2M 4M 6M 8M 10M 9774697.61 9618965.78 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Glibc C String Functions a b 900K 1800K 2700K 3600K 4500K 4187483.04 4118591.13 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Glibc Qsort Data Sorting a b 80 160 240 320 400 385.80 385.67 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: System V Message Passing a b 2M 4M 6M 8M 10M 9343696.19 9276368.69 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lGLESv2 -ljpeg -lrt -lsctp -lz -pthread
srsRAN srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM a b 80 160 240 320 400 376.6 375.1 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM a b 80 160 240 320 400 376.6 375.9 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM b a 90 180 270 360 450 403.8 402.5 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM a b 90 180 270 360 450 413.1 401.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM b a 20 40 60 80 100 107.1 106.8 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
nekRS nekRS is an open-source Navier Stokes solver based on the spectral element method. NekRS supports both CPU and GPU/accelerator support though this test profile is currently configured for CPU execution. NekRS is part of Nek5000 of the Mathematics and Computer Science MCS at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FLOP/s, More Is Better nekRS 22.0 Input: TurboPipe Periodic a b 14000M 28000M 42000M 56000M 70000M 67648400000 67381400000 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi
FFmpeg This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Live b a 40 80 120 160 200 200.46 198.58 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Live a b 15 30 45 60 75 69.22 69.15 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Upload a b 3 6 9 12 15 12.37 12.30 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Upload b a 4 8 12 16 20 13.89 13.73 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Platform a b 11 22 33 44 55 47.13 46.96 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Platform b a 7 14 21 28 35 28.68 28.62 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Video On Demand a b 11 22 33 44 55 47.17 47.04 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Video On Demand b a 7 14 21 28 35 28.74 28.63 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenVINO This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Face Detection FP16 - Device: CPU b a 2 4 6 8 10 7.02 6.82 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Person Detection FP16 - Device: CPU b a 0.9765 1.953 2.9295 3.906 4.8825 4.34 4.30 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Person Detection FP32 - Device: CPU a b 0.9743 1.9486 2.9229 3.8972 4.8715 4.33 4.31 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16 - Device: CPU a b 80 160 240 320 400 380.28 376.70 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Face Detection FP16-INT8 - Device: CPU b a 2 4 6 8 10 8.48 8.46 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16-INT8 - Device: CPU a b 150 300 450 600 750 715.98 712.18 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16 - Device: CPU a b 140 280 420 560 700 661.93 653.16 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Machine Translation EN To DE FP16 - Device: CPU a b 14 28 42 56 70 63.47 62.04 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16-INT8 - Device: CPU b a 200 400 600 800 1000 844.82 843.69 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Person Vehicle Bike Detection FP16 - Device: CPU b a 150 300 450 600 750 692.14 686.15 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b 5K 10K 15K 20K 25K 21060.95 20837.59 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b 5K 10K 15K 20K 25K 22553.27 22417.03 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
Unvanquished Unvanquished is a modern fork of the Tremulous first person shooter. Unvanquished is powered by the Daemon engine, a combination of the ioquake3 (id Tech 3) engine with the graphically-beautiful XreaL engine. Unvanquished supports a modern OpenGL 3 renderer and other advanced graphics features for this open-source, cross-platform shooter game. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better Unvanquished 0.53 Resolution: 1920 x 1080 - Effects Quality: High b a 60 120 180 240 300 263.3 257.0
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K a b 2 4 6 8 10 7.91 7.88 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K a b 6 12 18 24 30 25.31 24.61 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K b a 3 6 9 12 15 13.57 13.54 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K b a 7 14 21 28 35 32.16 31.97 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K b a 9 18 27 36 45 40.93 40.37 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K b a 9 18 27 36 45 40.76 40.29 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p b a 0.2183 0.4366 0.6549 0.8732 1.0915 0.97 0.96 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p b a 4 8 12 16 20 14.23 14.13 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p b a 10 20 30 40 50 43.34 43.08 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p a b 8 16 24 32 40 34.60 34.15 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p b a 14 28 42 56 70 62.08 60.72 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p b a 16 32 48 64 80 73.28 72.59 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p b a 16 32 48 64 80 73.76 72.52 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 4 - Input: Bosphorus 4K b a 0.7922 1.5844 2.3766 3.1688 3.961 3.521 3.500 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 8 - Input: Bosphorus 4K a b 13 26 39 52 65 56.69 56.49 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 12 - Input: Bosphorus 4K a b 30 60 90 120 150 147.34 129.52 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 13 - Input: Bosphorus 4K b a 30 60 90 120 150 148.19 145.75 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 4 - Input: Bosphorus 1080p a b 2 4 6 8 10 8.046 7.877 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 8 - Input: Bosphorus 1080p b a 30 60 90 120 150 113.86 111.93 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 12 - Input: Bosphorus 1080p b a 90 180 270 360 450 403.43 400.81 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.4 Encoder Mode: Preset 13 - Input: Bosphorus 1080p a b 80 160 240 320 400 359.56 359.18 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Xmrig Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is setup to measure the Xmlrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org H/s, More Is Better Xmrig 6.18.1 Variant: Monero - Hash Count: 1M a b 3K 6K 9K 12K 15K 15342.8 15334.8 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.18.1 Variant: Wownero - Hash Count: 1M a b 5K 10K 15K 20K 25K 21153.3 21018.6 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
TensorFlow This is a benchmark of the TensorFlow deep learning framework using the TensorFlow reference benchmarks (tensorflow/benchmarks with tf_cnn_benchmarks.py). Note with the Phoronix Test Suite there is also pts/tensorflow-lite for benchmarking the TensorFlow Lite binaries too. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org images/sec, More Is Better TensorFlow 2.10 Device: CPU - Batch Size: 16 - Model: AlexNet b a 14 28 42 56 70 63.15 62.47
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Rotate b a 140 280 420 560 700 649 638 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Sharpen b a 80 160 240 320 400 376 376 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Enhanced b a 120 240 360 480 600 570 568 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Resizing b a 500 1000 1500 2000 2500 2132 2129 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Noise-Gaussian b a 120 240 360 480 600 535 514 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: HWB Color Space b a 200 400 600 800 1000 1089 1043 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
JPEG XL libjxl The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: PNG - Quality: 80 b a 3 6 9 12 15 9.02 8.95 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: PNG - Quality: 100 b a 0.153 0.306 0.459 0.612 0.765 0.68 0.68 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: JPEG - Quality: 100 b a 0.1485 0.297 0.4455 0.594 0.7425 0.66 0.66 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
JPEG XL Decoding libjxl The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to PNG output file, the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MP/s, More Is Better JPEG XL Decoding libjxl 0.7 CPU Threads: 1 b a 10 20 30 40 50 45.67 44.60
WebP2 Image Encode This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficienct lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MP/s, More Is Better WebP2 Image Encode 20220823 Encode Settings: Default b a 3 6 9 12 15 10.27 9.60 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
Facebook RocksDB OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Random Fill a b 200K 400K 600K 800K 1000K 811216 800273 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Read While Writing a b 1000K 2000K 3000K 4000K 5000K 4506140 4452658 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Read Random Write Random a b 600K 1200K 1800K 2400K 3000K 2683538 2589021 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
Dragonflydb Dragonfly is an open-source database server that is a "modern Redis replacement" that aims to be the fastest memory store while being compliant with the Redis and Memcached protocols. For benchmarking Dragonfly, Memtier_benchmark is used as a NoSQL Redis/Memcache traffic generation plus benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 1:1 a b 700K 1400K 2100K 2800K 3500K 3341824.71 3320752.26 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 1:5 a b 800K 1600K 2400K 3200K 4000K 3519377.56 3515736.22 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 50 - Set To Get Ratio: 5:1 a b 700K 1400K 2100K 2800K 3500K 3193072.56 3164997.08 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 200 - Set To Get Ratio: 1:1 b a 700K 1400K 2100K 2800K 3500K 3388656.73 3388128.93 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 200 - Set To Get Ratio: 1:5 a b 800K 1600K 2400K 3200K 4000K 3609156.11 3593257.69 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Dragonflydb 0.6 Clients: 200 - Set To Get Ratio: 5:1 a b 700K 1400K 2100K 2800K 3500K 3273370.07 3263145.89 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
ClickHouse ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.5.4.19 100M Rows Web Analytics Dataset, First Run / Cold Cache b a 40 80 120 160 200 201.86 196.74 MIN: 17.35 / MAX: 5454.55 MIN: 17.5 / MAX: 3529.41 1. ClickHouse server version 22.5.4.19 (official build).
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.5.4.19 100M Rows Web Analytics Dataset, Second Run a b 50 100 150 200 250 236.78 234.64 MIN: 16.92 / MAX: 12000 MIN: 17.06 / MAX: 12000 1. ClickHouse server version 22.5.4.19 (official build).
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.5.4.19 100M Rows Web Analytics Dataset, Third Run a b 50 100 150 200 250 240.17 235.82 MIN: 17.07 / MAX: 6666.67 MIN: 17.11 / MAX: 3750 1. ClickHouse server version 22.5.4.19 (official build).
Stargate Digital Audio Workstation Stargate is an open-source, cross-platform digital audio workstation (DAW) software package with "a unique and carefully curated experience" with scalability from old systems up through modern multi-core systems. Stargate is GPLv3 licensed and makes use of Qt5 (PyQt5) for its user-interface. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 44100 - Buffer Size: 512 a b 0.8759 1.7518 2.6277 3.5036 4.3795 3.892921 2.678564 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 96000 - Buffer Size: 512 b a 0.4382 0.8764 1.3146 1.7528 2.191 1.947376 1.911816 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 192000 - Buffer Size: 512 b a 0.2958 0.5916 0.8874 1.1832 1.479 1.314658 1.244804 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 44100 - Buffer Size: 1024 b a 1.1941 2.3882 3.5823 4.7764 5.9705 5.307245 5.223813 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 480000 - Buffer Size: 512 a b 0.9275 1.855 2.7825 3.71 4.6375 4.122029 2.722708 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 96000 - Buffer Size: 1024 b a 0.8652 1.7304 2.5956 3.4608 4.326 3.845343 3.681535 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 192000 - Buffer Size: 1024 a b 0.5483 1.0966 1.6449 2.1932 2.7415 2.436915 2.268748 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.org Render Ratio, More Is Better Stargate Digital Audio Workstation 22.11.5 Sample Rate: 480000 - Buffer Size: 1024 b a 1.1693 2.3386 3.5079 4.6772 5.8465 5.196953 4.917282 1. (CXX) g++ options: -lpthread -lsndfile -lm -O3 -march=native -ffast-math -funroll-loops -fstrength-reduce -fstrict-aliasing -finline-functions
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 50 a b 400K 800K 1200K 1600K 2000K 1746939.25 1590308.75 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: GET - Parallel Connections: 500 a b 500K 1000K 1500K 2000K 2500K 2271735.25 2214977.25 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
OpenBenchmarking.org Requests Per Second, More Is Better Redis 7.0.4 Test: SET - Parallel Connections: 500 b a 400K 800K 1200K 1600K 2000K 1797429.5 1784408.0 1. (CXX) g++ options: -MM -MT -g3 -fvisibility=hidden -O3
nginx This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
Connections: 1
a: The test quit with a non-zero exit status.
b: The test quit with a non-zero exit status.
Connections: 20
a: The test quit with a non-zero exit status.
b: The test quit with a non-zero exit status.
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 100 a b 30K 60K 90K 120K 150K 148017.08 145523.99 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 200 a b 30K 60K 90K 120K 150K 148446.23 148101.85 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 500 b a 30K 60K 90K 120K 150K 137072.73 133262.41 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 1000 b a 30K 60K 90K 120K 150K 126633.00 125263.22 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
Connections: 4000
a: The test quit with a non-zero exit status.
b: The test quit with a non-zero exit status.
srsRAN srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Samples / Second, More Is Better srsRAN 22.04.1 Test: OFDM_Test b a 30M 60M 90M 120M 150M 144000000 140100000 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
spaCy The spaCy library is an open-source solution for advanced neural language processing (NLP). The spaCy library leverages Python and is a leading neural language processing solution. This test profile times the spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org tokens/sec, More Is Better spaCy 3.4.1 Model: en_core_web_lg a b 3K 6K 9K 12K 15K 12454 12319
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 100 - Mode: Read Only a b 140K 280K 420K 560K 700K 653213 602288 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 250 - Mode: Read Only a b 140K 280K 420K 560K 700K 646832 618774 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 50 - Mode: Read Write b a 6K 12K 18K 24K 30K 29318 29212 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 100 - Mode: Read Write a b 9K 18K 27K 36K 45K 40535 33658 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org TPS, More Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 250 - Mode: Read Write a b 10K 20K 30K 40K 50K 45407 25548 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
srsRAN srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM a b 30 60 90 120 150 150.6 150.1 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM a b 40 80 120 160 200 162.9 162.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM b a 40 80 120 160 200 160.7 160.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM a b 40 80 120 160 200 175.5 170.0 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM b a 13 26 39 52 65 60.2 60.2 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
BRL-CAD BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.32.6 VGR Performance Metric a b 90K 180K 270K 360K 450K 405453 403757 1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm
oneDNN This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU a b 0.3472 0.6944 1.0416 1.3888 1.736 1.50644 1.54322 MIN: 1.38 MIN: 1.37 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU a b 1.3319 2.6638 3.9957 5.3276 6.6595 5.89944 5.91961 MIN: 5.84 MIN: 5.83 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU a b 0.3309 0.6618 0.9927 1.3236 1.6545 1.43493 1.47047 MIN: 1.26 MIN: 1.24 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU a b 0.175 0.35 0.525 0.7 0.875 0.652318 0.777800 MIN: 0.56 MIN: 0.6 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU a b 3 6 9 12 15 9.02473 9.42618 MIN: 8.88 MIN: 8.8 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU a b 2 4 6 8 10 6.06539 6.16553 MIN: 3.72 MIN: 3.99 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU a b 0.6068 1.2136 1.8204 2.4272 3.034 2.63187 2.69706 MIN: 2.56 MIN: 2.55 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU b a 3 6 9 12 15 9.15733 9.16751 MIN: 8.99 MIN: 9.09 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU a b 0.3921 0.7842 1.1763 1.5684 1.9605 1.68952 1.74252 MIN: 1.63 MIN: 1.63 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU a b 0.4722 0.9444 1.4166 1.8888 2.361 2.06629 2.09863 MIN: 1.98 MIN: 1.97 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU a b 600 1200 1800 2400 3000 2614.46 2759.20 MIN: 2598.48 MIN: 2717.67 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU a b 300 600 900 1200 1500 1295.49 1451.46 MIN: 1280.49 MIN: 1416.39 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU a b 600 1200 1800 2400 3000 2559.09 2719.77 MIN: 2544.08 MIN: 2677.09 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU a b 300 600 900 1200 1500 1366.14 1446.12 MIN: 1343.04 MIN: 1407.49 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU b a 1.009 2.018 3.027 4.036 5.045 4.27034 4.48456 MIN: 4.03 MIN: 4.29 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU a b 600 1200 1800 2400 3000 2604.63 2772.01 MIN: 2588.48 MIN: 2730.23 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU a b 300 600 900 1200 1500 1361.72 1441.78 MIN: 1338.9 MIN: 1407.45 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.0 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU b a 2 4 6 8 10 4.74470 6.54191 MIN: 4.47 MIN: 6.16 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 100 - Mode: Read Only - Average Latency a b 0.0374 0.0748 0.1122 0.1496 0.187 0.153 0.166 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 250 - Mode: Read Only - Average Latency a b 0.0909 0.1818 0.2727 0.3636 0.4545 0.386 0.404 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 50 - Mode: Read Write - Average Latency b a 0.3852 0.7704 1.1556 1.5408 1.926 1.705 1.712 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 100 - Mode: Read Write - Average Latency a b 0.6685 1.337 2.0055 2.674 3.3425 2.467 2.971 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
OpenBenchmarking.org ms, Fewer Is Better PostgreSQL 15 Scaling Factor: 100 - Clients: 250 - Mode: Read Write - Average Latency a b 3 6 9 12 15 5.506 9.786 1. (CC) gcc options: -fno-strict-aliasing -fwrapv -O2 -lpgcommon -lpgport -lpq -lm
Mobile Neural Network MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile is building the OpenMP / CPU threaded version for processor benchmarking and not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: nasnet a b 4 8 12 16 20 14.31 14.96 MIN: 14.19 / MAX: 14.67 MIN: 14.82 / MAX: 15.24 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: mobilenetV3 a b 0.5524 1.1048 1.6572 2.2096 2.762 2.282 2.455 MIN: 2.24 / MAX: 2.33 MIN: 2.41 / MAX: 2.64 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: squeezenetv1.1 b a 0.97 1.94 2.91 3.88 4.85 4.191 4.311 MIN: 4.14 / MAX: 4.28 MIN: 4.26 / MAX: 4.37 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: resnet-v2-50 a b 5 10 15 20 25 21.56 22.83 MIN: 21.35 / MAX: 21.84 MIN: 22.67 / MAX: 23.06 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: SqueezeNetV1.0 a b 2 4 6 8 10 6.491 6.547 MIN: 6.43 / MAX: 6.55 MIN: 6.49 / MAX: 6.68 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: MobileNetV2_224 a b 1.0366 2.0732 3.1098 4.1464 5.183 4.361 4.607 MIN: 4.33 / MAX: 4.48 MIN: 4.57 / MAX: 4.8 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: mobilenet-v1-1.0 b a 0.6872 1.3744 2.0616 2.7488 3.436 3.021 3.054 MIN: 2.99 / MAX: 3.17 MIN: 3.02 / MAX: 3.11 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: inception-v3 a b 6 12 18 24 30 23.25 25.51 MIN: 23.03 / MAX: 23.59 MIN: 25.34 / MAX: 25.86 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU-v2-v2 - Model: mobilenet-v2 a b 2 4 6 8 10 6.92 7.13 MIN: 6.79 / MAX: 7.59 MIN: 6.77 / MAX: 8.08 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU-v3-v3 - Model: mobilenet-v3 a b 2 4 6 8 10 6.44 6.62 MIN: 6.34 / MAX: 7.1 MIN: 6.24 / MAX: 7.88 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: shufflenet-v2 a b 2 4 6 8 10 7.89 8.14 MIN: 7.72 / MAX: 8.67 MIN: 7.68 / MAX: 16.58 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: mnasnet a b 2 4 6 8 10 6.29 6.54 MIN: 6.17 / MAX: 6.88 MIN: 6.2 / MAX: 7.42 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: efficientnet-b0 a b 3 6 9 12 15 9.26 9.56 MIN: 9.16 / MAX: 9.93 MIN: 9.17 / MAX: 10.47 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: blazeface a b 0.8438 1.6876 2.5314 3.3752 4.219 3.67 3.75 MIN: 3.59 / MAX: 4.3 MIN: 3.54 / MAX: 4.29 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: googlenet a b 5 10 15 20 25 18.44 19.69 MIN: 17.65 / MAX: 19.84 MIN: 18.17 / MAX: 25.94 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: vgg16 a b 8 16 24 32 40 33.71 34.14 MIN: 32.75 / MAX: 34.65 MIN: 32.66 / MAX: 42.06 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: resnet18 a b 3 6 9 12 15 12.14 12.78 MIN: 11.54 / MAX: 13.22 MIN: 11.63 / MAX: 14.62 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: alexnet a b 3 6 9 12 15 8.95 9.45 MIN: 8.42 / MAX: 10.23 MIN: 8.49 / MAX: 10.69 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: resnet50 a b 6 12 18 24 30 21.43 23.05 MIN: 20.84 / MAX: 22.74 MIN: 21.55 / MAX: 58.12 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: yolov4-tiny a b 6 12 18 24 30 25.55 26.29 MIN: 24.49 / MAX: 30.39 MIN: 24.78 / MAX: 29.98 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: squeezenet_ssd a b 5 10 15 20 25 21.24 22.44 MIN: 19.92 / MAX: 23.46 MIN: 20.15 / MAX: 54.28 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: regnety_400m a b 6 12 18 24 30 26.61 26.65 MIN: 25.84 / MAX: 27.71 MIN: 26.08 / MAX: 28.34 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: vision_transformer a b 30 60 90 120 150 133.48 137.59 MIN: 133.04 / MAX: 138.19 MIN: 134.63 / MAX: 191.94 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: FastestDet b a 2 4 6 8 10 8.90 8.91 MIN: 8.58 / MAX: 9.7 MIN: 8.78 / MAX: 10.28 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenVINO This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Face Detection FP16 - Device: CPU b a 400 800 1200 1600 2000 1690.53 1735.28 MIN: 1466.54 / MAX: 1987.53 MIN: 1511.37 / MAX: 1909.23 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Person Detection FP16 - Device: CPU b a 600 1200 1800 2400 3000 2714.24 2734.68 MIN: 1614.04 / MAX: 3194.85 MIN: 1593.76 / MAX: 3225.01 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Person Detection FP32 - Device: CPU b a 600 1200 1800 2400 3000 2734.21 2750.06 MIN: 2202.17 / MAX: 3150.66 MIN: 2288.66 / MAX: 3108.15 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16 - Device: CPU a b 7 14 21 28 35 31.53 31.82 MIN: 16.43 / MAX: 53.16 MIN: 20.13 / MAX: 53.75 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Face Detection FP16-INT8 - Device: CPU b a 300 600 900 1200 1500 1408.60 1410.67 MIN: 1373.05 / MAX: 1468.03 MIN: 1372.29 / MAX: 1527.17 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Vehicle Detection FP16-INT8 - Device: CPU a b 4 8 12 16 20 16.75 16.84 MIN: 11.9 / MAX: 43.39 MIN: 10.79 / MAX: 39.98 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16 - Device: CPU a b 5 10 15 20 25 18.11 18.36 MIN: 10.34 / MAX: 38.17 MIN: 15.89 / MAX: 37.21 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Machine Translation EN To DE FP16 - Device: CPU a b 40 80 120 160 200 188.92 193.23 MIN: 155.98 / MAX: 237.74 MIN: 153.71 / MAX: 246.32 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Weld Porosity Detection FP16-INT8 - Device: CPU b a 7 14 21 28 35 28.40 28.43 MIN: 14.44 / MAX: 59.77 MIN: 14.41 / MAX: 59.57 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Person Vehicle Bike Detection FP16 - Device: CPU b a 4 8 12 16 20 17.32 17.47 MIN: 10.28 / MAX: 33.98 MIN: 14.95 / MAX: 28.95 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b 0.2565 0.513 0.7695 1.026 1.2825 1.13 1.14 MIN: 0.62 / MAX: 11.19 MIN: 0.63 / MAX: 12.31 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.3 Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b 0.2385 0.477 0.7155 0.954 1.1925 1.06 1.06 MIN: 0.6 / MAX: 11.35 MIN: 0.6 / MAX: 12.17 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared
OpenFOAM OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: motorBike - Mesh Time a b 9 18 27 36 45 40.37 40.59 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: motorBike - Execution Time a b 16 32 48 64 80 71.60 71.61 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Mesh Time a b 6 12 18 24 30 26.87 27.28 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Execution Time a b 30 60 90 120 150 121.46 123.71 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfoamToVTK -ldynamicMesh -llagrangian -lgenericPatchFields -lfileFormats -lOpenFOAM -ldl -lm
OpenRadioss OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis OpenRadioss is based on Altair Radioss and open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2022.10.13 Model: Bumper Beam a b 20 40 60 80 100 98.86 100.60
Timed Wasmer Compilation This test times how long it takes to compile Wasmer. Wasmer is written in the Rust programming language and is a WebAssembly runtime implementation that supports WASI and EmScripten. This test profile builds Wasmer with the Cranelift and Singlepast compiler features enabled. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Timed Wasmer Compilation 2.3 Time To Compile a b 11 22 33 44 55 46.86 46.96 1. (CC) gcc options: -m64 -ldl -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs
FFmpeg This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Live b a 6 12 18 24 30 25.19 25.43 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Live a b 16 32 48 64 80 72.95 73.03 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Upload a b 50 100 150 200 250 204.11 205.24 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Upload b a 40 80 120 160 200 181.78 183.91 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Platform a b 40 80 120 160 200 160.71 161.32 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Platform b a 60 120 180 240 300 264.16 264.66 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Video On Demand a b 40 80 120 160 200 160.60 161.04 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Video On Demand b a 60 120 180 240 300 263.53 264.63 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Apache Spark This is a benchmark of Apache Spark with its PySpark interface. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmars the Apache Spark in a single-system configuration using spark-submit. The test makes use of DIYBigData's pyspark-benchmark (https://github.com/DIYBigData/pyspark-benchmark/) for generating of test data and various Apache Spark operations. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Apache Spark 3.3 Row Count: 1000000 - Partitions: 100 - SHA-512 Benchmark Time a b 0.8097 1.6194 2.4291 3.2388 4.0485 3.40000000 3.59872799
Blender Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.4 Blend File: BMW27 - Compute: CPU-Only a b 12 24 36 48 60 53.91 53.95
Numenta Anomaly Benchmark Numenta Anomaly Benchmark (NAB) is a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is comprised of over 50 labeled real-world and artificial time-series data files plus a novel scoring mechanism designed for real-time applications. This test profile currently measures the time to run various detectors. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Numenta Anomaly Benchmark 1.1 Detector: KNN CAD a b 30 60 90 120 150 129.15 133.48
EnCodec EnCodec is a Facebook/Meta developed AI means of compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using their novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking and the performance measurement is measuring the time to encode the EnCodec file from WAV. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 3 kbps a b 9 18 27 36 45 38.19 38.43
a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vDisk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301025Graphics Notes: BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: xxx-xxx-xxxJava Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)Python Notes: Python 3.10.4Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 26 December 2022 13:21 by user phoronix.
b Processor: AMD Ryzen Threadripper 3960X 24-Core @ 3.80GHz (24 Cores / 48 Threads), Motherboard: MSI Creator TRX40 (MS-7C59) v1.0 (1.12N1 BIOS), Chipset: AMD Starship/Matisse, Memory: 32GB, Disk: 1000GB Sabrent Rocket 4.0 1TB, Graphics: Gigabyte AMD Radeon RX 5500 XT 8GB (1900/875MHz), Audio: AMD Navi 10 HDMI Audio, Monitor: VA2431, Network: Aquantia AQC107 NBase-T/IEEE + Intel I211 + Intel Wi-Fi 6 AX200
OS: Ubuntu 22.04, Kernel: 5.19.0-051900rc7-generic (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server, OpenGL: 4.6 Mesa 22.0.1 (LLVM 13.0.1 DRM 3.47), Vulkan: 1.3.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vDisk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096Processor Notes: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0x8301025Graphics Notes: BAR1 / Visible vRAM Size: 256 MB - vBIOS Version: xxx-xxx-xxxJava Notes: OpenJDK Runtime Environment (build 11.0.15+10-Ubuntu-0ubuntu0.22.04.1)Python Notes: Python 3.10.4Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 26 December 2022 19:38 by user phoronix.