2 x AMD EPYC 7601 32-Core testing with a Dell 02MJ3T (1.2.5 BIOS) and Matrox G200eW3 on Ubuntu 22.04 via the Phoronix Test Suite.
a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vDisk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096Processor Notes: CPU Microcode: 0x8001250Python Notes: Python 3.10.6Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected
b Processor: 2 x AMD EPYC 7601 32-Core (64 Cores / 128 Threads), Motherboard: Dell 02MJ3T (1.2.5 BIOS), Chipset: AMD 17h, Memory: 512GB, Disk: 280GB INTEL SSDPED1D280GA + 12 x 500GB Samsung SSD 860 + 120GB INTEL SSDSCKJB120G7R, Graphics: Matrox G200eW3, Monitor: VE228, Network: 2 x Broadcom BCM57416 NetXtreme-E Dual-Media 10G RDMA + 2 x Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 22.04, Kernel: 5.15.0-40-generic (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server 1.21.1.3, Vulkan: 1.2.204, Compiler: GCC 11.3.0, File-System: ext4, Screen Resolution: 1600x1200
zen 1 epyc OpenBenchmarking.org Phoronix Test Suite 2 x AMD EPYC 7601 32-Core (64 Cores / 128 Threads) Dell 02MJ3T (1.2.5 BIOS) AMD 17h 512GB 280GB INTEL SSDPED1D280GA + 12 x 500GB Samsung SSD 860 + 120GB INTEL SSDSCKJB120G7R Matrox G200eW3 VE228 2 x Broadcom BCM57416 NetXtreme-E Dual-Media 10G RDMA + 2 x Broadcom NetXtreme BCM5720 PCIe Ubuntu 22.04 5.15.0-40-generic (x86_64) GNOME Shell 42.2 X Server 1.21.1.3 1.2.204 GCC 11.3.0 ext4 1600x1200 Processor Motherboard Chipset Memory Disk Graphics Monitor Network OS Kernel Desktop Display Server Vulkan Compiler File-System Screen Resolution Zen 1 Epyc Benchmarks System Logs - Transparent Huge Pages: madvise - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v - NONE / errors=remount-ro,relatime,rw / Block Size: 4096 - CPU Microcode: 0x8001250 - Python 3.10.6 - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: disabled RSB filling + srbds: Not affected + tsx_async_abort: Not affected
a vs. b Comparison Phoronix Test Suite Baseline +15.3% +15.3% +30.6% +30.6% +45.9% +45.9% +61.2% +61.2% 61.2% 54.7% 48.8% 46.3% 37.8% 36.7% 30.4% 28.5% 23.1% 22.5% 22.1% 13.4% 12.6% 12% 11.1% 10.6% 10.2% 10% 8% 8% 7.9% 7.1% 7% 6.9% 6.8% 6.4% 6.3% 5.1% 4.9% 4.7% 4.6% 4.4% 4% 3.9% 3.9% 3.5% 3% 2.9% 2.6% 2.6% 2.5% 2.5% 2.5% 2.4% 2.3% CPU - resnet50 CPU-v2-v2 - mobilenet-v2 CPU - FastestDet CPU - shufflenet-v2 CPU - regnety_400m CPU - blazeface CPU Cache CPU-v3-v3 - mobilenet-v3 29.5% CPU - resnet18 Resizing 23.7% CPU - mnasnet M.M.B.S.T - f32 - CPU CPU - alexnet Monero - 1M 17.2% CPU - mobilenet 17% HWB Color Space 15.7% Speed 9 Realtime - Bosphorus 1080p 13.6% Preset 10 - Bosphorus 4K CPU - vgg16 Preset 12 - Bosphorus 1080p D.B.s - u8s8f32 - CPU 11.9% Noise-Gaussian 11.8% IP Shapes 1D - u8s8f32 - CPU 1.5 kbps 11% Default CPU - googlenet Atomic R.N.N.T - f32 - CPU 8.7% Speed 8 Realtime - Bosphorus 1080p 8.6% Speed 10 Realtime - Bosphorus 1080p 8.1% Speed 8 Realtime - Bosphorus 4K 8% N.Q.A.B.b.u.S.1.P - S.S.S N.Q.A.B.b.u.S.1.P - S.S.S resnet-v2-50 Read While Writing D.B.s - f32 - CPU CPU - vision_transformer Memory Copying CPU - squeezenet_ssd nasnet 1.R.W.A.D.T.R 6% 1.R.W.A.D.F.R.C.C C.B.S.A - f32 - CPU squeezenetv1.1 Context Switching SqueezeNetV1.0 20k Atoms 4.3% Garlicoin 1.R.W.A.D.S.R mobilenet-v1-1.0 CPU - yolov4-tiny 3.9% Preset 12 - Bosphorus 4K 24 kbps 3.5% CPU - efficientnet-b0 3.3% Speed 6 Realtime - Bosphorus 4K 3.3% MMAP 3.2% R.N.N.T - u8s8f32 - CPU 3.2% M.M.B.S.T - u8s8f32 - CPU Spaceship S.V.M.P 2.8% Speed 6 Two-Pass - Bosphorus 4K 2.8% 3 kbps 2.7% Q.7.C.E.7 2.7% G.C.S.F R.N.N.I - f32 - CPU M.T.E.T.D.F - CPU d.S.M.S - Execution Time 2.5% M.T.E.T.D.F - CPU Speed 6 Two-Pass - Bosphorus 1080p 2.5% Socket Activity Q.1.L.H.C mobilenetV3 en_core_web_trf 2.3% 2.3% CPU - 32 - GoogLeNet 2.3% libx265 - Live 2.1% libx265 - Live 2.1% Speed 4 Two-Pass - Bosphorus 1080p 2% NCNN NCNN NCNN NCNN NCNN NCNN Stress-NG NCNN NCNN GraphicsMagick NCNN oneDNN NCNN Xmrig NCNN GraphicsMagick AOM AV1 SVT-AV1 NCNN SVT-AV1 oneDNN GraphicsMagick oneDNN EnCodec WebP2 Image Encode NCNN Stress-NG oneDNN AOM AV1 AOM AV1 AOM AV1 Neural Magic DeepSparse Neural Magic DeepSparse Mobile Neural Network Facebook RocksDB oneDNN NCNN Stress-NG NCNN Mobile Neural Network ClickHouse ClickHouse oneDNN Mobile Neural Network Stress-NG Mobile Neural Network LAMMPS Molecular Dynamics Simulator Cpuminer-Opt ClickHouse Mobile Neural Network NCNN SVT-AV1 EnCodec NCNN AOM AV1 Stress-NG oneDNN oneDNN Natron Stress-NG AOM AV1 EnCodec WebP2 Image Encode Stress-NG oneDNN OpenVINO OpenFOAM OpenVINO AOM AV1 Stress-NG WebP Image Encode Mobile Neural Network spaCy Aircrack-ng TensorFlow FFmpeg FFmpeg AOM AV1 a b
zen 1 epyc ai-benchmark: Device Inference Score ai-benchmark: Device Training Score ai-benchmark: Device AI Score aircrack-ng: aom-av1: Speed 0 Two-Pass - Bosphorus 4K aom-av1: Speed 4 Two-Pass - Bosphorus 4K aom-av1: Speed 6 Realtime - Bosphorus 4K aom-av1: Speed 6 Two-Pass - Bosphorus 4K aom-av1: Speed 8 Realtime - Bosphorus 4K aom-av1: Speed 9 Realtime - Bosphorus 4K aom-av1: Speed 10 Realtime - Bosphorus 4K aom-av1: Speed 0 Two-Pass - Bosphorus 1080p aom-av1: Speed 4 Two-Pass - Bosphorus 1080p aom-av1: Speed 6 Realtime - Bosphorus 1080p aom-av1: Speed 6 Two-Pass - Bosphorus 1080p aom-av1: Speed 8 Realtime - Bosphorus 1080p aom-av1: Speed 9 Realtime - Bosphorus 1080p aom-av1: Speed 10 Realtime - Bosphorus 1080p astcenc: Fast astcenc: Medium astcenc: Thorough astcenc: Exhaustive blender: BMW27 - CPU-Only blender: Classroom - CPU-Only blender: Fishy Cat - CPU-Only blender: Barbershop - CPU-Only blender: Pabellon Barcelona - CPU-Only brl-cad: VGR Performance Metric blosc: blosclz shuffle blosc: blosclz bitshuffle clickhouse: 100M Rows Web Analytics Dataset, First Run / Cold Cache clickhouse: 100M Rows Web Analytics Dataset, Second Run clickhouse: 100M Rows Web Analytics Dataset, Third Run cpuminer-opt: Magi cpuminer-opt: x25x cpuminer-opt: scrypt cpuminer-opt: Deepcoin cpuminer-opt: Ringcoin cpuminer-opt: Blake-2 S cpuminer-opt: Garlicoin cpuminer-opt: Skeincoin cpuminer-opt: Myriad-Groestl cpuminer-opt: LBC, LBRY Credits cpuminer-opt: Quad SHA-256, Pyrite cpuminer-opt: Triple SHA-256, Onecoin encodec: 3 kbps encodec: 6 kbps encodec: 24 kbps encodec: 1.5 kbps rocksdb: Rand Read rocksdb: Update Rand rocksdb: Read While Writing rocksdb: Read Rand Write Rand ffmpeg: libx264 - Live ffmpeg: libx264 - Live ffmpeg: libx265 - Live ffmpeg: libx265 - Live ffmpeg: libx264 - Upload ffmpeg: libx264 - Upload ffmpeg: libx265 - Upload ffmpeg: libx265 - Upload ffmpeg: libx264 - Platform ffmpeg: libx264 - Platform ffmpeg: libx265 - Platform ffmpeg: libx265 - Platform ffmpeg: libx264 - Video On Demand ffmpeg: libx264 - Video On Demand ffmpeg: libx265 - Video On Demand ffmpeg: libx265 - Video On Demand encode-flac: WAV To FLAC graphics-magick: Swirl graphics-magick: Rotate graphics-magick: Sharpen graphics-magick: Enhanced graphics-magick: Resizing graphics-magick: Noise-Gaussian graphics-magick: HWB Color Space jpegxl-decode: 1 jpegxl-decode: All jpegxl: PNG - 80 jpegxl: PNG - 90 jpegxl: JPEG - 80 jpegxl: JPEG - 90 jpegxl: PNG - 100 jpegxl: JPEG - 100 lammps: 20k Atoms lammps: Rhodopsin Protein avifenc: 0 avifenc: 2 avifenc: 6 avifenc: 6, Lossless avifenc: 10, Lossless minibude: OpenMP - BM1 minibude: OpenMP - BM1 minibude: OpenMP - BM2 minibude: OpenMP - BM2 mnn: nasnet mnn: mobilenetV3 mnn: squeezenetv1.1 mnn: resnet-v2-50 mnn: SqueezeNetV1.0 mnn: MobileNetV2_224 mnn: mobilenet-v1-1.0 mnn: inception-v3 natron: Spaceship ncnn: CPU - mobilenet ncnn: CPU-v2-v2 - mobilenet-v2 ncnn: CPU-v3-v3 - mobilenet-v3 ncnn: CPU - shufflenet-v2 ncnn: CPU - mnasnet ncnn: CPU - efficientnet-b0 ncnn: CPU - blazeface ncnn: CPU - googlenet ncnn: CPU - vgg16 ncnn: CPU - resnet18 ncnn: CPU - alexnet ncnn: CPU - resnet50 ncnn: CPU - yolov4-tiny ncnn: CPU - squeezenet_ssd ncnn: CPU - regnety_400m ncnn: CPU - vision_transformer ncnn: CPU - FastestDet nekrs: TurboPipe Periodic deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Synchronous Single-Stream deepsparse: CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection,YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection,YOLOv5s COCO - Synchronous Single-Stream deepsparse: CV Detection,YOLOv5s COCO - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream nginx: 200 nginx: 500 nginx: 1000 node-web-tooling: onednn: IP Shapes 1D - f32 - CPU onednn: IP Shapes 3D - f32 - CPU onednn: IP Shapes 1D - u8s8f32 - CPU onednn: IP Shapes 3D - u8s8f32 - CPU onednn: Convolution Batch Shapes Auto - f32 - CPU onednn: Deconvolution Batch shapes_1d - f32 - CPU onednn: Deconvolution Batch shapes_3d - f32 - CPU onednn: Convolution Batch Shapes Auto - u8s8f32 - CPU onednn: Deconvolution Batch shapes_1d - u8s8f32 - CPU onednn: Deconvolution Batch shapes_3d - u8s8f32 - CPU onednn: Recurrent Neural Network Training - f32 - CPU onednn: Recurrent Neural Network Inference - f32 - CPU onednn: Recurrent Neural Network Training - u8s8f32 - CPU onednn: Recurrent Neural Network Inference - u8s8f32 - CPU onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU onednn: Recurrent Neural Network Training - bf16bf16bf16 - CPU onednn: Recurrent Neural Network Inference - bf16bf16bf16 - CPU onednn: Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU openfoam: drivaerFastback, Small Mesh Size - Mesh Time openfoam: drivaerFastback, Small Mesh Size - Execution Time openfoam: drivaerFastback, Medium Mesh Size - Mesh Time openfoam: drivaerFastback, Medium Mesh Size - Execution Time openradioss: Bumper Beam openradioss: Cell Phone Drop Test openradioss: Bird Strike on Windshield openradioss: Rubber O-Ring Seal Installation openradioss: INIVOL and Fluid Structure Interaction Drop Container openvino: Face Detection FP16 - CPU openvino: Face Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP32 - CPU openvino: Person Detection FP32 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU ospray-studio: 1 - 4K - 1 - Path Tracer ospray-studio: 2 - 4K - 1 - Path Tracer ospray-studio: 3 - 4K - 1 - Path Tracer ospray-studio: 1 - 4K - 16 - Path Tracer ospray-studio: 1 - 4K - 32 - Path Tracer ospray-studio: 2 - 4K - 16 - Path Tracer ospray-studio: 2 - 4K - 32 - Path Tracer ospray-studio: 3 - 4K - 16 - Path Tracer ospray-studio: 3 - 4K - 32 - Path Tracer ospray-studio: 1 - 1080p - 1 - Path Tracer ospray-studio: 2 - 1080p - 1 - Path Tracer ospray-studio: 3 - 1080p - 1 - Path Tracer ospray-studio: 1 - 1080p - 16 - Path Tracer ospray-studio: 1 - 1080p - 32 - Path Tracer ospray-studio: 2 - 1080p - 16 - Path Tracer ospray-studio: 2 - 1080p - 32 - Path Tracer ospray-studio: 3 - 1080p - 16 - Path Tracer ospray-studio: 3 - 1080p - 32 - Path Tracer primesieve: 1e12 primesieve: 1e13 smhasher: wyhash smhasher: wyhash smhasher: SHA3-256 smhasher: SHA3-256 smhasher: Spooky32 smhasher: Spooky32 smhasher: fasthash32 smhasher: fasthash32 smhasher: FarmHash128 smhasher: FarmHash128 smhasher: t1ha2_atonce smhasher: t1ha2_atonce smhasher: FarmHash32 x86_64 AVX smhasher: FarmHash32 x86_64 AVX smhasher: t1ha0_aes_avx2 x86_64 smhasher: t1ha0_aes_avx2 x86_64 smhasher: MeowHash x86_64 AES-NI smhasher: MeowHash x86_64 AES-NI spacy: en_core_web_lg spacy: en_core_web_trf srsran: OFDM_Test srsran: 4G PHY_DL_Test 100 PRB MIMO 64-QAM srsran: 4G PHY_DL_Test 100 PRB MIMO 64-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 64-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 64-QAM srsran: 4G PHY_DL_Test 100 PRB MIMO 256-QAM srsran: 4G PHY_DL_Test 100 PRB MIMO 256-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 256-QAM srsran: 4G PHY_DL_Test 100 PRB SISO 256-QAM srsran: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM srsran: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM stress-ng: MMAP stress-ng: NUMA stress-ng: Futex stress-ng: MEMFD stress-ng: Mutex stress-ng: Atomic stress-ng: Crypto stress-ng: Malloc stress-ng: Forking stress-ng: IO_uring stress-ng: SENDFILE stress-ng: CPU Cache stress-ng: CPU Stress stress-ng: Semaphores stress-ng: Matrix Math stress-ng: Vector Math stress-ng: Memory Copying stress-ng: Socket Activity stress-ng: Context Switching stress-ng: Glibc C String Functions stress-ng: Glibc Qsort Data Sorting stress-ng: System V Message Passing svt-av1: Preset 4 - Bosphorus 4K svt-av1: Preset 8 - Bosphorus 4K svt-av1: Preset 10 - Bosphorus 4K svt-av1: Preset 12 - Bosphorus 4K svt-av1: Preset 4 - Bosphorus 1080p svt-av1: Preset 8 - Bosphorus 1080p svt-av1: Preset 10 - Bosphorus 1080p svt-av1: Preset 12 - Bosphorus 1080p tensorflow: CPU - 16 - AlexNet tensorflow: CPU - 32 - AlexNet tensorflow: CPU - 64 - AlexNet tensorflow: CPU - 256 - AlexNet tensorflow: CPU - 512 - AlexNet tensorflow: CPU - 16 - GoogLeNet tensorflow: CPU - 16 - ResNet-50 tensorflow: CPU - 32 - GoogLeNet tensorflow: CPU - 32 - ResNet-50 tensorflow: CPU - 64 - GoogLeNet tensorflow: CPU - 64 - ResNet-50 tensorflow: CPU - 256 - GoogLeNet tensorflow: CPU - 256 - ResNet-50 tensorflow: CPU - 512 - GoogLeNet tensorflow: CPU - 512 - ResNet-50 build-python: Default build-python: Released Build, PGO + LTO Optimized build-erlang: Time To Compile build-nodejs: Time To Compile build-php: Time To Compile build-wasmer: Time To Compile unpack-linux: linux-5.19.tar.xz webp: Default webp: Quality 100 webp: Quality 100, Lossless webp: Quality 100, Highest Compression webp: Quality 100, Lossless, Highest Compression webp2: Default webp2: Quality 75, Compression Effort 7 webp2: Quality 95, Compression Effort 7 webp2: Quality 100, Compression Effort 5 webp2: Quality 100, Lossless Compression xmrig: Monero - 1M xmrig: Wownero - 1M y-cruncher: 1B y-cruncher: 500M a b 1245 764 2009 76103.367 0.21 5.59 16.41 8.95 18.51 21.01 21.95 0.6 11.06 33.12 24.96 41.79 48.34 45.43 382.0137 151.8067 19.4177 2.2087 35.5 87.25 51.46 409.74 126.81 413004 8493.2 5328.2 241.18 260.83 272.09 1711.45 1458.5 496.52 27510 18480 1098730 13970 222070 154070 56960 374460 718010 66.471 74.538 78.457 67.588 198794977 199394 5807440 1551427 38.85 129.99 86.74 58.22 314.58 8.03 241.646159513 10.45 249.00936878 30.42 356.59243127 21.24 250.83 30.20 357.150956529 21.21 28.868 1243 443 561 743 167 692 1107 26.86 151.09 6.37 6.27 6.1 5.91 0.4 0.4 24.029 24.104 136.608 78.895 5.268 9.994 7.613 696.801 27.872 753.17 30.127 42.874 5.475 10.633 50.815 15.675 9.862 7.677 56.231 3.5 70.65 74.59 45.3 64 51.66 55.09 42.63 111.2 139.33 81.24 65.6 178.31 75.13 100.38 286.78 319.23 76.99 184723000000 22.6007 1387.0925 8.3625 119.5632 87.6373 362.7795 17.7786 56.2277 121.842 261.4175 39.942 25.014 249.2856 128.0038 73.5683 13.5743 192.2873 166.1119 51.2365 19.4996 96.0986 331.234 26.3211 37.9743 22.6456 1387.2953 8.1225 123.0974 100112.06 99535.87 96265.52 7.13 5.12686 20.1433 4.23625 4.52585 21.0326 19.1966 6.0371 28.7725 4.14024 2.51501 7823 5300.36 7893.57 5167.4 9.48918 8045.7 5157.6 29.7727 34.098937 61.454018 167.0036 541.95263 108.51 41.65 185.26 118 210.39 5.85 5452.5 3.9 8087.11 3.85 8245.26 352.62 90.64 7.61 4182.46 614.56 52.03 626.61 51.03 59.76 533.65 751.84 85.05 513.24 62.29 16068.58 3.94 19469.8 3.26 4233 4416 5127 78125 145358 80769 150980 91958 173492 1062 1096 1282 17188 34160 17838 35696 20597 41292 4.638 56.028 30087.23 20.507 171.09 2294.234 17912.16 40.875 8316.13 29.558 19050.24 51.419 18815.25 27.774 32899.32 34.619 77196.31 27.314 41424.16 47.356 6798 701 65700000 252.8 86.2 257.5 93.7 274.2 92 279.8 98.9 69.7 38 2418.27 140.8 802471.3 1751.42 17481256.34 156802.93 67640.35 146984386.17 19768.12 45731.6 678380.72 199.02 106647.2 7247266.69 215006.68 266411.19 2328.14 25316.76 14423511.17 5250106.25 673.06 7616183.78 1.314 27.818 46.683 67.124 3.531 76.278 161.41 209.488 42.69 48.66 59.55 69.6 69.82 24.77 8.13 29.09 9.6 30.13 10.11 30.11 10.92 30.61 11.24 22.619 399.66 124.066 217.334 62.123 73.78 10.559 12.25 7.73 1.06 2.43 0.41 5.77 0.38 0.20 7.42 0.04 12797.7 19243.7 21.577 11.184 1232 760 1992 74378.891 0.21 5.56 15.89 8.71 17.14 21.39 21.83 0.59 10.84 33.44 24.36 38.47 42.57 42.01 376.393 152.2417 19.4423 2.2093 35.62 87.77 51.66 408.93 126.75 406201 8534.5 5318.7 253.55 271.13 256.64 1702.45 1452.6 499.97 27490 18440 1098280 14530 222040 152420 56960 374430 718140 68.291 74.885 81.167 75.036 196826975 198198 6220606 1558868 38.76 130.30 88.58 57.01 315.290203876 8.01 240.519283432 10.50 250.99 30.18 356.16 21.27 250.37 30.25 358.61 21.12 28.744 1247 448 555 740 135 619 957 27.01 151.66 6.38 6.29 6.06 5.94 0.4 0.4 23.033 24.069 136.591 79.112 5.268 10.085 7.653 687.884 27.515 742.296 29.692 40.336 5.35 10.152 47.115 15.008 9.957 7.386 56.096 3.6 82.64 48.21 58.67 43.74 41.96 56.92 31.18 100.95 123.73 63.2 53.71 110.59 78.03 94.37 208.14 298.72 51.75 184827000000 22.568 1398.9258 8.2919 120.5776 87.3202 364.9851 19.1958 52.0728 121.8565 261.4233 40.071 24.9281 250.8978 127.2104 73.866 13.5155 192.6373 165.7009 51.6311 19.3504 96.5473 330.5267 26.046 38.3752 22.674 1390.6109 8.2228 121.5902 101709.29 98639.62 95346.38 7.02 5.03993 20.2831 3.81237 4.58464 20.0581 19.1791 5.6399 28.7918 4.63109 2.51294 8504.29 5167.81 8148.11 5128.18 7.74858 8013.88 5207.64 28.8968 34.188105 62.974805 167.06876 542.44501 108.31 41.58 186.71 117.84 207.85 5.82 5492 3.89 8150.2 3.89 8119.63 351.6 90.91 7.61 4185.2 613.31 52.14 626.42 51.04 61.27 520.79 751.3 85.09 508.7 62.84 16149.43 3.92 19580.12 3.24 4239 4403 5130 78264 145660 80340 151192 92675 173590 1060 1100 1277 17139 34140 17792 35397 20517 41490 4.66 56.246 30062.54 20.508 171.13 2294.164 17951.6 40.792 8317.14 29.586 19028.89 51.419 18809.49 27.774 32884.37 34.546 76443.61 27.314 41393.48 47.348 6795 685 66100000 252.3 85.9 256.9 93.6 273 91.5 279.4 99 69.7 38.1 2342.6 140.65 796565.52 1743.95 17702491.79 172559.2 67622.54 148029623.74 19540.99 45863.96 678714.22 259.54 107074.79 7261328.18 215110.79 266535.54 2485.61 25937.03 15090386.73 5386266.71 667.4 7408445.43 1.321 28.06 52.96 69.476 3.561 76.731 159.55 234.707 42.91 48.45 59.76 69.33 69.69 24.47 8.12 28.44 9.69 30.12 10.17 29.87 10.86 30.42 11.32 22.707 400.368 124.216 215.92 62.178 73.485 10.649 12.23 7.71 1.05 2.43 0.42 6.38 0.37 0.20 7.35 0.04 10914.9 19140.2 21.561 10.987 OpenBenchmarking.org
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K a b 1.2578 2.5156 3.7734 5.0312 6.289 5.59 5.56 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K a b 4 8 12 16 20 16.41 15.89 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K a b 3 6 9 12 15 8.95 8.71 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K a b 5 10 15 20 25 18.51 17.14 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K a b 5 10 15 20 25 21.01 21.39 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K a b 5 10 15 20 25 21.95 21.83 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p a b 0.135 0.27 0.405 0.54 0.675 0.60 0.59 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p a b 3 6 9 12 15 11.06 10.84 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p a b 8 16 24 32 40 33.12 33.44 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p a b 6 12 18 24 30 24.96 24.36 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p a b 10 20 30 40 50 41.79 38.47 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p a b 11 22 33 44 55 48.34 42.57 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
OpenBenchmarking.org Frames Per Second, More Is Better AOM AV1 3.5 Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p a b 10 20 30 40 50 45.43 42.01 1. (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm
BRL-CAD BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.32.6 VGR Performance Metric a b 90K 180K 270K 360K 450K 413004 406201 1. (CXX) g++ options: -std=c++11 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -pedantic -ldl -lm
ClickHouse ClickHouse is an open-source, high performance OLAP data management system. This test profile uses ClickHouse's standard benchmark recommendations per https://clickhouse.com/docs/en/operations/performance-test/ with the 100 million rows web analytics dataset. The reported value is the query processing time using the geometric mean of all queries performed. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.5.4.19 100M Rows Web Analytics Dataset, First Run / Cold Cache a b 60 120 180 240 300 241.18 253.55 MIN: 26.36 / MAX: 3529.41 MIN: 24.97 / MAX: 12000 1. ClickHouse server version 22.5.4.19 (official build).
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.5.4.19 100M Rows Web Analytics Dataset, Second Run a b 60 120 180 240 300 260.83 271.13 MIN: 35.4 / MAX: 12000 MIN: 42.58 / MAX: 12000 1. ClickHouse server version 22.5.4.19 (official build).
OpenBenchmarking.org Queries Per Minute, Geo Mean, More Is Better ClickHouse 22.5.4.19 100M Rows Web Analytics Dataset, Third Run a b 60 120 180 240 300 272.09 256.64 MIN: 38.46 / MAX: 6000 MIN: 41.35 / MAX: 4285.71 1. ClickHouse server version 22.5.4.19 (official build).
Cpuminer-Opt Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 3.20.3 Algorithm: Magi a b 400 800 1200 1600 2000 1711.45 1702.45 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
EnCodec EnCodec is a Facebook/Meta developed AI means of compressing audio files using High Fidelity Neural Audio Compression. EnCodec is designed to provide codec compression at 6 kbps using their novel AI-powered compression technique. The test profile uses a lengthy JFK speech as the audio input for benchmarking and the performance measurement is measuring the time to encode the EnCodec file from WAV. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better EnCodec 0.1.1 Target Bandwidth: 3 kbps a b 15 30 45 60 75 66.47 68.29
Facebook RocksDB OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Random Read a b 40M 80M 120M 160M 200M 198794977 196826975 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Read While Writing a b 1.3M 2.6M 3.9M 5.2M 6.5M 5807440 6220606 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
OpenBenchmarking.org Op/s, More Is Better Facebook RocksDB 7.5.3 Test: Read Random Write Random a b 300K 600K 900K 1200K 1500K 1551427 1558868 1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
FFmpeg This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Live a b 9 18 27 36 45 38.85 38.76 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Live a b 30 60 90 120 150 129.99 130.30 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Live a b 20 40 60 80 100 86.74 88.58 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Live a b 13 26 39 52 65 58.22 57.01 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Upload a b 70 140 210 280 350 314.58 315.29 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Upload a b 2 4 6 8 10 8.03 8.01 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Upload a b 50 100 150 200 250 241.65 240.52 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Upload a b 3 6 9 12 15 10.45 10.50 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Platform a b 50 100 150 200 250 249.01 250.99 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Platform a b 7 14 21 28 35 30.42 30.18 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Platform a b 80 160 240 320 400 356.59 356.16 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Platform a b 5 10 15 20 25 21.24 21.27 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Video On Demand a b 50 100 150 200 250 250.83 250.37 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx264 - Scenario: Video On Demand a b 7 14 21 28 35 30.20 30.25 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Seconds, Fewer Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Video On Demand a b 80 160 240 320 400 357.15 358.61 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 5.1.2 Encoder: libx265 - Scenario: Video On Demand a b 5 10 15 20 25 21.21 21.12 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Rotate a b 100 200 300 400 500 443 448 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Sharpen a b 120 240 360 480 600 561 555 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Enhanced a b 160 320 480 640 800 743 740 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Resizing a b 40 80 120 160 200 167 135 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: Noise-Gaussian a b 150 300 450 600 750 692 619 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
OpenBenchmarking.org Iterations Per Minute, More Is Better GraphicsMagick 1.3.38 Operation: HWB Color Space a b 200 400 600 800 1000 1107 957 1. (CC) gcc options: -fopenmp -O2 -ljbig -ltiff -lfreetype -ljpeg -lXext -lSM -lICE -lX11 -llzma -lbz2 -lxml2 -lz -lm -lpthread
JPEG XL Decoding libjxl The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is suited for JPEG XL decode performance testing to PNG output file, the pts/jpexl test is for encode performance. The JPEG XL encoding/decoding is done using the libjxl codebase. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MP/s, More Is Better JPEG XL Decoding libjxl 0.7 CPU Threads: 1 a b 6 12 18 24 30 26.86 27.01
JPEG XL libjxl The JPEG XL Image Coding System is designed to provide next-generation JPEG image capabilities with JPEG XL offering better image quality and compression over legacy JPEG. This test profile is currently focused on the multi-threaded JPEG XL image encode performance using the reference libjxl library. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: PNG - Quality: 80 a b 2 4 6 8 10 6.37 6.38 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: JPEG - Quality: 90 a b 1.3365 2.673 4.0095 5.346 6.6825 5.91 5.94 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: PNG - Quality: 100 a b 0.09 0.18 0.27 0.36 0.45 0.4 0.4 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
OpenBenchmarking.org MP/s, More Is Better JPEG XL libjxl 0.7 Input: JPEG - Quality: 100 a b 0.09 0.18 0.27 0.36 0.45 0.4 0.4 1. (CXX) g++ options: -fno-rtti -funwind-tables -O3 -O2 -fPIE -pie -lm -latomic
miniBUDE MiniBUDE is a mini application for the the core computation of the Bristol University Docking Engine (BUDE). This test profile currently makes use of the OpenMP implementation of miniBUDE for CPU benchmarking. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GFInst/s, More Is Better miniBUDE 20210901 Implementation: OpenMP - Input Deck: BM1 a b 150 300 450 600 750 696.80 687.88 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
OpenBenchmarking.org Billion Interactions/s, More Is Better miniBUDE 20210901 Implementation: OpenMP - Input Deck: BM1 a b 7 14 21 28 35 27.87 27.52 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
OpenBenchmarking.org GFInst/s, More Is Better miniBUDE 20210901 Implementation: OpenMP - Input Deck: BM2 a b 160 320 480 640 800 753.17 742.30 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
OpenBenchmarking.org Billion Interactions/s, More Is Better miniBUDE 20210901 Implementation: OpenMP - Input Deck: BM2 a b 7 14 21 28 35 30.13 29.69 1. (CC) gcc options: -std=c99 -Ofast -ffast-math -fopenmp -march=native -lm
Mobile Neural Network MNN is the Mobile Neural Network as a highly efficient, lightweight deep learning framework developed by Alibaba. This MNN test profile is building the OpenMP / CPU threaded version for processor benchmarking and not any GPU-accelerated test. MNN does allow making use of AVX-512 extensions. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: nasnet a b 10 20 30 40 50 42.87 40.34 MIN: 37.41 / MAX: 68.89 MIN: 33.86 / MAX: 71.11 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: mobilenetV3 a b 1.2319 2.4638 3.6957 4.9276 6.1595 5.475 5.350 MIN: 5.32 / MAX: 6.04 MIN: 5.21 / MAX: 9.76 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: squeezenetv1.1 a b 3 6 9 12 15 10.63 10.15 MIN: 9.92 / MAX: 17.37 MIN: 9.78 / MAX: 10.82 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: resnet-v2-50 a b 11 22 33 44 55 50.82 47.12 MIN: 47.3 / MAX: 154.36 MIN: 42.84 / MAX: 126.19 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: SqueezeNetV1.0 a b 4 8 12 16 20 15.68 15.01 MIN: 15.23 / MAX: 18.11 MIN: 14.21 / MAX: 16.99 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: MobileNetV2_224 a b 3 6 9 12 15 9.862 9.957 MIN: 8.86 / MAX: 19.89 MIN: 9.44 / MAX: 24.23 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: mobilenet-v1-1.0 a b 2 4 6 8 10 7.677 7.386 MIN: 7.4 / MAX: 8.68 MIN: 6.86 / MAX: 15.47 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better Mobile Neural Network 2.1 Model: inception-v3 a b 13 26 39 52 65 56.23 56.10 MIN: 52.68 / MAX: 141.21 MIN: 52.83 / MAX: 181.73 1. (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -rdynamic -pthread -ldl
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU-v2-v2 - Model: mobilenet-v2 a b 20 40 60 80 100 74.59 48.21 MIN: 33.31 / MAX: 533.66 MIN: 33.67 / MAX: 520.92 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU-v3-v3 - Model: mobilenet-v3 a b 13 26 39 52 65 45.30 58.67 MIN: 34.34 / MAX: 507.69 MIN: 35.55 / MAX: 509.29 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: shufflenet-v2 a b 14 28 42 56 70 64.00 43.74 MIN: 39.84 / MAX: 607.1 MIN: 41.64 / MAX: 214.3 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: mnasnet a b 12 24 36 48 60 51.66 41.96 MIN: 32.72 / MAX: 481.89 MIN: 33.28 / MAX: 465.54 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: efficientnet-b0 a b 13 26 39 52 65 55.09 56.92 MIN: 47.62 / MAX: 609.55 MIN: 48.95 / MAX: 685.42 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: blazeface a b 10 20 30 40 50 42.63 31.18 MIN: 20.26 / MAX: 315.89 MIN: 20.06 / MAX: 313.69 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: googlenet a b 20 40 60 80 100 111.20 100.95 MIN: 70.72 / MAX: 809.14 MIN: 69.07 / MAX: 774.08 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: vgg16 a b 30 60 90 120 150 139.33 123.73 MIN: 87.74 / MAX: 205.11 MIN: 86.64 / MAX: 206.07 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: resnet18 a b 20 40 60 80 100 81.24 63.20 MIN: 48.42 / MAX: 297.84 MIN: 44.6 / MAX: 292.13 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: alexnet a b 15 30 45 60 75 65.60 53.71 MIN: 37.25 / MAX: 140.97 MIN: 30.79 / MAX: 138.09 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: resnet50 a b 40 80 120 160 200 178.31 110.59 MIN: 95.64 / MAX: 698.94 MIN: 83.5 / MAX: 688.52 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: yolov4-tiny a b 20 40 60 80 100 75.13 78.03 MIN: 66.22 / MAX: 281.19 MIN: 65.59 / MAX: 285.39 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: squeezenet_ssd a b 20 40 60 80 100 100.38 94.37 MIN: 74.34 / MAX: 677.58 MIN: 69.13 / MAX: 706.07 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: regnety_400m a b 60 120 180 240 300 286.78 208.14 MIN: 186.63 / MAX: 3348.9 MIN: 182.64 / MAX: 1526.45 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: vision_transformer a b 70 140 210 280 350 319.23 298.72 MIN: 270.65 / MAX: 367.03 MIN: 248.9 / MAX: 3220.77 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20220729 Target: CPU - Model: FastestDet a b 20 40 60 80 100 76.99 51.75 MIN: 50.2 / MAX: 651.74 MIN: 47.57 / MAX: 300.51 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
nekRS nekRS is an open-source Navier Stokes solver based on the spectral element method. NekRS supports both CPU and GPU/accelerator support though this test profile is currently configured for CPU execution. NekRS is part of Nek5000 of the Mathematics and Computer Science MCS at Argonne National Laboratory. This nekRS benchmark is primarily relevant to large core count HPC servers and otherwise may be very time consuming. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FLOP/s, More Is Better nekRS 22.0 Input: TurboPipe Periodic a b 40000M 80000M 120000M 160000M 200000M 184723000000 184827000000 1. (CXX) g++ options: -fopenmp -O2 -march=native -mtune=native -ftree-vectorize -lmpi_cxx -lmpi
nginx This is a benchmark of the lightweight Nginx HTTP(S) web-server. This Nginx web server benchmark test profile makes use of the wrk program for facilitating the HTTP requests over a fixed period time with a configurable number of concurrent clients/connections. HTTPS with a self-signed OpenSSL certificate is used by this test for local benchmarking. Learn more via the OpenBenchmarking.org test page.
Connections: 1
a: The test quit with a non-zero exit status.
b: The test quit with a non-zero exit status.
Connections: 20
a: The test quit with a non-zero exit status.
b: The test quit with a non-zero exit status.
Connections: 100
a: The test quit with a non-zero exit status.
b: The test quit with a non-zero exit status.
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 200 a b 20K 40K 60K 80K 100K 100112.06 101709.29 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 500 a b 20K 40K 60K 80K 100K 99535.87 98639.62 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
OpenBenchmarking.org Requests Per Second, More Is Better nginx 1.23.2 Connections: 1000 a b 20K 40K 60K 80K 100K 96265.52 95346.38 1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2
Connections: 4000
a: The test quit with a non-zero exit status.
b: The test quit with a non-zero exit status.
oneDNN OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU a b 1.1535 2.307 3.4605 4.614 5.7675 5.12686 5.03993 MIN: 4.73 MIN: 4.31 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU a b 5 10 15 20 25 20.14 20.28 MIN: 19.59 MIN: 19.81 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU a b 0.9532 1.9064 2.8596 3.8128 4.766 4.23625 3.81237 MIN: 3.68 MIN: 3.29 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU a b 1.0315 2.063 3.0945 4.126 5.1575 4.52585 4.58464 MIN: 4.25 MIN: 4.26 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU a b 5 10 15 20 25 21.03 20.06 MIN: 18.77 MIN: 18.98 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU a b 5 10 15 20 25 19.20 19.18 MIN: 17.49 MIN: 17.59 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU a b 2 4 6 8 10 6.0371 5.6399 MIN: 5.87 MIN: 5.49 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU a b 7 14 21 28 35 28.77 28.79 MIN: 26.6 MIN: 26.18 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU a b 1.042 2.084 3.126 4.168 5.21 4.14024 4.63109 MIN: 3.72 MIN: 4.26 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU a b 0.5659 1.1318 1.6977 2.2636 2.8295 2.51501 2.51294 MIN: 2.45 MIN: 2.44 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU a b 2K 4K 6K 8K 10K 7823.00 8504.29 MIN: 7444.07 MIN: 8211.26 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU a b 1100 2200 3300 4400 5500 5300.36 5167.81 MIN: 5196.31 MIN: 5104.36 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU a b 2K 4K 6K 8K 10K 7893.57 8148.11 MIN: 7395.65 MIN: 7996.39 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU a b 1100 2200 3300 4400 5500 5167.40 5128.18 MIN: 4891.41 MIN: 4632.83 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU a b 3 6 9 12 15 9.48918 7.74858 MIN: 8.93 MIN: 7.21 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU a b 2K 4K 6K 8K 10K 8045.70 8013.88 MIN: 7776.35 MIN: 6974.32 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU a b 1100 2200 3300 4400 5500 5157.60 5207.64 MIN: 5062.05 MIN: 5136.46 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
OpenBenchmarking.org ms, Fewer Is Better oneDNN 2.7 Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU a b 7 14 21 28 35 29.77 28.90 MIN: 28.66 MIN: 27.84 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
a: The test run did not produce a result.
b: The test run did not produce a result.
OpenFOAM OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Mesh Time a b 8 16 24 32 40 34.10 34.19 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Execution Time a b 14 28 42 56 70 61.45 62.97 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Medium Mesh Size - Mesh Time a b 40 80 120 160 200 167.00 167.07 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Medium Mesh Size - Execution Time a b 120 240 360 480 600 541.95 542.45 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenRadioss OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis OpenRadioss is based on Altair Radioss and open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2022.10.13 Model: Bumper Beam a b 20 40 60 80 100 108.51 108.31
OpenVINO This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Face Detection FP16 - Device: CPU a b 1.3163 2.6326 3.9489 5.2652 6.5815 5.85 5.82 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Face Detection FP16 - Device: CPU a b 1200 2400 3600 4800 6000 5452.5 5492.0 MIN: 4562.64 / MAX: 10241.51 MIN: 4746.67 / MAX: 7784.35 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Detection FP16 - Device: CPU a b 0.8775 1.755 2.6325 3.51 4.3875 3.90 3.89 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Person Detection FP16 - Device: CPU a b 2K 4K 6K 8K 10K 8087.11 8150.20 MIN: 7045.22 / MAX: 9970.66 MIN: 7213.66 / MAX: 9844.05 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Detection FP32 - Device: CPU a b 0.8753 1.7506 2.6259 3.5012 4.3765 3.85 3.89 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Person Detection FP32 - Device: CPU a b 2K 4K 6K 8K 10K 8245.26 8119.63 MIN: 7461.03 / MAX: 9786.72 MIN: 7294.93 / MAX: 9612.39 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16 - Device: CPU a b 80 160 240 320 400 352.62 351.60 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16 - Device: CPU a b 20 40 60 80 100 90.64 90.91 MIN: 47.23 / MAX: 198.88 MIN: 53.11 / MAX: 177.44 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Face Detection FP16-INT8 - Device: CPU a b 2 4 6 8 10 7.61 7.61 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Face Detection FP16-INT8 - Device: CPU a b 900 1800 2700 3600 4500 4182.46 4185.20 MIN: 4092.17 / MAX: 4381.6 MIN: 4084.91 / MAX: 4382.2 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU a b 130 260 390 520 650 614.56 613.31 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU a b 12 24 36 48 60 52.03 52.14 MIN: 40.33 / MAX: 91.5 MIN: 32.77 / MAX: 93.03 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16 - Device: CPU a b 140 280 420 560 700 626.61 626.42 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16 - Device: CPU a b 12 24 36 48 60 51.03 51.04 MIN: 26.64 / MAX: 92.7 MIN: 31.13 / MAX: 104.01 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU a b 14 28 42 56 70 59.76 61.27 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU a b 120 240 360 480 600 533.65 520.79 MIN: 247.13 / MAX: 2304.56 MIN: 296.95 / MAX: 2494.76 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU a b 160 320 480 640 800 751.84 751.30 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU a b 20 40 60 80 100 85.05 85.09 MIN: 45.26 / MAX: 130.84 MIN: 51.08 / MAX: 202.08 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU a b 110 220 330 440 550 513.24 508.70 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU a b 14 28 42 56 70 62.29 62.84 MIN: 33.1 / MAX: 180.25 MIN: 29.52 / MAX: 192.77 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b 3K 6K 9K 12K 15K 16068.58 16149.43 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b 0.8865 1.773 2.6595 3.546 4.4325 3.94 3.92 MIN: 1.94 / MAX: 58.65 MIN: 2.08 / MAX: 45.65 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b 4K 8K 12K 16K 20K 19469.80 19580.12 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2022.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b 0.7335 1.467 2.2005 2.934 3.6675 3.26 3.24 MIN: 1.85 / MAX: 36.31 MIN: 1.96 / MAX: 34.56 1. (CXX) g++ options: -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -pie -ldl
OSPRay Studio Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer a b 900 1800 2700 3600 4500 4233 4239 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer a b 900 1800 2700 3600 4500 4416 4403 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer a b 1100 2200 3300 4400 5500 5127 5130 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer a b 20K 40K 60K 80K 100K 78125 78264 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer a b 30K 60K 90K 120K 150K 145358 145660 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer a b 20K 40K 60K 80K 100K 80769 80340 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer a b 30K 60K 90K 120K 150K 150980 151192 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer a b 20K 40K 60K 80K 100K 91958 92675 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer a b 40K 80K 120K 160K 200K 173492 173590 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer a b 200 400 600 800 1000 1062 1060 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer a b 200 400 600 800 1000 1096 1100 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer a b 300 600 900 1200 1500 1282 1277 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer a b 4K 8K 12K 16K 20K 17188 17139 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer a b 7K 14K 21K 28K 35K 34160 34140 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer a b 4K 8K 12K 16K 20K 17838 17792 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer a b 8K 16K 24K 32K 40K 35696 35397 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer a b 4K 8K 12K 16K 20K 20597 20517 1. (CXX) g++ options: -O3 -lm -ldl
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.11 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer a b 9K 18K 27K 36K 45K 41292 41490 1. (CXX) g++ options: -O3 -lm -ldl
spaCy The spaCy library is an open-source solution for advanced neural language processing (NLP). The spaCy library leverages Python and is a leading neural language processing solution. This test profile times the spaCy CPU performance with various models. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org tokens/sec, More Is Better spaCy 3.4.1 Model: en_core_web_lg a b 1500 3000 4500 6000 7500 6798 6795
srsRAN srsRAN is an open-source LTE/5G software radio suite created by Software Radio Systems (SRS). The srsRAN radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Samples / Second, More Is Better srsRAN 22.04.1 Test: OFDM_Test a b 14M 28M 42M 56M 70M 65700000 66100000 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM a b 60 120 180 240 300 252.8 252.3 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM a b 20 40 60 80 100 86.2 85.9 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM a b 60 120 180 240 300 257.5 256.9 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM a b 20 40 60 80 100 93.7 93.6 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM a b 60 120 180 240 300 274.2 273.0 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM a b 20 40 60 80 100 92.0 91.5 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM a b 60 120 180 240 300 279.8 279.4 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM a b 20 40 60 80 100 98.9 99.0 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org eNb Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM a b 16 32 48 64 80 69.7 69.7 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org UE Mb/s, More Is Better srsRAN 22.04.1 Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM a b 9 18 27 36 45 38.0 38.1 1. (CXX) g++ options: -std=c++14 -fno-strict-aliasing -march=native -mfpmath=sse -mavx2 -fvisibility=hidden -O3 -fno-trapping-math -fno-math-errno -ldl -lpthread -lm
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: NUMA a b 30 60 90 120 150 140.80 140.65 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Futex a b 200K 400K 600K 800K 1000K 802471.30 796565.52 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: MEMFD a b 400 800 1200 1600 2000 1751.42 1743.95 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Mutex a b 4M 8M 12M 16M 20M 17481256.34 17702491.79 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Atomic a b 40K 80K 120K 160K 200K 156802.93 172559.20 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Crypto a b 14K 28K 42K 56K 70K 67640.35 67622.54 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Malloc a b 30M 60M 90M 120M 150M 146984386.17 148029623.74 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Forking a b 4K 8K 12K 16K 20K 19768.12 19540.99 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: IO_uring a b 10K 20K 30K 40K 50K 45731.60 45863.96 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: SENDFILE a b 150K 300K 450K 600K 750K 678380.72 678714.22 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: CPU Cache a b 60 120 180 240 300 199.02 259.54 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: CPU Stress a b 20K 40K 60K 80K 100K 106647.20 107074.79 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Semaphores a b 1.6M 3.2M 4.8M 6.4M 8M 7247266.69 7261328.18 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Matrix Math a b 50K 100K 150K 200K 250K 215006.68 215110.79 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Vector Math a b 60K 120K 180K 240K 300K 266411.19 266535.54 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread
Test: x86_64 RdRand
a: The test run did not produce a result. E: stress-ng: error: [1466138] No stress workers invoked (one or more were unsupported)
b: The test run did not produce a result. E: stress-ng: error: [2538455] No stress workers invoked (one or more were unsupported)
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.14.06 Test: Memory Copying a b 500 1000 1500 2000 2500 2328.14 2485.61 1. (CC) gcc options: -O2 -std=gnu99 -lm -fuse-ld=gold -lapparmor -latomic -lc -lcrypt -ldl -ljpeg -lrt -lsctp -lz -pthread