Tests for a future article. Intel Xeon Gold 6421N testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.
a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x2b0000c0Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)Python Notes: Python 3.10.6Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
b Processor: Intel Xeon Gold 6421N @ 3.60GHz (32 Cores / 64 Threads), Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS), Chipset: Intel Device 1bce, Memory: 512GB, Disk: 3 x 3841GB Micron_9300_MTFDHAL3T8TDP, Graphics: ASPEED, Monitor: VGA HDMI, Network: 4 x Intel E810-C for QSFP
OS: Ubuntu 22.04, Kernel: 5.15.0-47-generic (x86_64), Desktop: GNOME Shell 42.4, Display Server: X Server 1.21.1.3, Vulkan: 1.2.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 1600x1200
new xeon OpenBenchmarking.org Phoronix Test Suite Intel Xeon Gold 6421N @ 3.60GHz (32 Cores / 64 Threads) Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) Intel Device 1bce 512GB 3 x 3841GB Micron_9300_MTFDHAL3T8TDP ASPEED VGA HDMI 4 x Intel E810-C for QSFP Ubuntu 22.04 5.15.0-47-generic (x86_64) GNOME Shell 42.4 X Server 1.21.1.3 1.2.204 GCC 11.2.0 ext4 1600x1200 Processor Motherboard Chipset Memory Disk Graphics Monitor Network OS Kernel Desktop Display Server Vulkan Compiler File-System Screen Resolution New Xeon Benchmarks System Logs - Transparent Huge Pages: madvise - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v - Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x2b0000c0 - OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04) - Python 3.10.6 - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
a vs. b Comparison Phoronix Test Suite Baseline +9.5% +9.5% +19% +19% +28.5% +28.5% 37.8% 22.7% 7.1% 6.5% 6.2% 6.2% 5.8% 4.8% 4.4% 4.4% 4% 3.9% 3.2% 2.8% 2.8% 2.7% 2.7% 2.7% 2.3% 2% 100 - 100 - 200 100 - 100 - 200 26% CPU Cache 256 15.9% 200 - 100 - 200 100 - 100 - 500 500 - 1 - 500 6.2% c2c - Stock - double - 128 B.L.N.Q.A - A.M.S Redis - 100 - 1:10 6.2% 200 - 100 - 200 5.9% B.L.N.Q.A - A.M.S 100 - 100 - 500 5.4% 500 - 1 - 500 Cloning 4.4% N.T.C.B.b.u.S - A.M.S N.T.C.B.b.u.S - A.M.S 500 - 1 - 200 r2c - Stock - float - 256 500 - 1 - 200 3.6% c2c - FFTW - double - 128 3.4% Futex 3.3% P.P.B.T.T Pipe r2c - FFTW - float - 256 100 - 1 - 200 200 - 1 - 200 SENDFILE Redis - 100 - 1:5 2.6% Matrix Math 2.5% 500 - 100 - 500 2.5% 200 - 100 - 500 2.4% 200 - 1 - 500 2.4% 200 - 100 - 500 16 - 256 - 512 Apache IoTDB Apache IoTDB Stress-NG libxsmm Apache IoTDB Apache IoTDB Apache IoTDB HeFFTe - Highly Efficient FFT for Exascale Neural Magic DeepSparse Redis 7.0.12 + memtier_benchmark Apache IoTDB Neural Magic DeepSparse Apache IoTDB Apache IoTDB Stress-NG Neural Magic DeepSparse Neural Magic DeepSparse Apache IoTDB HeFFTe - Highly Efficient FFT for Exascale Apache IoTDB HeFFTe - Highly Efficient FFT for Exascale Stress-NG srsRAN Project Stress-NG HeFFTe - Highly Efficient FFT for Exascale Apache IoTDB Apache IoTDB Stress-NG Redis 7.0.12 + memtier_benchmark Stress-NG Apache IoTDB Apache IoTDB Apache IoTDB Apache IoTDB Liquid-DSP a b
new xeon heffte: c2c - Stock - double - 128 heffte: c2c - Stock - double - 512 heffte: r2c - FFTW - double - 512 heffte: r2c - Stock - float - 256 heffte: c2c - Stock - float - 256 heffte: r2c - FFTW - double - 128 laghos: Triple Point Problem libxsmm: 32 libxsmm: 64 libxsmm: 256 libxsmm: 128 stress-ng: Hash stress-ng: MMAP heffte: r2c - Stock - double - 256 laghos: Sedov Blast Wave, ube_922_hex.mesh palabos: 100 heffte: c2c - FFTW - float - 128 heffte: c2c - Stock - float - 128 heffte: c2c - FFTW - float - 256 heffte: c2c - Stock - float - 512 heffte: c2c - FFTW - float - 512 heffte: r2c - FFTW - double - 256 heffte: r2c - FFTW - float - 128 heffte: r2c - Stock - float - 128 heffte: r2c - FFTW - float - 256 heffte: r2c - Stock - float - 512 heffte: r2c - FFTW - float - 512 heffte: c2c - Stock - double - 256 heffte: c2c - FFTW - double - 128 heffte: r2c - Stock - double - 128 heffte: c2c - FFTW - double - 256 heffte: r2c - Stock - double - 512 heffte: c2c - FFTW - double - 512 palabos: 400 palabos: 500 stress-ng: NUMA stress-ng: Pipe stress-ng: Poll stress-ng: Zlib stress-ng: Futex stress-ng: MEMFD stress-ng: Mutex stress-ng: Atomic stress-ng: Crypto stress-ng: Malloc stress-ng: Cloning stress-ng: Forking stress-ng: Pthread stress-ng: AVL Tree stress-ng: IO_uring stress-ng: SENDFILE stress-ng: CPU Cache stress-ng: CPU Stress stress-ng: Semaphores stress-ng: Matrix Math stress-ng: Vector Math stress-ng: Function Call stress-ng: x86_64 RdRand stress-ng: Floating Point stress-ng: Matrix 3D Math stress-ng: Memory Copying stress-ng: Vector Shuffle stress-ng: Socket Activity stress-ng: Wide Vector Math stress-ng: Context Switching stress-ng: Fused Multiply-Add stress-ng: Vector Floating Point stress-ng: Glibc C String Functions stress-ng: Glibc Qsort Data Sorting stress-ng: System V Message Passing brl-cad: VGR Performance Metric deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream hpcg: 104 104 104 - 60 hpcg: 144 144 144 - 60 hpcg: 160 160 160 - 60 openfoam: drivaerFastback, Small Mesh Size - Mesh Time openfoam: drivaerFastback, Small Mesh Size - Execution Time openfoam: drivaerFastback, Medium Mesh Size - Mesh Time openfoam: drivaerFastback, Medium Mesh Size - Execution Time build-gdb: Time To Compile build-llvm: Ninja build-llvm: Unix Makefiles build-php: Time To Compile build-linux-kernel: defconfig build-linux-kernel: allmodconfig blender: BMW27 - CPU-Only blender: Classroom - CPU-Only blender: Fishy Cat - CPU-Only blender: Barbershop - CPU-Only blender: Pabellon Barcelona - CPU-Only vvenc: Bosphorus 4K - Fast vvenc: Bosphorus 4K - Faster vvenc: Bosphorus 1080p - Fast vvenc: Bosphorus 1080p - Faster liquid-dsp: 16 - 256 - 32 liquid-dsp: 16 - 256 - 57 liquid-dsp: 32 - 256 - 32 liquid-dsp: 32 - 256 - 57 liquid-dsp: 64 - 256 - 32 liquid-dsp: 64 - 256 - 57 liquid-dsp: 16 - 256 - 512 liquid-dsp: 32 - 256 - 512 liquid-dsp: 64 - 256 - 512 srsran: Downlink Processor Benchmark srsran: PUSCH Processor Benchmark, Throughput Total srsran: PUSCH Processor Benchmark, Throughput Thread apache-iotdb: 100 - 1 - 200 apache-iotdb: 100 - 1 - 200 apache-iotdb: 100 - 1 - 500 apache-iotdb: 100 - 1 - 500 apache-iotdb: 200 - 1 - 200 apache-iotdb: 200 - 1 - 200 apache-iotdb: 200 - 1 - 500 apache-iotdb: 200 - 1 - 500 apache-iotdb: 500 - 1 - 200 apache-iotdb: 500 - 1 - 200 apache-iotdb: 500 - 1 - 500 apache-iotdb: 500 - 1 - 500 apache-iotdb: 100 - 100 - 200 apache-iotdb: 100 - 100 - 200 apache-iotdb: 100 - 100 - 500 apache-iotdb: 100 - 100 - 500 apache-iotdb: 200 - 100 - 200 apache-iotdb: 200 - 100 - 200 apache-iotdb: 200 - 100 - 500 apache-iotdb: 200 - 100 - 500 apache-iotdb: 500 - 100 - 200 apache-iotdb: 500 - 100 - 200 apache-iotdb: 500 - 100 - 500 apache-iotdb: 500 - 100 - 500 memtier-benchmark: Redis - 50 - 1:5 memtier-benchmark: Redis - 100 - 1:5 memtier-benchmark: Redis - 50 - 1:10 memtier-benchmark: Redis - 100 - 1:10 cassandra: Writes a b 46.6357 40.7438 74.4734 157.867 75.0892 121.794 177.78 440.0 833.8 879.6 1211.8 5577252.32 861.28 76.9042 216.86 235.186 131.656 85.7398 76.0299 72.5609 78.8291 72.2893 207.244 149.935 149.825 137.536 141.41 38.9613 64.4263 92.3973 38.9304 76.6110 43.9665 287.268 300.276 390.87 35837711.85 3669281.69 2647.81 1541676.36 549.94 15147444.51 133.83 50240.09 99373474.31 9740.57 89918.21 136846.01 294.26 1529665.98 582724.63 1537111.20 64111.11 62126446.21 160653.44 151386.31 22028.03 331416.52 10587.48 9599.93 7176.19 167204.21 24947.14 1745029.27 2572801.75 34197705.63 58243.38 26067360.60 696.65 5852281.71 466686 34.5311 460.7818 1074.8218 14.8600 390.9076 40.9109 121.6893 131.4497 479.7876 33.3278 3227.0954 4.9416 208.8975 76.5597 35.1529 453.4802 478.9108 33.3894 208.8471 76.5807 295.8295 54.0674 46.3307 345.1491 504.6114 31.6750 137.3780 116.3761 33.9358 468.8046 27.7808 27.4213 27.5086 27.965214 67.707331 144.69646 615.99074 41.905 263.154 323.856 42.351 40.438 445.385 47.15 127.78 64.07 493.45 159.94 5.842 11.020 16.100 30.946 557945000 848435000 847085000 1328100000 1577300000 1728850000 243940000 383555000 513135000 705.8 5372.9 240.4 710382.44 14.58 1191500.88 28.27 1045806.81 11.86 1505080.34 26.29 1576432.25 9.49 1916642.9 22.97 43074031.84 31.83 59041436.64 69.08 54224351.1 29.54 45677447.24 101.25 56894390.61 31.58 67607191.64 68.34 2211638.65 2285996.17 2316281.26 2447092.01 155626 49.5230 40.6648 74.7148 164.047 74.9286 122.460 176.92 444.6 839.9 758.9 1225.0 5583978.14 856.14 77.0345 217.19 234.874 130.982 85.4850 75.3001 72.5391 78.9605 72.1981 206.217 151.803 154.053 137.740 141.193 38.6757 62.2974 90.9851 38.5182 76.6041 44.0064 285.761 300.855 392.08 36852791.12 3671617.97 2648.81 1492979.46 549.55 15192892.59 132.61 50243.48 99251227.28 9326.09 89966.29 136709.81 294.66 1503623.79 598173.56 1885833.11 64118.87 61651485.43 156668.43 151431.15 22106.49 331423.04 10601.10 9605.30 7180.43 167202.07 25282.31 1750003.43 2571092.69 34050669.23 58232.70 26125214.84 696.92 5854201.78 34.5539 460.7588 1075.9571 14.8473 391.9125 40.8061 122.0367 131.0664 480.5223 33.2781 3233.9588 4.9312 208.9908 76.4684 37.3253 428.6695 479.2241 33.3680 211.2270 75.7218 299.9277 53.3291 46.5491 343.5170 505.1309 31.6488 143.4387 111.4976 34.5447 460.6707 27.8405 27.3890 27.3978 27.948717 67.563163 144.93674 615.46018 42.006 262.884 319.852 42.382 40.451 445.380 47.22 127.76 64.01 493.61 5.917 10.992 16.249 30.927 558655000 862195000 847675000 1323900000 1576850000 1733700000 248820000 378650000 513040000 710.9 5543.7 236.3 697217.55 14.98 1185338.02 28.45 1042859.03 12.18 1469808.89 26.64 1521587.4 9.87 2009050.46 21.63 34191814.86 43.86 56018457.87 73.56 51199962.11 31.63 46726912.46 98.87 56137174.7 31.69 65935725.67 68.01 2217192.12 2227152.02 2293467.62 2304730.19 OpenBenchmarking.org
HeFFTe - Highly Efficient FFT for Exascale HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options and currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GFLOP/s, More Is Better HeFFTe - Highly Efficient FFT for Exascale 2.3 Test: c2c - Backend: Stock - Precision: double - X Y Z: 128 a b 11 22 33 44 55 SE +/- 0.26, N = 2 SE +/- 3.39, N = 2 46.64 49.52 1. (CXX) g++ options: -O3
Laghos Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Major Kernels Total Rate, More Is Better Laghos 3.1 Test: Triple Point Problem a b 40 80 120 160 200 SE +/- 0.13, N = 2 SE +/- 0.02, N = 2 177.78 176.92 1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi
libxsmm Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GFLOPS/s, More Is Better libxsmm 2-1.17-3645 M N K: 32 a b 100 200 300 400 500 SE +/- 0.25, N = 2 SE +/- 0.15, N = 2 440.0 444.6 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2
OpenBenchmarking.org GFLOPS/s, More Is Better libxsmm 2-1.17-3645 M N K: 64 a b 200 400 600 800 1000 SE +/- 1.05, N = 2 SE +/- 0.20, N = 2 833.8 839.9 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2
OpenBenchmarking.org GFLOPS/s, More Is Better libxsmm 2-1.17-3645 M N K: 256 a b 200 400 600 800 1000 SE +/- 0.65, N = 2 SE +/- 5.75, N = 2 879.6 758.9 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2
OpenBenchmarking.org GFLOPS/s, More Is Better libxsmm 2-1.17-3645 M N K: 128 a b 300 600 900 1200 1500 SE +/- 4.60, N = 2 SE +/- 1.10, N = 2 1211.8 1225.0 1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2
HeFFTe - Highly Efficient FFT for Exascale HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options and currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GFLOP/s, More Is Better HeFFTe - Highly Efficient FFT for Exascale 2.3 Test: r2c - Backend: Stock - Precision: double - X Y Z: 256 a b 20 40 60 80 100 SE +/- 0.40, N = 2 SE +/- 0.65, N = 2 76.90 77.03 1. (CXX) g++ options: -O3
Laghos Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Major Kernels Total Rate, More Is Better Laghos 3.1 Test: Sedov Blast Wave, ube_922_hex.mesh a b 50 100 150 200 250 SE +/- 0.24, N = 2 SE +/- 0.18, N = 2 216.86 217.19 1. (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi
Palabos The Palabos library is a framework for general purpose Computational Fluid Dynamics (CFD). Palabos uses a kernel based on the Lattice Boltzmann method. This test profile uses the Palabos MPI-based Cavity3D benchmark. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Mega Site Updates Per Second, More Is Better Palabos 2.3 Grid Size: 100 a b 50 100 150 200 250 SE +/- 0.02, N = 2 SE +/- 0.34, N = 2 235.19 234.87 1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm
HeFFTe - Highly Efficient FFT for Exascale HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options and currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GFLOP/s, More Is Better HeFFTe - Highly Efficient FFT for Exascale 2.3 Test: c2c - Backend: FFTW - Precision: float - X Y Z: 128 a b 30 60 90 120 150 SE +/- 0.77, N = 2 SE +/- 0.61, N = 2 131.66 130.98 1. (CXX) g++ options: -O3
Palabos The Palabos library is a framework for general purpose Computational Fluid Dynamics (CFD). Palabos uses a kernel based on the Lattice Boltzmann method. This test profile uses the Palabos MPI-based Cavity3D benchmark. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Mega Site Updates Per Second, More Is Better Palabos 2.3 Grid Size: 400 a b 60 120 180 240 300 SE +/- 0.49, N = 2 SE +/- 1.54, N = 2 287.27 285.76 1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm
OpenBenchmarking.org Mega Site Updates Per Second, More Is Better Palabos 2.3 Grid Size: 500 a b 70 140 210 280 350 SE +/- 1.63, N = 2 SE +/- 1.17, N = 2 300.28 300.86 1. (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Pipe a b 8M 16M 24M 32M 40M SE +/- 1105250.10, N = 2 SE +/- 79631.10, N = 2 35837711.85 36852791.12 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Poll a b 800K 1600K 2400K 3200K 4000K SE +/- 2536.76, N = 2 SE +/- 1953.54, N = 2 3669281.69 3671617.97 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Futex a b 300K 600K 900K 1200K 1500K SE +/- 56630.43, N = 2 SE +/- 45385.58, N = 2 1541676.36 1492979.46 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Mutex a b 3M 6M 9M 12M 15M SE +/- 23940.47, N = 2 SE +/- 2864.48, N = 2 15147444.51 15192892.59 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Crypto a b 11K 22K 33K 44K 55K SE +/- 3.65, N = 2 SE +/- 18.13, N = 2 50240.09 50243.48 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Malloc a b 20M 40M 60M 80M 100M SE +/- 129754.02, N = 2 SE +/- 83929.32, N = 2 99373474.31 99251227.28 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Forking a b 20K 40K 60K 80K 100K SE +/- 469.20, N = 2 SE +/- 421.24, N = 2 89918.21 89966.29 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Pthread a b 30K 60K 90K 120K 150K SE +/- 971.78, N = 2 SE +/- 102.07, N = 2 136846.01 136709.81 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: IO_uring a b 300K 600K 900K 1200K 1500K SE +/- 22482.34, N = 2 SE +/- 5229.94, N = 2 1529665.98 1503623.79 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: SENDFILE a b 130K 260K 390K 520K 650K SE +/- 6799.74, N = 2 SE +/- 243.97, N = 2 582724.63 598173.56 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: CPU Cache a b 400K 800K 1200K 1600K 2000K SE +/- 31294.95, N = 2 SE +/- 234949.06, N = 2 1537111.20 1885833.11 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: CPU Stress a b 14K 28K 42K 56K 70K SE +/- 12.73, N = 2 SE +/- 38.95, N = 2 64111.11 64118.87 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Semaphores a b 13M 26M 39M 52M 65M SE +/- 2077286.42, N = 2 SE +/- 466593.23, N = 2 62126446.21 61651485.43 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Matrix Math a b 30K 60K 90K 120K 150K SE +/- 2867.57, N = 2 SE +/- 332.46, N = 2 160653.44 156668.43 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Vector Math a b 30K 60K 90K 120K 150K SE +/- 47.16, N = 2 SE +/- 5.98, N = 2 151386.31 151431.15 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Function Call a b 5K 10K 15K 20K 25K SE +/- 80.03, N = 2 SE +/- 74.09, N = 2 22028.03 22106.49 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: x86_64 RdRand a b 70K 140K 210K 280K 350K SE +/- 2.35, N = 2 SE +/- 1.14, N = 2 331416.52 331423.04 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Floating Point a b 2K 4K 6K 8K 10K SE +/- 1.07, N = 2 SE +/- 17.77, N = 2 10587.48 10601.10 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Matrix 3D Math a b 2K 4K 6K 8K 10K SE +/- 34.45, N = 2 SE +/- 4.08, N = 2 9599.93 9605.30 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Memory Copying a b 1500 3000 4500 6000 7500 SE +/- 8.71, N = 2 SE +/- 11.04, N = 2 7176.19 7180.43 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Vector Shuffle a b 40K 80K 120K 160K 200K SE +/- 6.63, N = 2 SE +/- 6.04, N = 2 167204.21 167202.07 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Socket Activity a b 5K 10K 15K 20K 25K SE +/- 72.57, N = 2 SE +/- 267.39, N = 2 24947.14 25282.31 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Wide Vector Math a b 400K 800K 1200K 1600K 2000K SE +/- 918.08, N = 2 SE +/- 4139.63, N = 2 1745029.27 1750003.43 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Context Switching a b 600K 1200K 1800K 2400K 3000K SE +/- 678.57, N = 2 SE +/- 604.17, N = 2 2572801.75 2571092.69 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Fused Multiply-Add a b 7M 14M 21M 28M 35M SE +/- 137631.48, N = 2 SE +/- 285.63, N = 2 34197705.63 34050669.23 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Vector Floating Point a b 12K 24K 36K 48K 60K SE +/- 30.71, N = 2 SE +/- 4.11, N = 2 58243.38 58232.70 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Glibc C String Functions a b 6M 12M 18M 24M 30M SE +/- 150617.25, N = 2 SE +/- 69329.81, N = 2 26067360.60 26125214.84 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: Glibc Qsort Data Sorting a b 150 300 450 600 750 SE +/- 0.40, N = 2 SE +/- 0.46, N = 2 696.65 696.92 1. (CXX) g++ options: -O2 -std=gnu99 -lc
OpenBenchmarking.org Bogo Ops/s, More Is Better Stress-NG 0.15.10 Test: System V Message Passing a b 1.3M 2.6M 3.9M 5.2M 6.5M SE +/- 7174.98, N = 2 SE +/- 9802.94, N = 2 5852281.71 5854201.78 1. (CXX) g++ options: -O2 -std=gnu99 -lc
BRL-CAD BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.36 VGR Performance Metric a 100K 200K 300K 400K 500K SE +/- 3768.50, N = 2 466686 1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
Neural Magic DeepSparse OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream a b 8 16 24 32 40 SE +/- 0.06, N = 2 SE +/- 0.12, N = 2 34.53 34.55
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream a b 100 200 300 400 500 SE +/- 0.42, N = 2 SE +/- 2.44, N = 2 460.78 460.76
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream a b 200 400 600 800 1000 SE +/- 0.57, N = 2 SE +/- 1.01, N = 2 1074.82 1075.96
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream a b 90 180 270 360 450 SE +/- 1.01, N = 2 SE +/- 0.12, N = 2 390.91 391.91
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream a b 9 18 27 36 45 SE +/- 0.11, N = 2 SE +/- 0.01, N = 2 40.91 40.81
OpenBenchmarking.org items/sec, More Is Better Neural Magic DeepSparse 1.5 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream a b 30 60 90 120 150 SE +/- 0.05, N = 2 SE +/- 0.22, N = 2 121.69 122.04
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream a b 30 60 90 120 150 SE +/- 0.05, N = 2 SE +/- 0.22, N = 2 131.45 131.07
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.5 Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream a b 100 200 300 400 500 SE +/- 1.46, N = 2 SE +/- 0.20, N = 2 468.80 460.67
OpenFOAM OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Mesh Time a b 7 14 21 28 35 SE +/- 0.02, N = 2 SE +/- 0.05, N = 2 27.97 27.95 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Small Mesh Size - Execution Time a b 15 30 45 60 75 SE +/- 0.09, N = 2 SE +/- 0.11, N = 2 67.71 67.56 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Medium Mesh Size - Mesh Time a b 30 60 90 120 150 SE +/- 0.01, N = 2 SE +/- 0.08, N = 2 144.70 144.94 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm
OpenBenchmarking.org Seconds, Fewer Is Better OpenFOAM 10 Input: drivaerFastback, Medium Mesh Size - Execution Time a b 130 260 390 520 650 SE +/- 0.42, N = 2 SE +/- 0.03, N = 2 615.99 615.46 1. (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm
Blender OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: BMW27 - Compute: CPU-Only a b 11 22 33 44 55 SE +/- 0.02, N = 2 SE +/- 0.08, N = 2 47.15 47.22
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Classroom - Compute: CPU-Only a b 30 60 90 120 150 SE +/- 0.05, N = 2 SE +/- 0.13, N = 2 127.78 127.76
OpenBenchmarking.org Seconds, Fewer Is Better Blender 3.6 Blend File: Barbershop - Compute: CPU-Only a b 110 220 330 440 550 SE +/- 0.22, N = 2 SE +/- 0.42, N = 2 493.45 493.61
VVenC VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 4K - Video Preset: Fast a b 1.3313 2.6626 3.9939 5.3252 6.6565 SE +/- 0.074, N = 2 SE +/- 0.015, N = 2 5.842 5.917 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 4K - Video Preset: Faster a b 3 6 9 12 15 SE +/- 0.00, N = 2 SE +/- 0.03, N = 2 11.02 10.99 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 1080p - Video Preset: Fast a b 4 8 12 16 20 SE +/- 0.17, N = 2 SE +/- 0.02, N = 2 16.10 16.25 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 1080p - Video Preset: Faster a b 7 14 21 28 35 SE +/- 0.06, N = 2 SE +/- 0.04, N = 2 30.95 30.93 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 32 a b 120M 240M 360M 480M 600M SE +/- 2065000.00, N = 2 SE +/- 605000.00, N = 2 557945000 558655000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 57 a b 200M 400M 600M 800M 1000M SE +/- 14365000.00, N = 2 SE +/- 695000.00, N = 2 848435000 862195000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 32 - Buffer Length: 256 - Filter Length: 32 a b 200M 400M 600M 800M 1000M SE +/- 25000.00, N = 2 SE +/- 85000.00, N = 2 847085000 847675000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 32 - Buffer Length: 256 - Filter Length: 57 a b 300M 600M 900M 1200M 1500M SE +/- 300000.00, N = 2 SE +/- 4400000.00, N = 2 1328100000 1323900000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 32 a b 300M 600M 900M 1200M 1500M SE +/- 300000.00, N = 2 SE +/- 450000.00, N = 2 1577300000 1576850000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 57 a b 400M 800M 1200M 1600M 2000M SE +/- 550000.00, N = 2 SE +/- 900000.00, N = 2 1728850000 1733700000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 16 - Buffer Length: 256 - Filter Length: 512 a b 50M 100M 150M 200M 250M SE +/- 1950000.00, N = 2 SE +/- 3170000.00, N = 2 243940000 248820000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 32 - Buffer Length: 256 - Filter Length: 512 a b 80M 160M 240M 320M 400M SE +/- 1955000.00, N = 2 SE +/- 4920000.00, N = 2 383555000 378650000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
OpenBenchmarking.org samples/s, More Is Better Liquid-DSP 1.6 Threads: 64 - Buffer Length: 256 - Filter Length: 512 a b 110M 220M 330M 440M 550M SE +/- 385000.00, N = 2 SE +/- 800000.00, N = 2 513135000 513040000 1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid
srsRAN Project srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Mbps, More Is Better srsRAN Project 23.5 Test: Downlink Processor Benchmark a b 150 300 450 600 750 SE +/- 5.15, N = 2 SE +/- 1.60, N = 2 705.8 710.9 1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest
OpenBenchmarking.org Mbps, More Is Better srsRAN Project 23.5 Test: PUSCH Processor Benchmark, Throughput Total a b 1200 2400 3600 4800 6000 SE +/- 143.30, N = 2 SE +/- 95.40, N = 2 5372.9 5543.7 1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest
OpenBenchmarking.org Mbps, More Is Better srsRAN Project 23.5 Test: PUSCH Processor Benchmark, Throughput Thread a b 50 100 150 200 250 SE +/- 3.55, N = 2 SE +/- 0.10, N = 2 240.4 236.3 1. (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 a b 500K 1000K 1500K 2000K 2500K SE +/- 6000.63, N = 2 SE +/- 3990.38, N = 2 2285996.17 2227152.02 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 a b 500K 1000K 1500K 2000K 2500K SE +/- 13610.76, N = 2 SE +/- 4548.93, N = 2 2316281.26 2293467.62 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5
a: The test run did not produce a result.
b: The test run did not produce a result.
OpenBenchmarking.org Ops/sec, More Is Better Redis 7.0.12 + memtier_benchmark 2.0 Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 a b 500K 1000K 1500K 2000K 2500K SE +/- 114392.77, N = 2 SE +/- 12975.09, N = 2 2447092.01 2304730.19 1. (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10
a: The test run did not produce a result.
b: The test run did not produce a result.
a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x2b0000c0Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)Python Notes: Python 3.10.6Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 30 July 2023 19:35 by user phoronix.
b Processor: Intel Xeon Gold 6421N @ 3.60GHz (32 Cores / 64 Threads), Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS), Chipset: Intel Device 1bce, Memory: 512GB, Disk: 3 x 3841GB Micron_9300_MTFDHAL3T8TDP, Graphics: ASPEED, Monitor: VGA HDMI, Network: 4 x Intel E810-C for QSFP
OS: Ubuntu 22.04, Kernel: 5.15.0-47-generic (x86_64), Desktop: GNOME Shell 42.4, Display Server: X Server 1.21.1.3, Vulkan: 1.2.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 1600x1200
Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x2b0000c0Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)Python Notes: Python 3.10.6Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 31 July 2023 05:12 by user phoronix.