Intel Xeon Gold 6421N testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.
a Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x2b0000c0
Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.6
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
b Processor: Intel Xeon Gold 6421N @ 3.60GHz (32 Cores / 64 Threads), Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS), Chipset: Intel Device 1bce, Memory: 512GB, Disk: 3 x 3841GB Micron_9300_MTFDHAL3T8TDP, Graphics: ASPEED, Monitor: VGA HDMI, Network: 4 x Intel E810-C for QSFP
OS: Ubuntu 22.04, Kernel: 5.15.0-47-generic (x86_64), Desktop: GNOME Shell 42.4, Display Server: X Server 1.21.1.3, Vulkan: 1.2.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 1600x1200
a vs. b Comparison (Phoronix Test Suite): per-test percentage-difference chart between runs a and b. The largest highlighted deltas reached roughly 38% (Apache IoTDB, 100 - 100 - 200) and 23% (Stress-NG CPU Cache), with most other results differing by only a few percent. Benchmarks represented: Apache IoTDB, Stress-NG, libxsmm, HeFFTe, Neural Magic DeepSparse, Redis 7.0.12 + memtier_benchmark, srsRAN Project, and Liquid-DSP.
new xeon apache-iotdb: 100 - 1 - 200 apache-iotdb: 100 - 1 - 500 apache-iotdb: 200 - 1 - 200 apache-iotdb: 200 - 1 - 500 apache-iotdb: 500 - 1 - 200 apache-iotdb: 500 - 1 - 500 apache-iotdb: 100 - 100 - 200 apache-iotdb: 100 - 100 - 500 apache-iotdb: 200 - 100 - 200 apache-iotdb: 200 - 100 - 500 apache-iotdb: 500 - 100 - 200 apache-iotdb: 500 - 100 - 500 stress-ng: Hash stress-ng: MMAP stress-ng: NUMA stress-ng: Pipe stress-ng: Poll stress-ng: Zlib stress-ng: Futex stress-ng: MEMFD stress-ng: Mutex stress-ng: Atomic stress-ng: Crypto stress-ng: Malloc stress-ng: Cloning stress-ng: Forking stress-ng: Pthread stress-ng: AVL Tree stress-ng: IO_uring stress-ng: SENDFILE stress-ng: CPU Cache stress-ng: CPU Stress stress-ng: Semaphores stress-ng: Matrix Math stress-ng: Vector Math stress-ng: Function Call stress-ng: x86_64 RdRand stress-ng: Floating Point stress-ng: Matrix 3D Math stress-ng: Memory Copying stress-ng: Vector Shuffle stress-ng: Socket Activity stress-ng: Wide Vector Math stress-ng: Context Switching stress-ng: Fused Multiply-Add stress-ng: Vector Floating Point stress-ng: Glibc C String Functions stress-ng: Glibc Qsort Data Sorting stress-ng: System V Message Passing vvenc: Bosphorus 4K - Fast vvenc: Bosphorus 4K - Faster vvenc: Bosphorus 1080p - Fast vvenc: Bosphorus 1080p - Faster hpcg: 104 104 104 - 60 hpcg: 144 144 144 - 60 hpcg: 160 160 160 - 60 heffte: c2c - FFTW - float - 128 heffte: c2c - FFTW - float - 256 heffte: c2c - FFTW - float - 512 heffte: r2c - FFTW - float - 128 heffte: r2c - FFTW - float - 256 heffte: r2c - FFTW - float - 512 heffte: c2c - FFTW - double - 128 heffte: c2c - FFTW - double - 256 heffte: c2c - FFTW - double - 512 heffte: c2c - Stock - float - 128 heffte: c2c - Stock - float - 256 heffte: c2c - Stock - float - 512 heffte: r2c - FFTW - double - 128 heffte: r2c - FFTW - double - 256 heffte: r2c - FFTW - double - 512 heffte: r2c - Stock - float - 128 heffte: r2c - Stock - float - 256 heffte: r2c - Stock - float - 512 heffte: c2c - Stock - double - 128 heffte: c2c - Stock - double - 256 heffte: c2c - Stock - double - 512 heffte: r2c - Stock - double - 128 heffte: r2c - Stock - double - 256 heffte: r2c - Stock - double - 512 libxsmm: 128 libxsmm: 256 libxsmm: 32 libxsmm: 64 deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream laghos: Triple Point Problem laghos: Sedov 
Blast Wave, ube_922_hex.mesh srsran: Downlink Processor Benchmark srsran: PUSCH Processor Benchmark, Throughput Total srsran: PUSCH Processor Benchmark, Throughput Thread palabos: 100 palabos: 400 palabos: 500 cassandra: Writes memtier-benchmark: Redis - 50 - 1:5 memtier-benchmark: Redis - 100 - 1:5 memtier-benchmark: Redis - 50 - 1:10 memtier-benchmark: Redis - 100 - 1:10 apache-iotdb: 100 - 1 - 200 apache-iotdb: 100 - 1 - 500 apache-iotdb: 200 - 1 - 200 apache-iotdb: 200 - 1 - 500 apache-iotdb: 500 - 1 - 200 apache-iotdb: 500 - 1 - 500 apache-iotdb: 100 - 100 - 200 apache-iotdb: 100 - 100 - 500 apache-iotdb: 200 - 100 - 200 apache-iotdb: 200 - 100 - 500 apache-iotdb: 500 - 100 - 200 apache-iotdb: 500 - 100 - 500 liquid-dsp: 16 - 256 - 32 liquid-dsp: 16 - 256 - 57 liquid-dsp: 32 - 256 - 32 liquid-dsp: 32 - 256 - 57 liquid-dsp: 64 - 256 - 32 liquid-dsp: 64 - 256 - 57 liquid-dsp: 16 - 256 - 512 liquid-dsp: 32 - 256 - 512 liquid-dsp: 64 - 256 - 512 brl-cad: VGR Performance Metric deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Asynchronous Multi-Stream deepsparse: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream openfoam: drivaerFastback, Small Mesh Size - Mesh Time openfoam: drivaerFastback, Small Mesh Size - Execution Time openfoam: drivaerFastback, Medium Mesh Size - Mesh Time openfoam: drivaerFastback, Medium Mesh Size - Execution Time build-gdb: Time To Compile build-linux-kernel: defconfig build-linux-kernel: allmodconfig build-llvm: Ninja build-llvm: Unix Makefiles build-php: Time To Compile blender: BMW27 - CPU-Only blender: Classroom - CPU-Only blender: Fishy Cat - CPU-Only blender: Barbershop - CPU-Only blender: Pabellon Barcelona - CPU-Only a b 14.58 28.27 11.86 26.29 9.49 22.97 31.83 69.08 29.54 101.25 31.58 68.34 5577252.32 861.28 390.87 35837711.85 3669281.69 2647.81 1541676.36 549.94 15147444.51 133.83 50240.09 99373474.31 9740.57 89918.21 136846.01 294.26 1529665.98 582724.63 1537111.20 64111.11 62126446.21 160653.44 151386.31 22028.03 331416.52 10587.48 9599.93 7176.19 167204.21 24947.14 1745029.27 2572801.75 34197705.63 58243.38 26067360.60 696.65 5852281.71 5.842 11.020 16.100 30.946 27.7808 27.4213 27.5086 131.656 76.0299 78.8291 207.244 149.825 141.41 64.4263 38.9304 43.9665 85.7398 75.0892 72.5609 121.794 72.2893 74.4734 149.935 157.867 137.536 46.6357 38.9613 40.7438 92.3973 76.9042 76.6110 1211.8 879.6 440.0 833.8 34.5311 1074.8218 390.9076 121.6893 479.7876 
3227.0954 208.8975 35.1529 478.9108 208.8471 295.8295 46.3307 504.6114 137.3780 33.9358 177.78 216.86 705.8 5372.9 240.4 235.186 287.268 300.276 155626 2211638.65 2285996.17 2316281.26 2447092.01 710382.44 1191500.88 1045806.81 1505080.34 1576432.25 1916642.9 43074031.84 59041436.64 54224351.1 45677447.24 56894390.61 67607191.64 557945000 848435000 847085000 1328100000 1577300000 1728850000 243940000 383555000 513135000 466686 460.7818 14.8600 40.9109 131.4497 33.3278 4.9416 76.5597 453.4802 33.3894 76.5807 54.0674 345.1491 31.6750 116.3761 468.8046 27.965214 67.707331 144.69646 615.99074 41.905 40.438 445.385 263.154 323.856 42.351 47.15 127.78 64.07 493.45 159.94 14.98 28.45 12.18 26.64 9.87 21.63 43.86 73.56 31.63 98.87 31.69 68.01 5583978.14 856.14 392.08 36852791.12 3671617.97 2648.81 1492979.46 549.55 15192892.59 132.61 50243.48 99251227.28 9326.09 89966.29 136709.81 294.66 1503623.79 598173.56 1885833.11 64118.87 61651485.43 156668.43 151431.15 22106.49 331423.04 10601.10 9605.30 7180.43 167202.07 25282.31 1750003.43 2571092.69 34050669.23 58232.70 26125214.84 696.92 5854201.78 5.917 10.992 16.249 30.927 27.8405 27.3890 27.3978 130.982 75.3001 78.9605 206.217 154.053 141.193 62.2974 38.5182 44.0064 85.4850 74.9286 72.5391 122.460 72.1981 74.7148 151.803 164.047 137.740 49.5230 38.6757 40.6648 90.9851 77.0345 76.6041 1225.0 758.9 444.6 839.9 34.5539 1075.9571 391.9125 122.0367 480.5223 3233.9588 208.9908 37.3253 479.2241 211.2270 299.9277 46.5491 505.1309 143.4387 34.5447 176.92 217.19 710.9 5543.7 236.3 234.874 285.761 300.855 2217192.12 2227152.02 2293467.62 2304730.19 697217.55 1185338.02 1042859.03 1469808.89 1521587.4 2009050.46 34191814.86 56018457.87 51199962.11 46726912.46 56137174.7 65935725.67 558655000 862195000 847675000 1323900000 1576850000 1733700000 248820000 378650000 513040000 460.7588 14.8473 40.8061 131.0664 33.2781 4.9312 76.4684 428.6695 33.3680 75.7218 53.3291 343.5170 31.6488 111.4976 460.6707 27.948717 67.563163 144.93674 615.46018 42.006 40.451 445.380 262.884 319.852 42.382 47.22 127.76 64.01 493.61 OpenBenchmarking.org
Redis 7.0.12 + memtier_benchmark
Memtier_benchmark is a NoSQL Redis/Memcache traffic generation and benchmarking tool developed by Redis Labs. Learn more via the OpenBenchmarking.org test page.
Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5
a: The test run did not produce a result.
b: The test run did not produce a result.
Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:10
a: The test run did not produce a result.
b: The test run did not produce a result.
Stress-NG 0.15.10 - Bogo Ops/s, more is better. All Stress-NG results below were built with: (CXX) g++ options: -O2 -std=gnu99 -lc
Test: Pipe - b: 36852791.12 (SE +/- 79631.10, N = 2), a: 35837711.85 (SE +/- 1105250.10, N = 2)
Test: Poll - b: 3671617.97 (SE +/- 1953.54, N = 2), a: 3669281.69 (SE +/- 2536.76, N = 2)
Test: Futex - a: 1541676.36 (SE +/- 56630.43, N = 2), b: 1492979.46 (SE +/- 45385.58, N = 2)
Test: Mutex - b: 15192892.59 (SE +/- 2864.48, N = 2), a: 15147444.51 (SE +/- 23940.47, N = 2)
Test: Crypto - b: 50243.48 (SE +/- 18.13, N = 2), a: 50240.09 (SE +/- 3.65, N = 2)
Test: Malloc - a: 99373474.31 (SE +/- 129754.02, N = 2), b: 99251227.28 (SE +/- 83929.32, N = 2)
Test: Forking - b: 89966.29 (SE +/- 421.24, N = 2), a: 89918.21 (SE +/- 469.20, N = 2)
Test: Pthread - a: 136846.01 (SE +/- 971.78, N = 2), b: 136709.81 (SE +/- 102.07, N = 2)
Test: IO_uring - a: 1529665.98 (SE +/- 22482.34, N = 2), b: 1503623.79 (SE +/- 5229.94, N = 2)
Test: SENDFILE - b: 598173.56 (SE +/- 243.97, N = 2), a: 582724.63 (SE +/- 6799.74, N = 2)
Test: CPU Cache - b: 1885833.11 (SE +/- 234949.06, N = 2), a: 1537111.20 (SE +/- 31294.95, N = 2)
Test: CPU Stress - b: 64118.87 (SE +/- 38.95, N = 2), a: 64111.11 (SE +/- 12.73, N = 2)
Test: Semaphores - a: 62126446.21 (SE +/- 2077286.42, N = 2), b: 61651485.43 (SE +/- 466593.23, N = 2)
Test: Matrix Math - a: 160653.44 (SE +/- 2867.57, N = 2), b: 156668.43 (SE +/- 332.46, N = 2)
Test: Vector Math - b: 151431.15 (SE +/- 5.98, N = 2), a: 151386.31 (SE +/- 47.16, N = 2)
Test: Function Call - b: 22106.49 (SE +/- 74.09, N = 2), a: 22028.03 (SE +/- 80.03, N = 2)
Test: x86_64 RdRand - b: 331423.04 (SE +/- 1.14, N = 2), a: 331416.52 (SE +/- 2.35, N = 2)
Test: Floating Point - b: 10601.10 (SE +/- 17.77, N = 2), a: 10587.48 (SE +/- 1.07, N = 2)
Test: Matrix 3D Math - b: 9605.30 (SE +/- 4.08, N = 2), a: 9599.93 (SE +/- 34.45, N = 2)
Test: Memory Copying - b: 7180.43 (SE +/- 11.04, N = 2), a: 7176.19 (SE +/- 8.71, N = 2)
Test: Vector Shuffle - a: 167204.21 (SE +/- 6.63, N = 2), b: 167202.07 (SE +/- 6.04, N = 2)
Test: Socket Activity - b: 25282.31 (SE +/- 267.39, N = 2), a: 24947.14 (SE +/- 72.57, N = 2)
Test: Wide Vector Math - b: 1750003.43 (SE +/- 4139.63, N = 2), a: 1745029.27 (SE +/- 918.08, N = 2)
Test: Context Switching - a: 2572801.75 (SE +/- 678.57, N = 2), b: 2571092.69 (SE +/- 604.17, N = 2)
Test: Fused Multiply-Add - a: 34197705.63 (SE +/- 137631.48, N = 2), b: 34050669.23 (SE +/- 285.63, N = 2)
Test: Vector Floating Point - a: 58243.38 (SE +/- 30.71, N = 2), b: 58232.70 (SE +/- 4.11, N = 2)
Test: Glibc C String Functions - b: 26125214.84 (SE +/- 69329.81, N = 2), a: 26067360.60 (SE +/- 150617.25, N = 2)
Test: Glibc Qsort Data Sorting - b: 696.92 (SE +/- 0.46, N = 2), a: 696.65 (SE +/- 0.40, N = 2)
Test: System V Message Passing - b: 5854201.78 (SE +/- 9802.94, N = 2), a: 5852281.71 (SE +/- 7174.98, N = 2)
VVenC
VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
VVenC 1.9 - Frames Per Second, more is better. All VVenC results below were built with: (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
Video Input: Bosphorus 4K - Video Preset: Fast - b: 5.917 (SE +/- 0.015, N = 2), a: 5.842 (SE +/- 0.074, N = 2)
Video Input: Bosphorus 4K - Video Preset: Faster - a: 11.02 (SE +/- 0.00, N = 2), b: 10.99 (SE +/- 0.03, N = 2)
Video Input: Bosphorus 1080p - Video Preset: Fast - b: 16.25 (SE +/- 0.02, N = 2), a: 16.10 (SE +/- 0.17, N = 2)
Video Input: Bosphorus 1080p - Video Preset: Faster - a: 30.95 (SE +/- 0.06, N = 2), b: 30.93 (SE +/- 0.04, N = 2)
HeFFTe - Highly Efficient FFT for Exascale
HeFFTe is the Highly Efficient FFT for Exascale software developed as part of the Exascale Computing Project. This test profile uses HeFFTe's built-in speed benchmarks under a variety of configuration options, currently catering to CPU/processor testing. Learn more via the OpenBenchmarking.org test page.
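As a rough orientation, the sketch below shows how HeFFTe's C++ interface is typically exercised for a c2c transform like those benchmarked here; the grid size, single-rank box layout, and variable names are illustrative assumptions rather than the actual code of the test profile.

```cpp
// Minimal HeFFTe sketch: single-rank 3D complex-to-complex FFT with the FFTW backend.
// Box extents and names are illustrative; the PTS benchmark drives HeFFTe's own speed tests.
#include <heffte.h>
#include <mpi.h>
#include <complex>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // With a single rank, the input and output boxes both cover the whole 128^3 grid.
    heffte::box3d<> inbox({0, 0, 0}, {127, 127, 127});
    heffte::box3d<> outbox({0, 0, 0}, {127, 127, 127});

    // Plan a c2c transform using the FFTW backend (mirrors the "c2c - FFTW" configurations).
    heffte::fft3d<heffte::backend::fftw> fft(inbox, outbox, MPI_COMM_WORLD);

    std::vector<std::complex<double>> input(fft.size_inbox(), {1.0, 0.0});
    std::vector<std::complex<double>> output(fft.size_outbox());

    fft.forward(input.data(), output.data());   // forward transform
    fft.backward(output.data(), input.data());  // inverse transform

    MPI_Finalize();
    return 0;
}
```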
HeFFTe - Highly Efficient FFT for Exascale 2.3 - GFLOP/s, more is better. Built with: (CXX) g++ options: -O3
Test: c2c - Backend: FFTW - Precision: float - X Y Z: 128 - a: 131.66 (SE +/- 0.77, N = 2), b: 130.98 (SE +/- 0.61, N = 2)
libxsmm
Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm can make use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
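To illustrate the kind of small square GEMM being timed (M = N = K in the results below), here is a sketch using libxsmm's BLAS-style wrapper; the matrix contents and the chosen size are assumptions for illustration, not the benchmark's own driver.

```cpp
// Sketch of a small dense double-precision GEMM through libxsmm's BLAS-like wrapper.
// Size mirrors the "M N K: 64" case below; data is arbitrary.
#include <libxsmm.h>
#include <vector>

int main() {
    const libxsmm_blasint m = 64, n = 64, k = 64;
    const double alpha = 1.0, beta = 0.0;
    std::vector<double> a(static_cast<size_t>(m) * k, 1.0);
    std::vector<double> b(static_cast<size_t>(k) * n, 1.0);
    std::vector<double> c(static_cast<size_t>(m) * n, 0.0);

    libxsmm_init();
    // Column-major C = alpha * A * B + beta * C; libxsmm dispatches small problems
    // to JIT-generated kernels and otherwise falls back to a regular GEMM path.
    const char transa = 'N', transb = 'N';
    libxsmm_dgemm(&transa, &transb, &m, &n, &k,
                  &alpha, a.data(), &m, b.data(), &k,
                  &beta, c.data(), &m);
    libxsmm_finalize();
    return 0;
}
```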
libxsmm 2-1.17-3645 - GFLOPS/s, more is better. All libxsmm results below were built with: (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -msse4.2
M N K: 128 - b: 1225.0 (SE +/- 1.10, N = 2), a: 1211.8 (SE +/- 4.60, N = 2)
M N K: 256 - a: 879.6 (SE +/- 0.65, N = 2), b: 758.9 (SE +/- 5.75, N = 2)
M N K: 32 - b: 444.6 (SE +/- 0.15, N = 2), a: 440.0 (SE +/- 0.25, N = 2)
M N K: 64 - b: 839.9 (SE +/- 0.20, N = 2), a: 833.8 (SE +/- 1.05, N = 2)
Neural Magic DeepSparse 1.5 - items/sec, more is better
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - b: 34.55 (SE +/- 0.12, N = 2), a: 34.53 (SE +/- 0.06, N = 2)
Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream - b: 1075.96 (SE +/- 1.01, N = 2), a: 1074.82 (SE +/- 0.57, N = 2)
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream - b: 391.91 (SE +/- 0.12, N = 2), a: 390.91 (SE +/- 1.01, N = 2)
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream - b: 122.04 (SE +/- 0.22, N = 2), a: 121.69 (SE +/- 0.05, N = 2)
Laghos
Laghos (LAGrangian High-Order Solver) is a miniapp that solves the time-dependent Euler equations of compressible gas dynamics in a moving Lagrangian frame using unstructured high-order finite element spatial discretization and explicit high-order time-stepping. Learn more via the OpenBenchmarking.org test page.
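For reference, the Lagrangian-frame compressible Euler system that such hydro miniapps discretize is commonly written as below, where ρ is density, v velocity, e specific internal energy, σ the stress tensor, and x the particle position; this is the standard textbook form rather than anything specific to this test profile.

```latex
% Compressible Euler equations in the Lagrangian frame (generic form)
\begin{aligned}
\text{momentum:}\quad & \rho \,\frac{d\mathbf{v}}{dt} = \nabla \cdot \boldsymbol{\sigma}, \\
\text{energy:}\quad   & \rho \,\frac{de}{dt} = \boldsymbol{\sigma} : \nabla \mathbf{v}, \\
\text{mass:}\quad     & \frac{d\rho}{dt} = -\rho \,\nabla \cdot \mathbf{v}, \\
\text{mesh motion:}\quad & \frac{d\mathbf{x}}{dt} = \mathbf{v}.
\end{aligned}
```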
Laghos 3.1 - Major Kernels Total Rate, more is better. All Laghos results below were built with: (CXX) g++ options: -O3 -std=c++11 -lmfem -lHYPRE -lmetis -lrt -lmpi_cxx -lmpi
Test: Triple Point Problem - a: 177.78 (SE +/- 0.13, N = 2), b: 176.92 (SE +/- 0.02, N = 2)
Test: Sedov Blast Wave, ube_922_hex.mesh - b: 217.19 (SE +/- 0.18, N = 2), a: 216.86 (SE +/- 0.24, N = 2)
srsRAN Project
srsRAN Project is a complete ORAN-native 5G RAN solution created by Software Radio Systems (SRS). The srsRAN Project radio suite was formerly known as srsLTE and can be used for building your own software-defined radio (SDR) 4G/5G mobile network. Learn more via the OpenBenchmarking.org test page.
srsRAN Project 23.5 - Mbps, more is better. All srsRAN results below were built with: (CXX) g++ options: -march=native -mfma -O3 -fno-trapping-math -fno-math-errno -lgtest
Test: Downlink Processor Benchmark - b: 710.9 (SE +/- 1.60, N = 2), a: 705.8 (SE +/- 5.15, N = 2)
Test: PUSCH Processor Benchmark, Throughput Total - b: 5543.7 (SE +/- 95.40, N = 2), a: 5372.9 (SE +/- 143.30, N = 2)
Test: PUSCH Processor Benchmark, Throughput Thread - a: 240.4 (SE +/- 3.55, N = 2), b: 236.3 (SE +/- 0.10, N = 2)
Palabos
The Palabos library is a framework for general purpose Computational Fluid Dynamics (CFD). Palabos uses a kernel based on the Lattice Boltzmann method. This test profile uses the Palabos MPI-based Cavity3D benchmark. Learn more via the OpenBenchmarking.org test page.
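For reference, the single-relaxation-time (BGK) form of the lattice Boltzmann update that such kernels implement can be written as follows, with τ the relaxation time and c_i the discrete lattice velocities; Palabos supports several collision models, so this is the generic form rather than the benchmark's exact configuration.

```latex
% BGK lattice Boltzmann update: collide toward local equilibrium, then stream along c_i
f_i(\mathbf{x} + \mathbf{c}_i \,\Delta t,\; t + \Delta t)
  = f_i(\mathbf{x}, t)
  - \frac{\Delta t}{\tau}\left[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \right]
```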
Palabos 2.3 - Mega Site Updates Per Second, more is better. All Palabos results below were built with: (CXX) g++ options: -std=c++17 -pedantic -O3 -rdynamic -lcrypto -lcurl -lsz -lz -ldl -lm
Grid Size: 100 - a: 235.19 (SE +/- 0.02, N = 2), b: 234.87 (SE +/- 0.34, N = 2)
Grid Size: 400 - a: 287.27 (SE +/- 0.49, N = 2), b: 285.76 (SE +/- 1.54, N = 2)
Grid Size: 500 - b: 300.86 (SE +/- 1.17, N = 2), a: 300.28 (SE +/- 1.63, N = 2)
Redis 7.0.12 + memtier_benchmark 2.0 - Ops/sec, more is better. All memtier_benchmark results below were built with: (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 - a: 2285996.17 (SE +/- 6000.63, N = 2), b: 2227152.02 (SE +/- 3990.38, N = 2)
Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:10 - a: 2316281.26 (SE +/- 13610.76, N = 2), b: 2293467.62 (SE +/- 4548.93, N = 2)
Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 - a: 2447092.01 (SE +/- 114392.77, N = 2), b: 2304730.19 (SE +/- 12975.09, N = 2)
Liquid-DSP
LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
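The filter lengths in the configurations below correspond to the number of FIR taps convolved per output sample. A minimal sketch of driving liquid-dsp's complex FIR filter object from C++ follows; the header path, tap values, and sample data are illustrative assumptions, not the benchmark's own kernel.

```cpp
// Sketch: run a sample through a liquid-dsp FIR filter (complex input, real coefficients).
// h_len loosely mirrors the "Filter Length" axis of the results below; values are arbitrary.
#include <liquid/liquid.h>
#include <complex>

int main() {
    const unsigned int h_len = 57;      // number of filter taps
    float h[h_len];
    for (unsigned int i = 0; i < h_len; i++)
        h[i] = 1.0f / h_len;            // simple moving-average taps for illustration

    firfilt_crcf q = firfilt_crcf_create(h, h_len);

    std::complex<float> x(1.0f, 0.0f);  // one input sample
    std::complex<float> y;
    firfilt_crcf_push(q, x);            // push sample into the delay line
    firfilt_crcf_execute(q, &y);        // compute one filtered output

    firfilt_crcf_destroy(q);
    return 0;
}
```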
Liquid-DSP 1.6 - samples/s, more is better. All Liquid-DSP results below were built with: (CC) gcc options: -O3 -pthread -lm -lc -lliquid
Threads: 16 - Buffer Length: 256 - Filter Length: 32 - b: 558655000 (SE +/- 605000.00, N = 2), a: 557945000 (SE +/- 2065000.00, N = 2)
Threads: 16 - Buffer Length: 256 - Filter Length: 57 - b: 862195000 (SE +/- 695000.00, N = 2), a: 848435000 (SE +/- 14365000.00, N = 2)
Threads: 32 - Buffer Length: 256 - Filter Length: 32 - b: 847675000 (SE +/- 85000.00, N = 2), a: 847085000 (SE +/- 25000.00, N = 2)
Threads: 32 - Buffer Length: 256 - Filter Length: 57 - a: 1328100000 (SE +/- 300000.00, N = 2), b: 1323900000 (SE +/- 4400000.00, N = 2)
Threads: 64 - Buffer Length: 256 - Filter Length: 32 - a: 1577300000 (SE +/- 300000.00, N = 2), b: 1576850000 (SE +/- 450000.00, N = 2)
Threads: 64 - Buffer Length: 256 - Filter Length: 57 - b: 1733700000 (SE +/- 900000.00, N = 2), a: 1728850000 (SE +/- 550000.00, N = 2)
Threads: 16 - Buffer Length: 256 - Filter Length: 512 - b: 248820000 (SE +/- 3170000.00, N = 2), a: 243940000 (SE +/- 1950000.00, N = 2)
Threads: 32 - Buffer Length: 256 - Filter Length: 512 - a: 383555000 (SE +/- 1955000.00, N = 2), b: 378650000 (SE +/- 4920000.00, N = 2)
Threads: 64 - Buffer Length: 256 - Filter Length: 512 - a: 513135000 (SE +/- 385000.00, N = 2), b: 513040000 (SE +/- 800000.00, N = 2)
BRL-CAD
BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
BRL-CAD 7.36 - VGR Performance Metric, more is better. Built with: (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
VGR Performance Metric - a: 466686 (SE +/- 3768.50, N = 2)
Neural Magic DeepSparse 1.5 - ms/batch, fewer is better
Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream - b: 460.76 (SE +/- 2.44, N = 2), a: 460.78 (SE +/- 0.42, N = 2)
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream - b: 40.81 (SE +/- 0.01, N = 2), a: 40.91 (SE +/- 0.11, N = 2)
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream - b: 131.07 (SE +/- 0.22, N = 2), a: 131.45 (SE +/- 0.05, N = 2)
Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream - b: 460.67 (SE +/- 0.20, N = 2), a: 468.80 (SE +/- 1.46, N = 2)
OpenFOAM
OpenFOAM is the leading free, open-source software for computational fluid dynamics (CFD). This test profile currently uses the drivaerFastback test case for analyzing automotive aerodynamics or alternatively the older motorBike input. Learn more via the OpenBenchmarking.org test page.
OpenFOAM 10 - Seconds, fewer is better. All OpenFOAM results below were built with: (CXX) g++ options: -std=c++14 -m64 -O3 -ftemplate-depth-100 -fPIC -fuse-ld=bfd -Xlinker --add-needed --no-as-needed -lfiniteVolume -lmeshTools -lparallel -llagrangian -lregionModels -lgenericPatchFields -lOpenFOAM -ldl -lm
Input: drivaerFastback, Small Mesh Size - Mesh Time - b: 27.95 (SE +/- 0.05, N = 2), a: 27.97 (SE +/- 0.02, N = 2)
Input: drivaerFastback, Small Mesh Size - Execution Time - b: 67.56 (SE +/- 0.11, N = 2), a: 67.71 (SE +/- 0.09, N = 2)
Input: drivaerFastback, Medium Mesh Size - Mesh Time - a: 144.70 (SE +/- 0.01, N = 2), b: 144.94 (SE +/- 0.08, N = 2)
Input: drivaerFastback, Medium Mesh Size - Execution Time - b: 615.46 (SE +/- 0.03, N = 2), a: 615.99 (SE +/- 0.42, N = 2)
Blender 3.6 - Seconds, fewer is better
Blend File: BMW27 - Compute: CPU-Only - a: 47.15 (SE +/- 0.02, N = 2), b: 47.22 (SE +/- 0.08, N = 2)
Blend File: Classroom - Compute: CPU-Only - b: 127.76 (SE +/- 0.13, N = 2), a: 127.78 (SE +/- 0.05, N = 2)
Blend File: Barbershop - Compute: CPU-Only - a: 493.45 (SE +/- 0.22, N = 2), b: 493.61 (SE +/- 0.42, N = 2)
a Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x2b0000c0
Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.6
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 30 July 2023 19:35 by user phoronix.
b Processor: Intel Xeon Gold 6421N @ 3.60GHz (32 Cores / 64 Threads), Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS), Chipset: Intel Device 1bce, Memory: 512GB, Disk: 3 x 3841GB Micron_9300_MTFDHAL3T8TDP, Graphics: ASPEED, Monitor: VGA HDMI, Network: 4 x Intel E810-C for QSFP
OS: Ubuntu 22.04, Kernel: 5.15.0-47-generic (x86_64), Desktop: GNOME Shell 42.4, Display Server: X Server 1.21.1.3, Vulkan: 1.2.204, Compiler: GCC 11.2.0, File-System: ext4, Screen Resolution: 1600x1200
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Notes: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x2b0000c0
Java Notes: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.6
Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 31 July 2023 05:12 by user phoronix.