eps Tests for a future article. 2 x AMD EPYC 9684X 96-Core testing with a AMD Titanite_4G (RTI1007B BIOS) and ASPEED on Ubuntu 23.10 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2312240-NE-EPS17737430 a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113eJava Notes: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)Python Notes: Python 3.11.6Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
b Processor: 2 x AMD EPYC 9684X 96-Core @ 2.55GHz (192 Cores / 384 Threads), Motherboard: AMD Titanite_4G (RTI1007B BIOS), Chipset: AMD Device 14a4, Memory: 1520GB, Disk: 3201GB Micron_7450_MTFDKCB3T2TFS, Graphics: ASPEED, Network: Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 23.10, Kernel: 6.5.0-13-generic (x86_64), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 800x600
eps OpenBenchmarking.org Phoronix Test Suite 2 x AMD EPYC 9684X 96-Core @ 2.55GHz (192 Cores / 384 Threads) AMD Titanite_4G (RTI1007B BIOS) AMD Device 14a4 1520GB 3201GB Micron_7450_MTFDKCB3T2TFS ASPEED Broadcom NetXtreme BCM5720 PCIe Ubuntu 23.10 6.5.0-13-generic (x86_64) GCC 13.2.0 ext4 800x600 Processor Motherboard Chipset Memory Disk Graphics Network OS Kernel Compiler File-System Screen Resolution Eps Performance System Logs - Transparent Huge Pages: madvise - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v - Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e - OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10) - Python 3.11.6 - gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
a vs. b Comparison Phoronix Test Suite Baseline +7.7% +7.7% +15.4% +15.4% +23.1% +23.1% +30.8% +30.8% 7.7% 5.3% 4.3% 4.3% 2.7% 2.1% 4.6% 11.9% 30.8% 3.4% 11.7% 2.7% 9.7% 4.1% 9.6% 10.5% 6.6% 2.7% 2.5% 7.4% 6.7% 2.4% 2% 9.6% 3.1% 2.3% 13.1% CPU - 256 - ResNet-152 CPU - 1 - Efficientnet_v2_l Preset 13 - Bosphorus 4K Preset 12 - Bosphorus 4K Q.1.C.E.5 3.7% CPU - 256 - ResNet-50 3.3% CPU - 1 - ResNet-152 BLAS 1 - Q01 2.9% 1 - Q04 1 - Q05 1 - Q06 1 - Q07 1 - Q09 3.3% 1 - Q11 1 - Q12 4.2% 1 - Q13 9.6% 1 - Q14 7.1% 1 - Q15 3.4% 1 - Q16 9.9% 1 - Q17 1 - Q18 1 - Q19 8.5% 1 - Q22 5.9% 10 - Q01 10 - Q03 2.3% 10 - Q04 10 - Q05 14.7% 10 - Q06 10 - Q08 10 - Q09 2.8% 10 - Q10 10 - Q11 3.4% 10 - Q13 7.6% 10 - Q14 10 - Q15 10 - Q18 10 - Q19 50 - Q01 7.2% 50 - Q03 13.4% 50 - Q04 3.9% 50 - Q05 4.6% 50 - Q07 4% 50 - Q11 50 - Q12 50 - Q13 2.2% 50 - Q15 50 - Q16 5.3% 50 - Q18 50 - Q19 15.6% 50 - Q21 PyTorch PyTorch SVT-AV1 SVT-AV1 WebP2 Image Encode PyTorch PyTorch LeelaChessZero Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H Apache Spark TPC-H a b
eps lczero: BLAS lczero: Eigen xmrig: KawPow - 1M xmrig: Monero - 1M xmrig: Wownero - 1M xmrig: GhostRider - 1M xmrig: CryptoNight-Heavy - 1M xmrig: CryptoNight-Femto UPX2 - 1M java-scimark2: Composite java-scimark2: Monte Carlo java-scimark2: Fast Fourier Transform java-scimark2: Sparse Matrix Multiply java-scimark2: Dense LU Matrix Factorization java-scimark2: Jacobi Successive Over-Relaxation webp2: Default webp2: Quality 75, Compression Effort 7 webp2: Quality 95, Compression Effort 7 webp2: Quality 100, Compression Effort 5 webp2: Quality 100, Lossless Compression svt-av1: Preset 4 - Bosphorus 4K svt-av1: Preset 8 - Bosphorus 4K svt-av1: Preset 12 - Bosphorus 4K svt-av1: Preset 13 - Bosphorus 4K svt-av1: Preset 4 - Bosphorus 1080p svt-av1: Preset 8 - Bosphorus 1080p svt-av1: Preset 12 - Bosphorus 1080p svt-av1: Preset 13 - Bosphorus 1080p openssl: SHA256 openssl: SHA512 openssl: RSA4096 openssl: RSA4096 spark-tpch: 1 - Geometric Mean Of All Queries spark-tpch: 1 - Q01 spark-tpch: 1 - Q02 spark-tpch: 1 - Q03 spark-tpch: 1 - Q04 spark-tpch: 1 - Q05 spark-tpch: 1 - Q06 spark-tpch: 1 - Q07 spark-tpch: 1 - Q08 spark-tpch: 1 - Q09 spark-tpch: 1 - Q10 spark-tpch: 1 - Q11 spark-tpch: 1 - Q12 spark-tpch: 1 - Q13 spark-tpch: 1 - Q14 spark-tpch: 1 - Q15 spark-tpch: 1 - Q16 spark-tpch: 1 - Q17 spark-tpch: 1 - Q18 spark-tpch: 1 - Q19 spark-tpch: 1 - Q20 spark-tpch: 1 - Q21 spark-tpch: 1 - Q22 spark-tpch: 10 - Geometric Mean Of All Queries spark-tpch: 10 - Q01 spark-tpch: 10 - Q02 spark-tpch: 10 - Q03 spark-tpch: 10 - Q04 spark-tpch: 10 - Q05 spark-tpch: 10 - Q06 spark-tpch: 10 - Q07 spark-tpch: 10 - Q08 spark-tpch: 10 - Q09 spark-tpch: 10 - Q10 spark-tpch: 10 - Q11 spark-tpch: 10 - Q12 spark-tpch: 10 - Q13 spark-tpch: 10 - Q14 spark-tpch: 10 - Q15 spark-tpch: 10 - Q16 spark-tpch: 10 - Q17 spark-tpch: 10 - Q18 spark-tpch: 10 - Q19 spark-tpch: 10 - Q20 spark-tpch: 10 - Q21 spark-tpch: 10 - Q22 spark-tpch: 50 - Geometric Mean Of All Queries spark-tpch: 50 - Q01 spark-tpch: 50 - Q02 spark-tpch: 50 - Q03 spark-tpch: 50 - Q04 spark-tpch: 50 - Q05 spark-tpch: 50 - Q06 spark-tpch: 50 - Q07 spark-tpch: 50 - Q08 spark-tpch: 50 - Q09 spark-tpch: 50 - Q10 spark-tpch: 50 - Q11 spark-tpch: 50 - Q12 spark-tpch: 50 - Q13 spark-tpch: 50 - Q14 spark-tpch: 50 - Q15 spark-tpch: 50 - Q16 spark-tpch: 50 - Q17 spark-tpch: 50 - Q18 spark-tpch: 50 - Q19 spark-tpch: 50 - Q20 spark-tpch: 50 - Q21 spark-tpch: 50 - Q22 pytorch: CPU - 1 - ResNet-50 pytorch: CPU - 1 - ResNet-152 pytorch: CPU - 16 - ResNet-50 pytorch: CPU - 32 - ResNet-50 pytorch: CPU - 16 - ResNet-152 pytorch: CPU - 256 - ResNet-50 pytorch: CPU - 32 - ResNet-152 pytorch: CPU - 256 - ResNet-152 pytorch: CPU - 1 - Efficientnet_v2_l pytorch: CPU - 16 - Efficientnet_v2_l pytorch: CPU - 32 - Efficientnet_v2_l pytorch: CPU - 256 - Efficientnet_v2_l deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Synchronous Single-Stream deepsparse: ResNet-50, Baseline - Synchronous Single-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Synchronous Single-Stream deepsparse: ResNet-50, Sparse INT8 - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream a b 853 704 123558.6 123352.8 131141.9 31859.7 123041.6 123199.0 3984.62 1631.42 420.74 2809.01 13358.53 1703.42 9.48 0.83 0.45 6.51 0.11 8.248 86.434 178.910 176.670 21.424 165.104 571.875 635.810 281869895760 91630925473 98622.0 3244390.3 2.44964916 4.32006081 2.06179071 3.86442184 3.92525745 4.13122161 0.46822915 4.01044806 2.65584644 5.70969407 3.81359665 1.27338135 2.17542648 1.58815936 2.06485331 2.50185966 1.38147259 2.95993924 5.62853845 0.79092395 3.05739617 9.64531231 1.00769047 10.72150208 7.58889151 7.43104283 13.97308763 12.34571203 16.44365629 2.05104745 14.65200933 15.51824761 21.90670204 15.17488098 8.00292349 9.94400438 7.37728373 7.07622369 5.84138076 6.87131294 12.77044550 18.46971194 6.20677837 11.43560823 32.90715027 6.05411895 19.58745807 12.00795619 14.25487200 26.18788719 20.99598312 29.83627891 5.90309207 24.85711161 26.73535283 36.66458511 24.36003748 13.58028253 19.40537771 12.75901413 12.70455011 9.77733866 14.21570397 24.30927912 34.51305643 10.45287259 20.79876137 87.89528910 10.69325638 23.57 10.16 21.16 21.00 8.93 21.29 8.90 8.96 6.40 2.32 2.32 2.32 132.6580 715.0362 48.4476 20.6345 5540.6268 17.3023 190.7999 5.2377 1758.5931 54.5064 209.7998 4.7637 17108.4634 5.5955 804.1784 1.2413 784.5178 122.0312 211.7448 4.7188 156.4159 607.5664 32.0229 31.2154 1761.4041 54.4097 208.1200 4.8022 796.0713 120.2065 212.0955 4.7126 1136.7105 84.2519 224.5798 4.4503 248.5770 383.2004 65.2070 15.3183 2608.0090 36.7508 68.2655 14.6422 132.0485 719.2814 48.4917 20.6157 871 715 123411.1 122971 131613.6 31728.9 123777.7 122070.3 3996.76 1632.45 421.91 2792.09 13434.09 1703.25 9.63 0.82 0.45 6.28 0.11 8.208 86.841 186.609 184.347 21.313 162.561 569.955 639.088 282211175400 91835961470 98528.8 3243345.2 2.49517747 4.44657946 2.08224201 3.86610818 3.75427246 3.69217634 0.35801557 3.87790275 2.60907817 5.89775848 3.81245542 1.13998687 2.26641607 1.74074161 2.21146965 2.58714175 1.51779914 2.88348198 5.13171148 0.85797596 3.05001688 9.55909538 1.06679213 10.65793942 7.28826714 7.39245987 14.28984642 11.26242161 18.86129189 1.8559593 14.89605904 14.55769348 22.52552795 14.77719498 8.27814293 10.03829002 7.94083786 6.90602303 5.43870592 6.95270681 13.01374149 17.31370163 6.0604167 11.53966141 32.70154953 6.04430914 19.56475658 12.86835003 14.53046799 29.68590546 21.8167572 31.20059776 5.88382483 25.86055183 26.62909508 36.66526794 24.68585587 13.31200027 17.70001793 13.04496479 12.56767082 9.48287773 14.97535801 24.55788994 33.74198151 12.08592796 21.05384445 77.70675659 10.87410069 23.12 10.43 21.57 21.09 8.97 20.60 8.98 9.65 6.74 2.34 2.32 2.28 132.2719 717.5936 48.3264 20.686 5540.517 17.3019 191.0971 5.2296 1756.5569 54.5546 206.969 4.829 17047.639 5.6159 804.7528 1.2404 786.8905 121.6067 211.7729 4.718 156.4283 607.5735 31.9795 31.2575 1759.0746 54.4859 209.1955 4.7775 797.4124 120.0373 212.1305 4.7117 1136.6439 84.2491 225.4047 4.4341 249.4983 382.5561 65.0079 15.3653 2596.0961 36.9146 68.2636 14.6426 132.4219 717.9791 48.332 20.6837 OpenBenchmarking.org
Xmrig Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is setup to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: KawPow - Hash Count: 1M a b 30K 60K 90K 120K 150K SE +/- 87.00, N = 3 123558.6 123411.1 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: Monero - Hash Count: 1M a b 30K 60K 90K 120K 150K SE +/- 404.54, N = 3 123352.8 122971.0 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: Wownero - Hash Count: 1M a b 30K 60K 90K 120K 150K SE +/- 621.69, N = 3 131141.9 131613.6 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: GhostRider - Hash Count: 1M a b 7K 14K 21K 28K 35K SE +/- 24.02, N = 3 31859.7 31728.9 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: CryptoNight-Heavy - Hash Count: 1M a b 30K 60K 90K 120K 150K SE +/- 33.09, N = 3 123041.6 123777.7 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: CryptoNight-Femto UPX2 - Hash Count: 1M a b 30K 60K 90K 120K 150K SE +/- 220.87, N = 3 123199.0 122070.3 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
Java SciMark This test runs the Java version of SciMark 2, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This benchmark is made up of Fast Foruier Transform, Jacobi Successive Over-relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Mflops, More Is Better Java SciMark 2.2 Computational Test: Composite a b 900 1800 2700 3600 4500 SE +/- 6.24, N = 3 3984.62 3996.76
WebP2 Image Encode This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficienct lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MP/s, More Is Better WebP2 Image Encode 20220823 Encode Settings: Default a b 3 6 9 12 15 SE +/- 0.08, N = 3 9.48 9.63 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 4 - Input: Bosphorus 4K a b 2 4 6 8 10 SE +/- 0.041, N = 3 8.248 8.208 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 8 - Input: Bosphorus 4K a b 20 40 60 80 100 SE +/- 0.17, N = 3 86.43 86.84 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 12 - Input: Bosphorus 4K a b 40 80 120 160 200 SE +/- 1.43, N = 3 178.91 186.61 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 13 - Input: Bosphorus 4K a b 40 80 120 160 200 SE +/- 1.61, N = 15 176.67 184.35 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 4 - Input: Bosphorus 1080p a b 5 10 15 20 25 SE +/- 0.13, N = 3 21.42 21.31 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 8 - Input: Bosphorus 1080p a b 40 80 120 160 200 SE +/- 1.87, N = 3 165.10 162.56 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 12 - Input: Bosphorus 1080p a b 120 240 360 480 600 SE +/- 1.39, N = 3 571.88 569.96 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 13 - Input: Bosphorus 1080p a b 140 280 420 560 700 SE +/- 8.75, N = 3 635.81 639.09 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenSSL OpenSSL is an open-source toolkit that implements SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. The system/openssl test profiles relies on benchmarking the system/OS-supplied openssl binary rather than the pts/openssl test profile that uses the locally-built OpenSSL for benchmarking. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org byte/s, More Is Better OpenSSL Algorithm: SHA256 a b 60000M 120000M 180000M 240000M 300000M SE +/- 548972949.20, N = 3 281869895760 282211175400 1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)
OpenBenchmarking.org byte/s, More Is Better OpenSSL Algorithm: SHA512 a b 20000M 40000M 60000M 80000M 100000M SE +/- 191332047.54, N = 3 91630925473 91835961470 1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)
OpenBenchmarking.org sign/s, More Is Better OpenSSL Algorithm: RSA4096 a b 20K 40K 60K 80K 100K SE +/- 53.45, N = 3 98622.0 98528.8 1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)
OpenBenchmarking.org verify/s, More Is Better OpenSSL Algorithm: RSA4096 a b 700K 1400K 2100K 2800K 3500K SE +/- 1292.47, N = 3 3244390.3 3243345.2 1. OpenSSL 3.0.10 1 Aug 2023 (Library: OpenSSL 3.0.10 1 Aug 2023)
Algorithm: ChaCha20
a: The test run did not produce a result.
b: The test run did not produce a result.
Algorithm: AES-128-GCM
a: The test run did not produce a result. E: 4097A6F7B77F0000:error:1C800066:Provider routines:ossl_gcm_stream_update:cipher operation failed:../providers/implementations/ciphers/ciphercommon_gcm.c:320:
b: The test run did not produce a result. E: 40B7EFA3BE7F0000:error:1C800066:Provider routines:ossl_gcm_stream_update:cipher operation failed:../providers/implementations/ciphers/ciphercommon_gcm.c:320:
Algorithm: AES-256-GCM
a: The test run did not produce a result. E: 40270E64087F0000:error:1C800066:Provider routines:ossl_gcm_stream_update:cipher operation failed:../providers/implementations/ciphers/ciphercommon_gcm.c:320:
b: The test run did not produce a result. E: 408712FE017F0000:error:1C800066:Provider routines:ossl_gcm_stream_update:cipher operation failed:../providers/implementations/ciphers/ciphercommon_gcm.c:320:
Algorithm: ChaCha20-Poly1305
a: The test run did not produce a result.
b: The test run did not produce a result.
Apache Spark TPC-H This is a benchmark of Apache Spark using TPC-H data-set. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks the Apache Spark in a single-system configuration using spark-submit. The test makes use of https://github.com/ssavvides/tpch-spark/ for facilitating the TPC-H benchmark. Learn more via the OpenBenchmarking.org test page.
PyTorch OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 1 - Model: ResNet-50 a b 6 12 18 24 30 SE +/- 0.19, N = 15 23.57 23.12 MIN: 11.38 / MAX: 25.62 MIN: 12.17 / MAX: 24.33
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 1 - Model: ResNet-152 a b 3 6 9 12 15 SE +/- 0.08, N = 3 10.16 10.43 MIN: 4.56 / MAX: 10.94 MIN: 4.8 / MAX: 11.36
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 16 - Model: ResNet-50 a b 5 10 15 20 25 SE +/- 0.25, N = 3 21.16 21.57 MIN: 12.26 / MAX: 22.24 MIN: 14.06 / MAX: 22.29
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 32 - Model: ResNet-50 a b 5 10 15 20 25 SE +/- 0.20, N = 3 21.00 21.09 MIN: 11.39 / MAX: 21.87 MIN: 13.93 / MAX: 21.71
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 16 - Model: ResNet-152 a b 3 6 9 12 15 SE +/- 0.10, N = 3 8.93 8.97 MIN: 4.75 / MAX: 9.39 MIN: 4.96 / MAX: 9.11
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 256 - Model: ResNet-50 a b 5 10 15 20 25 SE +/- 0.31, N = 3 21.29 20.60 MIN: 13.22 / MAX: 22.39 MIN: 13.89 / MAX: 21.35
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 32 - Model: ResNet-152 a b 3 6 9 12 15 SE +/- 0.10, N = 3 8.90 8.98 MIN: 4.8 / MAX: 9.23 MIN: 5.1 / MAX: 9.29
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 256 - Model: ResNet-152 a b 3 6 9 12 15 SE +/- 0.05, N = 3 8.96 9.65 MIN: 4.84 / MAX: 9.24 MIN: 4.98 / MAX: 9.85
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l a b 2 4 6 8 10 SE +/- 0.05, N = 3 6.40 6.74 MIN: 2.93 / MAX: 6.73 MIN: 3.48 / MAX: 6.89
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l a b 0.5265 1.053 1.5795 2.106 2.6325 SE +/- 0.01, N = 3 2.32 2.34 MIN: 1.77 / MAX: 2.81 MIN: 1.78 / MAX: 2.78
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l a b 0.522 1.044 1.566 2.088 2.61 SE +/- 0.01, N = 3 2.32 2.32 MIN: 1.86 / MAX: 2.8 MIN: 1.93 / MAX: 2.71
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l a b 0.522 1.044 1.566 2.088 2.61 SE +/- 0.00, N = 3 2.32 2.28 MIN: 1.83 / MAX: 2.8 MIN: 1.71 / MAX: 2.84
OpenBenchmarking.org ms/batch, Fewer Is Better Neural Magic DeepSparse 1.6 Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream a b 1.1785 2.357 3.5355 4.714 5.8925 SE +/- 0.0015, N = 3 5.2377 5.2296
a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113eJava Notes: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)Python Notes: Python 3.11.6Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 24 December 2023 15:26 by user phoronix.
b Processor: 2 x AMD EPYC 9684X 96-Core @ 2.55GHz (192 Cores / 384 Threads), Motherboard: AMD Titanite_4G (RTI1007B BIOS), Chipset: AMD Device 14a4, Memory: 1520GB, Disk: 3201GB Micron_7450_MTFDKCB3T2TFS, Graphics: ASPEED, Network: Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 23.10, Kernel: 6.5.0-13-generic (x86_64), Compiler: GCC 13.2.0, File-System: ext4, Screen Resolution: 800x600
Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113eJava Notes: OpenJDK Runtime Environment (build 11.0.21+9-post-Ubuntu-0ubuntu123.10)Python Notes: Python 3.11.6Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 25 December 2023 00:19 by user phoronix.