8490h april

2 x Intel Xeon Platinum 8490H benchmarked on a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) with ASPEED graphics, running Ubuntu 22.04, tested via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2304136-NE-8490HAPRI45&grw&rdt.

Processor: 2 x Intel Xeon Platinum 8490H @ 3.50GHz (120 Cores / 240 Threads)
Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS)
Chipset: Intel Device 1bce
Memory: 16 x 64 GB 4800MT/s Samsung M321R8GA0BB0-CQKEG
Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007 + 960GB INTEL SSDSC2KG96
Graphics: ASPEED
Monitor: VGA HDMI
Network: 4 x Intel E810-C for QSFP + 2 x Intel X710 for 10GBASE-T
OS: Ubuntu 22.04
Kernel: 6.2.0-060200rc7daily20230208-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3
Vulkan: 1.2.204
Compiler: GCC 11.3.0 + Clang 14.0.0-1ubuntu1
File-System: ext4
Screen Resolution: 1920x1080

The same system configuration applies to all five runs (a, b, c, d, e).

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: intel_pstate performance (EPP: performance) - CPU Microcode: 0x2b0000c0
Python Details: Python 3.10.6
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Benchmarks run across configurations a through e: TensorFlow 2.12 (AlexNet, GoogLeNet, and ResNet-50 at batch sizes 16 through 512), oneDNN 3.1, Blender 3.5, VVenC 1.8, srsRAN Project 23.3, nginx 1.23.2, and Apache HTTP Server 2.4.56. The per-test results are listed in the sections below; the full side-by-side summary table is available at the OpenBenchmarking.org result link above.

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12, images/sec, more is better (SE +/- 3.12, N = 3):
a: 372.88 | b: 386.55 | c: 370.67 | d: 391.88 | e: 386.34
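For reference, the "SE +/- x, N = y" figures attached to each result are the standard error of the mean across the y recorded runs. A minimal Python sketch of that calculation, using hypothetical run values rather than data from this export:

    import statistics

    def standard_error(samples):
        # Standard error of the mean: sample standard deviation / sqrt(N).
        return statistics.stdev(samples) / len(samples) ** 0.5

    # Hypothetical per-run values for one configuration (not taken from this export).
    runs = [370.1, 373.5, 375.0]
    print(f"mean = {statistics.mean(runs):.2f}, "
          f"SE +/- {standard_error(runs):.2f}, N = {len(runs)}")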

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

TensorFlow 2.12, images/sec, more is better (SE +/- 5.22, N = 6):
a: 531.68 | b: 556.34 | c: 557.68 | d: 536.63 | e: 564.79

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

TensorFlow 2.12, images/sec, more is better (SE +/- 6.00, N = 3):
a: 743.73 | b: 741.87 | c: 751.67 | d: 739.02 | e: 745.33

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

TensorFlow 2.12, images/sec, more is better (SE +/- 5.13, N = 3):
a: 1091.42 | b: 1077.57 | c: 1063.47 | d: 1071.62 | e: 1062.06

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

TensorFlow 2.12, images/sec, more is better (SE +/- 2.69, N = 3):
a: 1227.69 | b: 1231.85 | c: 1214.36 | d: 1225.54 | e: 1230.30

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12, images/sec, more is better (SE +/- 1.60, N = 3):
a: 173.64 | b: 185.78 | c: 176.84 | d: 184.80 | e: 185.22

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12, images/sec, more is better (SE +/- 0.45, N = 3):
a: 64.28 | b: 64.31 | c: 63.78 | d: 63.97 | e: 64.96

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

TensorFlow 2.12, images/sec, more is better (SE +/- 0.97, N = 3):
a: 257.33 | b: 267.02 | c: 265.08 | d: 249.74 | e: 270.31

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

TensorFlow 2.12, images/sec, more is better (SE +/- 0.33, N = 3):
a: 83.13 | b: 84.42 | c: 84.98 | d: 84.17 | e: 83.45

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

TensorFlow 2.12, images/sec, more is better (SE +/- 2.89, N = 3):
a: 348.00 | b: 342.26 | c: 334.11 | d: 346.11 | e: 346.20

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

TensorFlow 2.12, images/sec, more is better (SE +/- 0.44, N = 3):
a: 103.48 | b: 102.21 | c: 104.52 | d: 103.14 | e: 102.87

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

TensorFlow 2.12, images/sec, more is better (SE +/- 3.52, N = 3):
a: 444.17 | b: 442.93 | c: 437.97 | d: 441.29 | e: 441.44

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

TensorFlow 2.12, images/sec, more is better (SE +/- 0.05, N = 3):
a: 130.44 | b: 128.89 | c: 127.52 | d: 128.80 | e: 128.23

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

TensorFlow 2.12, images/sec, more is better (SE +/- 4.06, N = 3):
a: 472.26 | b: 465.31 | c: 467.33 | d: 462.37 | e: 469.14

TensorFlow

Device: CPU - Batch Size: 512 - Model: ResNet-50

TensorFlow 2.12, images/sec, more is better (SE +/- 1.30, N = 3):
a: 135.88 | b: 134.34 | c: 135.22 | d: 134.76 | e: 133.90

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.17695, N = 15):
a: 3.56759 (MIN: 3.02) | b: 3.05000 (MIN: 1.6) | c: 3.63585 (MIN: 3.11) | d: 3.50485 (MIN: 2.9) | e: 3.44677 (MIN: 3.04)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.03552, N = 3):
a: 2.49757 (MIN: 2.05) | b: 2.67699 (MIN: 2.13) | c: 2.52848 (MIN: 2.05) | d: 2.37479 (MIN: 1.92) | e: 2.80869 (MIN: 2.24)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.18778, N = 12):
a: 5.19379 (MIN: 3.98) | b: 4.62478 (MIN: 2.46) | c: 5.30769 (MIN: 3.99) | d: 4.87548 (MIN: 3.78) | e: 5.34755 (MIN: 4.19)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.002492, N = 3):
a: 0.872476 (MIN: 0.67) | b: 0.978428 (MIN: 0.77) | c: 1.153320 (MIN: 0.92) | d: 0.981361 (MIN: 0.78) | e: 0.989308 (MIN: 0.78)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.08402, N = 15):
a: 5.88472 (MIN: 4.65) | b: 5.38734 (MIN: 3.77) | c: 5.42071 (MIN: 4.25) | d: 4.97718 (MIN: 3.92) | e: 5.55800 (MIN: 4.37)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.03679, N = 15):
a: 3.16779 (MIN: 2.49) | b: 3.04638 (MIN: 2.17) | c: 2.91381 (MIN: 2.28) | d: 2.83754 (MIN: 2.21) | e: 3.02188 (MIN: 2.44)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.000551, N = 3):
a: 0.408711 (MIN: 0.36) | b: 0.402983 (MIN: 0.36) | c: 0.405677 (MIN: 0.36) | d: 0.408416 (MIN: 0.36) | e: 0.400325 (MIN: 0.36)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.05, N = 3):
a: 14.27 (MIN: 12.67) | b: 14.63 (MIN: 12.83) | c: 14.54 (MIN: 12.86) | d: 14.49 (MIN: 12.72) | e: 14.22 (MIN: 12.7)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.002808, N = 3):
a: 0.724413 (MIN: 0.66) | b: 0.718746 (MIN: 0.66) | c: 0.716419 (MIN: 0.66) | d: 0.711306 (MIN: 0.65) | e: 0.712248 (MIN: 0.66)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.019200, N = 15):
a: 0.239960 (MIN: 0.18) | b: 0.314427 (MIN: 0.17) | c: 0.433523 (MIN: 0.18) | d: 0.296152 (MIN: 0.18) | e: 0.305503 (MIN: 0.18)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.003330, N = 15):
a: 0.434658 (MIN: 0.33) | b: 0.410029 (MIN: 0.31) | c: 0.397435 (MIN: 0.32) | d: 0.391957 (MIN: 0.32) | e: 0.413735 (MIN: 0.33)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.001233, N = 3):
a: 0.228971 (MIN: 0.2) | b: 0.225197 (MIN: 0.2) | c: 0.219341 (MIN: 0.2) | d: 0.225742 (MIN: 0.21) | e: 0.219348 (MIN: 0.21)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 38.79, N = 12):
a: 1216.99 (MIN: 1149.61) | b: 1155.77 (MIN: 781.24) | c: 1209.39 (MIN: 1153.33) | d: 1182.32 (MIN: 1123.65) | e: 1120.64 (MIN: 1089.24)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 10.27, N = 15):
a: 881.23 (MIN: 840.73) | b: 840.96 (MIN: 756.78) | c: 852.58 (MIN: 818.16) | d: 731.10 (MIN: 715.12) | e: 848.65 (MIN: 823.74)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 23.27, N = 14):
a: 1304.57 (MIN: 1219.16) | b: 1232.87 (MIN: 1015.69) | c: 1081.70 (MIN: 1010) | d: 1205.38 (MIN: 1177.67) | e: 1200.19 (MIN: 1170.19)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.002565, N = 3):
a: 0.228741 (MIN: 0.19) | b: 0.223142 (MIN: 0.19) | c: 0.217420 (MIN: 0.19) | d: 0.219490 (MIN: 0.2) | e: 0.222020 (MIN: 0.2)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.003398, N = 11):
a: 0.470336 (MIN: 0.36) | b: 0.451393 (MIN: 0.34) | c: 0.457893 (MIN: 0.35) | d: 0.440410 (MIN: 0.35) | e: 0.446232 (MIN: 0.35)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 0.002656, N = 3):
a: 0.464269 (MIN: 0.38) | b: 0.466045 (MIN: 0.38) | c: 0.457996 (MIN: 0.39) | d: 0.462589 (MIN: 0.37) | e: 0.453885 (MIN: 0.4)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 14.33, N = 15):
a: 873.15 (MIN: 841.29) | b: 832.34 (MIN: 744.45) | c: 844.36 (MIN: 819.84) | d: 832.57 (MIN: 807.52) | e: 845.73 (MIN: 832.31)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 17.42, N = 15):
a: 1205.29 (MIN: 1166.28) | b: 1184.14 (MIN: 1007.85) | c: 1228.77 (MIN: 1195.53) | d: 1184.12 (MIN: 1154.2) | e: 1112.04 (MIN: 1093.03)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1, ms, fewer is better (SE +/- 9.64, N = 5):
a: 861.15 (MIN: 828.35) | b: 878.49 (MIN: 833.55) | c: 904.27 (MIN: 846.06) | d: 888.73 (MIN: 874.25) | e: 818.44 (MIN: 804.26)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 3.5, seconds, fewer is better (SE +/- 0.15, N = 4):
a: 14.03 | b: 14.20 | c: 14.21 | d: 14.04 | e: 14.30

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 3.5, seconds, fewer is better (SE +/- 0.30, N = 3):
a: 36.50 | b: 36.66 | c: 36.79 | d: 36.31 | e: 36.36

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 3.5, seconds, fewer is better (SE +/- 0.09, N = 3):
a: 19.36 | b: 19.70 | c: 19.94 | d: 20.13 | e: 19.54

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 3.5, seconds, fewer is better (SE +/- 0.81, N = 3):
a: 147.25 | b: 147.73 | c: 146.59 | d: 147.18 | e: 148.11

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 3.5, seconds, fewer is better (SE +/- 0.10, N = 3):
a: 48.81 | b: 47.84 | c: 47.65 | d: 47.73 | e: 47.43

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

VVenC 1.8, frames per second, more is better (SE +/- 0.037, N = 3):
a: 6.308 | b: 6.332 | c: 6.314 | d: 6.443 | e: 6.388
(CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

VVenC 1.8, frames per second, more is better (SE +/- 0.068, N = 13):
a: 10.065 | b: 10.055 | c: 9.967 | d: 9.956 | e: 10.067
(CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 1080p - Video Preset: Fast

VVenC 1.8, frames per second, more is better (SE +/- 0.04, N = 3):
a: 17.15 | b: 17.40 | c: 17.24 | d: 17.21 | e: 16.79
(CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 1080p - Video Preset: Faster

VVenC 1.8, frames per second, more is better (SE +/- 0.17, N = 3):
a: 28.66 | b: 30.99 | c: 30.21 | d: 27.62 | e: 30.37
(CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

srsRAN Project

Test: Downlink Processor Benchmark

srsRAN Project 23.3, Mbps, more is better (SE +/- 2.03, N = 3):
a: 326.5 (MIN: 71.2 / MAX: 731.7) | b: 324.2 (MIN: 68.9 / MAX: 734.8) | c: 326.7 (MIN: 72.5 / MAX: 731.1) | d: 320.8 (MIN: 71.3 / MAX: 723.1) | e: 324.1 (MIN: 69.9 / MAX: 729.7)
(CXX) g++ options: -O3 -fno-trapping-math -fno-math-errno -march=native -mfma -lgtest

srsRAN Project

Test: PUSCH Processor Benchmark, Throughput Total

srsRAN Project 23.3, Mbps, more is better (SE +/- 87.81, N = 9):
a: 7122.4 (MIN: 4599.2 / MAX: 12734.9) | b: 6898.6 (MIN: 2932.3 / MAX: 13017.6) | c: 6547.4 (MIN: 3614.7 / MAX: 12722) | d: 7079.5 (MIN: 4942.3 / MAX: 12824.3) | e: 6774.5 (MIN: 3650.8 / MAX: 12618.4)
(CXX) g++ options: -O3 -fno-trapping-math -fno-math-errno -march=native -mfma -lgtest

srsRAN Project

Test: PUSCH Processor Benchmark, Throughput Thread

srsRAN Project 23.3, Mbps, more is better (SE +/- 0.22, N = 3):
a: 29.9 (MIN: 19.5 / MAX: 52.7) | b: 29.8 (MIN: 18.3 / MAX: 53.3) | c: 29.7 (MIN: 18.8 / MAX: 52.3) | d: 28.8 (MIN: 15.8 / MAX: 52.3) | e: 28.9 (MIN: 18.9 / MAX: 52.7)
(CXX) g++ options: -O3 -fno-trapping-math -fno-math-errno -march=native -mfma -lgtest
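As a rough sanity check (an assumption about how the two PUSCH metrics relate, not something stated in this result), the total PUSCH throughput is approximately the per-thread figure multiplied by the 240 hardware threads of this dual-8490H system. In Python, using run a's values from above:

    # Rough check: per-thread PUSCH throughput x hardware threads vs. reported total (run a).
    threads = 240            # 2 x Xeon Platinum 8490H: 120 cores / 240 threads
    per_thread_mbps = 29.9   # PUSCH Processor Benchmark, Throughput Thread (run a)
    total_mbps = 7122.4      # PUSCH Processor Benchmark, Throughput Total (run a)
    print(per_thread_mbps * threads, total_mbps)  # ~7176 vs. the reported 7122.4 Mbps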

nginx

Connections: 500

nginx 1.23.2, requests per second, more is better (SE +/- 1323.62, N = 3):
a: 250533.37 | b: 246156.11 | c: 246619.54 | d: 247581.64 | e: 248416.85
(CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Apache HTTP Server

Concurrent Requests: 500

Apache HTTP Server 2.4.56, requests per second, more is better (SE +/- 98.05, N = 3):
a: 80395.59 | b: 83834.81 | c: 77777.03 | d: 84694.76 | e: 85357.84
(CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2


Phoronix Test Suite v10.8.5