8490h april

2 x Intel Xeon Platinum 8490H testing with a Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS) and ASPEED on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2304136-NE-8490HAPRI45&grt&rdt&rro.

Processor: 2 x Intel Xeon Platinum 8490H @ 3.50GHz (120 Cores / 240 Threads)
Motherboard: Quanta Cloud S6Q-MB-MPS (3A10.uh BIOS)
Chipset: Intel Device 1bce
Memory: 16 x 64 GB 4800MT/s Samsung M321R8GA0BB0-CQKEG
Disk: 2 x 1920GB SAMSUNG MZWLJ1T9HBJR-00007 + 960GB INTEL SSDSC2KG96
Graphics: ASPEED
Monitor: VGA HDMI
Network: 4 x Intel E810-C for QSFP + 2 x Intel X710 for 10GBASE-T
OS: Ubuntu 22.04
Kernel: 6.2.0-060200rc7daily20230208-generic (x86_64)
Desktop: GNOME Shell 42.2
Display Server: X Server 1.21.1.3
Vulkan: 1.2.204
Compiler: GCC 11.3.0 + Clang 14.0.0-1ubuntu1
File-System: ext4
Screen Resolution: 1920x1080

All five runs (a, b, c, d, e) used this same hardware/software configuration.

Kernel Details: Transparent Huge Pages: madvise

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate performance (EPP: performance); CPU Microcode: 0x2b0000c0

Python Details: Python 3.10.6

Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling PBRSB-eIBRS: SW sequence + srbds: Not affected + tsx_async_abort: Not affected

Results Summary

Benchmarks run across configurations a through e: Apache HTTP Server (500 concurrent requests), Blender 3.5 (BMW27, Classroom, Fishy Cat, Barbershop, and Pabellon Barcelona blend files; CPU-Only), nginx (500 connections), oneDNN 3.1 (IP Shapes, Convolution/Deconvolution Batch Shapes, and Recurrent Neural Network harnesses across f32, u8s8f32, and bf16bf16bf16 data types), srsRAN Project 23.3 (Downlink and PUSCH Processor benchmarks), TensorFlow 2.12 (AlexNet, GoogLeNet, and ResNet-50 at batch sizes 16 through 512), and VVenC 1.8 (Bosphorus 4K and 1080p at Fast and Faster presets). The individual results are detailed below.

Apache HTTP Server

Concurrent Requests: 500

Apache HTTP Server 2.4.56 - Concurrent Requests: 500
Requests Per Second, More Is Better
a: 80395.59, b: 83834.81, c: 77777.03, d: 84694.76, e: 85357.84
SE +/- 98.05, N = 3
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 3.5 - Blend File: BMW27 - Compute: CPU-Only
Seconds, Fewer Is Better
a: 14.03, b: 14.20, c: 14.21, d: 14.04, e: 14.30
SE +/- 0.15, N = 4

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 3.5 - Blend File: Classroom - Compute: CPU-Only
Seconds, Fewer Is Better
a: 36.50, b: 36.66, c: 36.79, d: 36.31, e: 36.36
SE +/- 0.30, N = 3

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 3.5 - Blend File: Fishy Cat - Compute: CPU-Only
Seconds, Fewer Is Better
a: 19.36, b: 19.70, c: 19.94, d: 20.13, e: 19.54
SE +/- 0.09, N = 3

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 3.5 - Blend File: Barbershop - Compute: CPU-Only
Seconds, Fewer Is Better
a: 147.25, b: 147.73, c: 146.59, d: 147.18, e: 148.11
SE +/- 0.81, N = 3

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 3.5 - Blend File: Pabellon Barcelona - Compute: CPU-Only
Seconds, Fewer Is Better
a: 48.81, b: 47.84, c: 47.65, d: 47.73, e: 47.43
SE +/- 0.10, N = 3

nginx

Connections: 500

nginx 1.23.2 - Connections: 500
Requests Per Second, More Is Better
a: 250533.37, b: 246156.11, c: 246619.54, d: 247581.64, e: 248416.85
SE +/- 1323.62, N = 3
1. (CC) gcc options: -lluajit-5.1 -lm -lssl -lcrypto -lpthread -ldl -std=c99 -O2

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 3.1 - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
ms, Fewer Is Better
a: 3.56759 (MIN: 3.02), b: 3.05000 (MIN: 1.6), c: 3.63585 (MIN: 3.11), d: 3.50485 (MIN: 2.9), e: 3.44677 (MIN: 3.04)
SE +/- 0.17695, N = 15
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 3.1 - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
ms, Fewer Is Better
a: 2.49757 (MIN: 2.05), b: 2.67699 (MIN: 2.13), c: 2.52848 (MIN: 2.05), d: 2.37479 (MIN: 1.92), e: 2.80869 (MIN: 2.24)
SE +/- 0.03552, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better
a: 5.19379 (MIN: 3.98), b: 4.62478 (MIN: 2.46), c: 5.30769 (MIN: 3.99), d: 4.87548 (MIN: 3.78), e: 5.34755 (MIN: 4.19)
SE +/- 0.18778, N = 12
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better
a: 0.872476 (MIN: 0.67), b: 0.978428 (MIN: 0.77), c: 1.153320 (MIN: 0.92), d: 0.981361 (MIN: 0.78), e: 0.989308 (MIN: 0.78)
SE +/- 0.002492, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better
a: 5.88472 (MIN: 4.65), b: 5.38734 (MIN: 3.77), c: 5.42071 (MIN: 4.25), d: 4.97718 (MIN: 3.92), e: 5.55800 (MIN: 4.37)
SE +/- 0.08402, N = 15
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better
a: 3.16779 (MIN: 2.49), b: 3.04638 (MIN: 2.17), c: 2.91381 (MIN: 2.28), d: 2.83754 (MIN: 2.21), e: 3.02188 (MIN: 2.44)
SE +/- 0.03679, N = 15
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 3.1 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
ms, Fewer Is Better
a: 0.408711 (MIN: 0.36), b: 0.402983 (MIN: 0.36), c: 0.405677 (MIN: 0.36), d: 0.408416 (MIN: 0.36), e: 0.400325 (MIN: 0.36)
SE +/- 0.000551, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 3.1 - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
ms, Fewer Is Better
a: 14.27 (MIN: 12.67), b: 14.63 (MIN: 12.83), c: 14.54 (MIN: 12.86), d: 14.49 (MIN: 12.72), e: 14.22 (MIN: 12.7)
SE +/- 0.05, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 3.1 - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
ms, Fewer Is Better
a: 0.724413 (MIN: 0.66), b: 0.718746 (MIN: 0.66), c: 0.716419 (MIN: 0.66), d: 0.711306 (MIN: 0.65), e: 0.712248 (MIN: 0.66)
SE +/- 0.002808, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better
a: 0.239960 (MIN: 0.18), b: 0.314427 (MIN: 0.17), c: 0.433523 (MIN: 0.18), d: 0.296152 (MIN: 0.18), e: 0.305503 (MIN: 0.18)
SE +/- 0.019200, N = 15
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better
a: 0.434658 (MIN: 0.33), b: 0.410029 (MIN: 0.31), c: 0.397435 (MIN: 0.32), d: 0.391957 (MIN: 0.32), e: 0.413735 (MIN: 0.33)
SE +/- 0.003330, N = 15
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better
a: 0.228971 (MIN: 0.2), b: 0.225197 (MIN: 0.2), c: 0.219341 (MIN: 0.2), d: 0.225742 (MIN: 0.21), e: 0.219348 (MIN: 0.21)
SE +/- 0.001233, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
ms, Fewer Is Better
a: 1216.99 (MIN: 1149.61), b: 1155.77 (MIN: 781.24), c: 1209.39 (MIN: 1153.33), d: 1182.32 (MIN: 1123.65), e: 1120.64 (MIN: 1089.24)
SE +/- 38.79, N = 12
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 3.1 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
ms, Fewer Is Better
a: 881.23 (MIN: 840.73), b: 840.96 (MIN: 756.78), c: 852.58 (MIN: 818.16), d: 731.10 (MIN: 715.12), e: 848.65 (MIN: 823.74)
SE +/- 10.27, N = 15
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better
a: 1304.57 (MIN: 1219.16), b: 1232.87 (MIN: 1015.69), c: 1081.70 (MIN: 1010), d: 1205.38 (MIN: 1177.67), e: 1200.19 (MIN: 1170.19)
SE +/- 23.27, N = 14
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better
a: 0.228741 (MIN: 0.19), b: 0.223142 (MIN: 0.19), c: 0.217420 (MIN: 0.19), d: 0.219490 (MIN: 0.2), e: 0.222020 (MIN: 0.2)
SE +/- 0.002565, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better
a: 0.470336 (MIN: 0.36), b: 0.451393 (MIN: 0.34), c: 0.457893 (MIN: 0.35), d: 0.440410 (MIN: 0.35), e: 0.446232 (MIN: 0.35)
SE +/- 0.003398, N = 11
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better
a: 0.464269 (MIN: 0.38), b: 0.466045 (MIN: 0.38), c: 0.457996 (MIN: 0.39), d: 0.462589 (MIN: 0.37), e: 0.453885 (MIN: 0.4)
SE +/- 0.002656, N = 3
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1 - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better
a: 873.15 (MIN: 841.29), b: 832.34 (MIN: 744.45), c: 844.36 (MIN: 819.84), d: 832.57 (MIN: 807.52), e: 845.73 (MIN: 832.31)
SE +/- 14.33, N = 15
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better
a: 1205.29 (MIN: 1166.28), b: 1184.14 (MIN: 1007.85), c: 1228.77 (MIN: 1195.53), d: 1184.12 (MIN: 1154.2), e: 1112.04 (MIN: 1093.03)
SE +/- 17.42, N = 15
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1 - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
ms, Fewer Is Better
a: 861.15 (MIN: 828.35), b: 878.49 (MIN: 833.55), c: 904.27 (MIN: 846.06), d: 888.73 (MIN: 874.25), e: 818.44 (MIN: 804.26)
SE +/- 9.64, N = 5
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl -lpthread

srsRAN Project

Test: Downlink Processor Benchmark

srsRAN Project 23.3 - Test: Downlink Processor Benchmark
Mbps, More Is Better
a: 326.5 (MIN: 71.2 / MAX: 731.7), b: 324.2 (MIN: 68.9 / MAX: 734.8), c: 326.7 (MIN: 72.5 / MAX: 731.1), d: 320.8 (MIN: 71.3 / MAX: 723.1), e: 324.1 (MIN: 69.9 / MAX: 729.7)
SE +/- 2.03, N = 3
1. (CXX) g++ options: -O3 -fno-trapping-math -fno-math-errno -march=native -mfma -lgtest

srsRAN Project

Test: PUSCH Processor Benchmark, Throughput Total

srsRAN Project 23.3 - Test: PUSCH Processor Benchmark, Throughput Total
Mbps, More Is Better
a: 7122.4 (MIN: 4599.2 / MAX: 12734.9), b: 6898.6 (MIN: 2932.3 / MAX: 13017.6), c: 6547.4 (MIN: 3614.7 / MAX: 12722), d: 7079.5 (MIN: 4942.3 / MAX: 12824.3), e: 6774.5 (MIN: 3650.8 / MAX: 12618.4)
SE +/- 87.81, N = 9
1. (CXX) g++ options: -O3 -fno-trapping-math -fno-math-errno -march=native -mfma -lgtest

srsRAN Project

Test: PUSCH Processor Benchmark, Throughput Thread

srsRAN Project 23.3 - Test: PUSCH Processor Benchmark, Throughput Thread
Mbps, More Is Better
a: 29.9 (MIN: 19.5 / MAX: 52.7), b: 29.8 (MIN: 18.3 / MAX: 53.3), c: 29.7 (MIN: 18.8 / MAX: 52.3), d: 28.8 (MIN: 15.8 / MAX: 52.3), e: 28.9 (MIN: 18.9 / MAX: 52.7)
SE +/- 0.22, N = 3
1. (CXX) g++ options: -O3 -fno-trapping-math -fno-math-errno -march=native -mfma -lgtest

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: AlexNet
images/sec, More Is Better
a: 372.88, b: 386.55, c: 370.67, d: 391.88, e: 386.34
SE +/- 3.12, N = 3

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

TensorFlow 2.12 - Device: CPU - Batch Size: 32 - Model: AlexNet
images/sec, More Is Better
a: 531.68, b: 556.34, c: 557.68, d: 536.63, e: 564.79
SE +/- 5.22, N = 6

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

TensorFlow 2.12 - Device: CPU - Batch Size: 64 - Model: AlexNet
images/sec, More Is Better
a: 743.73, b: 741.87, c: 751.67, d: 739.02, e: 745.33
SE +/- 6.00, N = 3

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

TensorFlow 2.12 - Device: CPU - Batch Size: 256 - Model: AlexNet
images/sec, More Is Better
a: 1091.42, b: 1077.57, c: 1063.47, d: 1071.62, e: 1062.06
SE +/- 5.13, N = 3

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

TensorFlow 2.12 - Device: CPU - Batch Size: 512 - Model: AlexNet
images/sec, More Is Better
a: 1227.69, b: 1231.85, c: 1214.36, d: 1225.54, e: 1230.30
SE +/- 2.69, N = 3

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: GoogLeNet
images/sec, More Is Better
a: 173.64, b: 185.78, c: 176.84, d: 184.80, e: 185.22
SE +/- 1.60, N = 3

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.12 - Device: CPU - Batch Size: 16 - Model: ResNet-50
images/sec, More Is Better
a: 64.28, b: 64.31, c: 63.78, d: 63.97, e: 64.96
SE +/- 0.45, N = 3

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

TensorFlow 2.12 - Device: CPU - Batch Size: 32 - Model: GoogLeNet
images/sec, More Is Better
a: 257.33, b: 267.02, c: 265.08, d: 249.74, e: 270.31
SE +/- 0.97, N = 3

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

TensorFlow 2.12 - Device: CPU - Batch Size: 32 - Model: ResNet-50
images/sec, More Is Better
a: 83.13, b: 84.42, c: 84.98, d: 84.17, e: 83.45
SE +/- 0.33, N = 3

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

TensorFlow 2.12 - Device: CPU - Batch Size: 64 - Model: GoogLeNet
images/sec, More Is Better
a: 348.00, b: 342.26, c: 334.11, d: 346.11, e: 346.20
SE +/- 2.89, N = 3

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

TensorFlow 2.12 - Device: CPU - Batch Size: 64 - Model: ResNet-50
images/sec, More Is Better
a: 103.48, b: 102.21, c: 104.52, d: 103.14, e: 102.87
SE +/- 0.44, N = 3

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

TensorFlow 2.12 - Device: CPU - Batch Size: 256 - Model: GoogLeNet
images/sec, More Is Better
a: 444.17, b: 442.93, c: 437.97, d: 441.29, e: 441.44
SE +/- 3.52, N = 3

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

TensorFlow 2.12 - Device: CPU - Batch Size: 256 - Model: ResNet-50
images/sec, More Is Better
a: 130.44, b: 128.89, c: 127.52, d: 128.80, e: 128.23
SE +/- 0.05, N = 3

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

TensorFlow 2.12 - Device: CPU - Batch Size: 512 - Model: GoogLeNet
images/sec, More Is Better
a: 472.26, b: 465.31, c: 467.33, d: 462.37, e: 469.14
SE +/- 4.06, N = 3

TensorFlow

Device: CPU - Batch Size: 512 - Model: ResNet-50

TensorFlow 2.12 - Device: CPU - Batch Size: 512 - Model: ResNet-50
images/sec, More Is Better
a: 135.88, b: 134.34, c: 135.22, d: 134.76, e: 133.90
SE +/- 1.30, N = 3

VVenC

Video Input: Bosphorus 4K - Video Preset: Fast

VVenC 1.8 - Video Input: Bosphorus 4K - Video Preset: Fast
Frames Per Second, More Is Better
a: 6.308, b: 6.332, c: 6.314, d: 6.443, e: 6.388
SE +/- 0.037, N = 3
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 4K - Video Preset: Faster

VVenC 1.8 - Video Input: Bosphorus 4K - Video Preset: Faster
Frames Per Second, More Is Better
a: 10.065, b: 10.055, c: 9.967, d: 9.956, e: 10.067
SE +/- 0.068, N = 13
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 1080p - Video Preset: Fast

VVenC 1.8 - Video Input: Bosphorus 1080p - Video Preset: Fast
Frames Per Second, More Is Better
a: 17.15, b: 17.40, c: 17.24, d: 17.21, e: 16.79
SE +/- 0.04, N = 3
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto

VVenC

Video Input: Bosphorus 1080p - Video Preset: Faster

VVenC 1.8 - Video Input: Bosphorus 1080p - Video Preset: Faster
Frames Per Second, More Is Better
a: 28.66, b: 30.99, c: 30.21, d: 27.62, e: 30.37
SE +/- 0.17, N = 3
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto


Phoronix Test Suite v10.8.5