9684x-march

2 x AMD EPYC 9684X 96-Core testing with an AMD Titanite_4G (RTI1007B BIOS) motherboard and ASPEED graphics on Ubuntu 23.10 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2403274-NE-9684XMARC65&sor&grs.
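
For anyone wanting to compare their own system against this result, the Phoronix Test Suite can re-run a published OpenBenchmarking.org comparison from its result ID. The following Python sketch is illustrative only and not part of the original export; it simply shells out to the phoronix-test-suite CLI with the ID from the URL above and assumes the CLI is installed and on the PATH.

import subprocess

RESULT_ID = "2403274-NE-9684XMARC65"  # taken from the openbenchmarking.org URL above

# The PTS CLI accepts a published result ID and offers to run the same test
# selection locally for a side-by-side comparison; this call is interactive.
subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)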

9684x-march - System Details (shared by the PRE, a, and b runs)

Processor: 2 x AMD EPYC 9684X 96-Core @ 2.55GHz (192 Cores / 384 Threads)
Motherboard: AMD Titanite_4G (RTI1007B BIOS)
Chipset: AMD Device 14a4
Memory: 1520GB
Disk: 3201GB Micron_7450_MTFDKCB3T2TFS + 257GB Flash Drive
Graphics: ASPEED
Network: Broadcom NetXtreme BCM5720 PCIe
OS: Ubuntu 23.10
Kernel: 6.5.0-25-generic (x86_64)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 640x480

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-XYspKM/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq performance (Boost: Enabled) - CPU Microcode: 0xa10113e
Python Details: Python 3.11.6
Security Details: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of Safe RET + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
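
The tuning reported above (acpi-cpufreq performance governor with boost enabled, transparent huge pages set to madvise) is exposed through standard sysfs files and can be spot-checked before benchmarking. The sketch below is an illustrative addition, not part of the original result; the paths assume the acpi-cpufreq driver noted here and may differ under other cpufreq drivers.

from pathlib import Path

def read_sysfs(path):
    # Return the stripped file contents, or "n/a" if the file is absent.
    p = Path(path)
    return p.read_text().strip() if p.exists() else "n/a"

# Paths assume the acpi-cpufreq driver reported in the Processor Details above.
print("scaling governor:     ", read_sysfs("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"))
print("boost (1 = enabled):  ", read_sysfs("/sys/devices/system/cpu/cpufreq/boost"))
print("transparent hugepages:", read_sysfs("/sys/kernel/mm/transparent_hugepage/enabled"))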

9684x-march - Result Overview (runs: PRE, a, b). Per-test results for the three runs are listed in the sections that follow.
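
The exported URL above sorts the comparison by greatest result spread. As an illustration only (not part of the original result), the sketch below shows one way to quantify that spread for a single test, using the TensorFlow CPU / Batch Size 16 / ResNet-50 values from the section below.

# Values copied from the TensorFlow 2.16.1 (CPU, Batch Size 16, ResNet-50) section below.
results = {"PRE": 39.68, "a": 41.26, "b": 35.92}  # images/sec, more is better

best = max(results.values())
worst = min(results.values())
spread_pct = (best - worst) / worst * 100.0  # roughly 14.9% for this test
print(f"spread: {spread_pct:.1f}% (best {best}, worst {worst})")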

TensorFlow

Device: CPU - Batch Size: 16 - Model: ResNet-50

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
a: 41.26
PRE: 39.68
b: 35.92

TensorFlow

Device: CPU - Batch Size: 32 - Model: ResNet-50

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 66.68
PRE: 65.88
a: 60.25

TensorFlow

Device: CPU - Batch Size: 32 - Model: AlexNet

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
SE +/- 6.62, N = 15
b: 461.60
a: 436.25
PRE: 424.06

TensorFlow

Device: CPU - Batch Size: 32 - Model: GoogLeNet

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 190.74
PRE: 185.16
a: 176.36

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-152

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.10, N = 3
PRE: 9.47 (MIN: 5.17 / MAX: 9.87)
a: 9.33 (MIN: 4.69 / MAX: 9.66)
b: 8.81 (MIN: 4.87 / MAX: 8.97)

TensorFlow

Device: CPU - Batch Size: 1 - Model: GoogLeNet

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.14, N = 15
b: 13.52
a: 13.20
PRE: 12.58

TensorFlow

Device: CPU - Batch Size: 64 - Model: GoogLeNet

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
PRE: 275.34
a: 273.68
b: 256.87

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-152

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.08, N = 3
a: 9.34 (MIN: 4.74 / MAX: 9.74)
b: 9.28 (MIN: 5.31 / MAX: 9.48)
PRE: 8.72 (MIN: 5.23 / MAX: 9.06)

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-152

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.10, N = 15
b: 10.60 (MIN: 4.86 / MAX: 11.57)
a: 10.58 (MIN: 4.55 / MAX: 11.67)
PRE: 9.97 (MIN: 4.85 / MAX: 10.69)

TensorFlow

Device: CPU - Batch Size: 16 - Model: GoogLeNet

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 119.22
a: 114.26
PRE: 112.64

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-50

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.16, N = 3
a: 21.53 (MIN: 12.64 / MAX: 22.28)
PRE: 20.93 (MIN: 12.91 / MAX: 21.51)
b: 20.36 (MIN: 11.37 / MAX: 21.4)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-152

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.09, N = 12
PRE: 9.21 (MIN: 4.8 / MAX: 9.43)
a: 8.91 (MIN: 4.5 / MAX: 9.7)
b: 8.79 (MIN: 4.6 / MAX: 8.97)

TensorFlow

Device: CPU - Batch Size: 16 - Model: AlexNet

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
SE +/- 2.30, N = 15
a: 247.55
PRE: 242.29
b: 236.56

RocksDB

Test: Overwrite

RocksDB 9.0 - Op/s, More Is Better (OpenBenchmarking.org)
b: 439602
a: 421616
PRE: 421049
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

PyTorch

Device: CPU - Batch Size: 32 - Model: ResNet-50

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.16, N = 15
b: 21.03 (MIN: 15.23 / MAX: 21.8)
a: 20.84 (MIN: 11.24 / MAX: 22.33)
PRE: 20.19 (MIN: 11.95 / MAX: 21.04)

TensorFlow

Device: CPU - Batch Size: 1 - Model: ResNet-50

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
PRE: 4.05
b: 4.01
a: 3.90

RocksDB

Test: Read While Writing

RocksDB 9.0 - Op/s, More Is Better (OpenBenchmarking.org)
PRE: 27130363
a: 26406662
b: 26135567
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

PyTorch

Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.09, N = 3
b: 6.50 (MIN: 3.35 / MAX: 6.62)
a: 6.45 (MIN: 3.05 / MAX: 6.85)
PRE: 6.29 (MIN: 3.09 / MAX: 6.44)

PyTorch

Device: CPU - Batch Size: 64 - Model: ResNet-50

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.23, N = 3
PRE: 21.59 (MIN: 14.02 / MAX: 22.21)
a: 21.08 (MIN: 13.2 / MAX: 22.07)
b: 20.90 (MIN: 13.13 / MAX: 21.57)

TensorFlow

Device: CPU - Batch Size: 256 - Model: AlexNet

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 1656.79
PRE: 1652.23
a: 1604.52

TensorFlow

Device: CPU - Batch Size: 64 - Model: AlexNet

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
SE +/- 5.39, N = 15
PRE: 765.55
a: 749.46
b: 743.50

PyTorch

Device: CPU - Batch Size: 512 - Model: ResNet-50

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.14, N = 15
b: 21.01 (MIN: 14.13 / MAX: 21.43)
a: 21.01 (MIN: 11.92 / MAX: 22.65)
PRE: 20.43 (MIN: 13.46 / MAX: 21.1)

BRL-CAD

VGR Performance Metric

BRL-CAD 7.38.2 - VGR Performance Metric, More Is Better (OpenBenchmarking.org)
PRE: 5956612
a: 5927564
b: 5794040
1. (CXX) g++ options: -std=c++17 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lnetpbm -lregex_brl -lz_brl -lassimp -ldl -lm -ltk8.6

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-152

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.10, N = 12
a: 9.09 (MIN: 4.84 / MAX: 10.03)
PRE: 8.92 (MIN: 5.04 / MAX: 9.16)
b: 8.85 (MIN: 5.25 / MAX: 9.05)

TensorFlow

Device: CPU - Batch Size: 512 - Model: GoogLeNet

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 494.46
PRE: 493.31
a: 484.02

PyTorch

Device: CPU - Batch Size: 16 - Model: ResNet-152

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.09, N = 3
b: 9.12 (MIN: 8.99 / MAX: 9.29)
a: 9.01 (MIN: 4.81 / MAX: 9.31)
PRE: 8.93 (MIN: 8.8 / MAX: 9.04)

PyTorch

Device: CPU - Batch Size: 256 - Model: ResNet-50

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.10, N = 3
PRE: 21.20 (MIN: 12.68 / MAX: 21.88)
b: 20.85 (MIN: 12.74 / MAX: 21.39)
a: 20.77 (MIN: 12.97 / MAX: 21.67)

Blender

Blend File: Junkshop - Compute: CPU-Only

Blender 4.1 - Seconds, Fewer Is Better (OpenBenchmarking.org)
PRE: 11.40
a: 11.44
b: 11.61

TensorFlow

Device: CPU - Batch Size: 1 - Model: AlexNet

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.16, N = 15
PRE: 21.16
b: 21.01
a: 20.78

PyTorch

Device: CPU - Batch Size: 256 - Model: Efficientnet_v2_l

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.01, N = 3
a: 2.33 (MIN: 1.59 / MAX: 2.78)
b: 2.31 (MIN: 1.92 / MAX: 2.67)
PRE: 2.29 (MIN: 1.79 / MAX: 2.72)

TensorFlow

Device: CPU - Batch Size: 512 - Model: AlexNet

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 2010.60
a: 2010.56
PRE: 1980.51

RocksDB

Test: Update Random

RocksDB 9.0 - Op/s, More Is Better (OpenBenchmarking.org)
b: 427391
a: 425687
PRE: 421266
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

TensorFlow

Device: CPU - Batch Size: 64 - Model: ResNet-50

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 88.95
a: 88.93
PRE: 87.72

Blender

Blend File: Fishy Cat - Compute: CPU-Only

Blender 4.1 - Seconds, Fewer Is Better (OpenBenchmarking.org)
a: 9.85
b: 9.94
PRE: 9.96

Blender

Blend File: BMW27 - Compute: CPU-Only

Blender 4.1 - Seconds, Fewer Is Better (OpenBenchmarking.org)
b: 7.48
a: 7.55
PRE: 7.55

TensorFlow

Device: CPU - Batch Size: 256 - Model: ResNet-50

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
PRE: 119.83
a: 118.88
b: 118.77

PyTorch

Device: CPU - Batch Size: 512 - Model: Efficientnet_v2_l

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.01, N = 3
a: 2.33 (MIN: 1.58 / MAX: 2.83)
b: 2.32 (MIN: 1.79 / MAX: 2.71)
PRE: 2.31 (MIN: 1.7 / MAX: 2.84)

PyTorch

Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.01, N = 3
b: 2.33 (MIN: 1.78 / MAX: 2.77)
PRE: 2.32 (MIN: 1.9 / MAX: 2.75)
a: 2.31 (MIN: 1.53 / MAX: 2.83)

PyTorch

Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.01, N = 3
PRE: 2.33 (MIN: 1.78 / MAX: 2.8)
b: 2.32 (MIN: 1.94 / MAX: 2.8)
a: 2.31 (MIN: 1.88 / MAX: 2.74)

PyTorch

Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.01, N = 3
b: 2.35 (MIN: 1.82 / MAX: 2.76)
a: 2.33 (MIN: 1.77 / MAX: 2.9)
PRE: 2.33 (MIN: 1.76 / MAX: 2.72)

PyTorch

Device: CPU - Batch Size: 1 - Model: ResNet-50

PyTorch 2.2.1 - batches/sec, More Is Better (OpenBenchmarking.org)
SE +/- 0.20, N = 15
b: 23.24 (MIN: 13.48 / MAX: 24.22)
a: 23.20 (MIN: 12.21 / MAX: 25.13)
PRE: 23.06 (MIN: 12.95 / MAX: 24.52)

RocksDB

Test: Read Random Write Random

RocksDB 9.0 - Op/s, More Is Better (OpenBenchmarking.org)
a: 3643263
b: 3638929
PRE: 3619142
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Timed Mesa Compilation

Time To Compile

Timed Mesa Compilation 24.0 - Seconds, Fewer Is Better (OpenBenchmarking.org)
SE +/- 0.04, N = 3
PRE: 14.66
b: 14.71
a: 14.76

Blender

Blend File: Pabellon Barcelona - Compute: CPU-Only

Blender 4.1 - Seconds, Fewer Is Better (OpenBenchmarking.org)
PRE: 22.99
a: 23.10
b: 23.11

TensorFlow

Device: CPU - Batch Size: 512 - Model: ResNet-50

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 141.16
PRE: 140.59
a: 140.49

Blender

Blend File: Barbershop - Compute: CPU-Only

Blender 4.1 - Seconds, Fewer Is Better (OpenBenchmarking.org)
PRE: 67.38
b: 67.65
a: 67.66

RocksDB

Test: Random Read

RocksDB 9.0 - Op/s, More Is Better (OpenBenchmarking.org)
a: 1108892776
b: 1108469308
PRE: 1105306233
1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

TensorFlow

Device: CPU - Batch Size: 256 - Model: GoogLeNet

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 400.61
PRE: 400.03
a: 399.46

Blender

Blend File: Classroom - Compute: CPU-Only

Blender 4.1 - Seconds, Fewer Is Better (OpenBenchmarking.org)
PRE: 18.03
b: 18.04
a: 18.08

TensorFlow

Device: CPU - Batch Size: 512 - Model: VGG-16

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 135.78

TensorFlow

Device: CPU - Batch Size: 256 - Model: VGG-16

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 127.18

TensorFlow

Device: CPU - Batch Size: 64 - Model: VGG-16

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 95.91

TensorFlow

Device: CPU - Batch Size: 32 - Model: VGG-16

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 76.04

TensorFlow

Device: CPU - Batch Size: 16 - Model: VGG-16

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 60.69

TensorFlow

Device: CPU - Batch Size: 1 - Model: VGG-16

TensorFlow 2.16.1 - images/sec, More Is Better (OpenBenchmarking.org)
b: 9.39


Phoronix Test Suite v10.8.5