new ai

AMD Ryzen 9 7950X 16-Core testing with an ASUS ROG CROSSHAIR X670E HERO (1101 BIOS) and AMD Radeon RX 7900 XTX 24GB on Ubuntu 22.04 via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2304292-PTS-NEWAI12536.

new ai - System Details (runs a, b, and c used the same configuration)

Processor: AMD Ryzen 9 7950X 16-Core @ 4.50GHz (16 Cores / 32 Threads)
Motherboard: ASUS ROG CROSSHAIR X670E HERO (1101 BIOS)
Chipset: AMD Device 14d8
Memory: 32GB
Disk: 2048GB SOLIDIGM SSDPFKKW020X7 + 2000GB
Graphics: AMD Radeon RX 7900 XTX 24GB (2304/1249MHz)
Audio: AMD Device ab30
Monitor: ASUS MG28U
Network: Intel I225-V + Intel Wi-Fi 6 AX210/AX211/AX411
OS: Ubuntu 22.04
Kernel: 6.3.0-060300rc7daily20230417-generic (x86_64)
Desktop: GNOME Shell 42.5
Display Server: X Server 1.21.1.3 + Wayland
OpenGL: 4.6 Mesa 23.2.0-devel (git-f6fb189 2023-04-18 jammy-oibaf-ppa) (LLVM 15.0.7 DRM 3.52)
Vulkan: 1.3.246
Compiler: GCC 11.3.0
File-System: ext4
Screen Resolution: 3840x2160

Kernel Details: Transparent Huge Pages: madvise
Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Processor Details: Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled) - CPU Microcode: 0xa601203
Python Details: Python 3.10.9
Security Details: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
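
The Processor Details note above (scaling governor, boost state, CPU microcode) reflects values exposed through standard Linux sysfs/procfs files. The following is a minimal sketch, assuming those standard paths rather than whatever the Phoronix Test Suite itself reads; the global boost toggle in particular is specific to the acpi-cpufreq driver named above.

    from pathlib import Path

    # Read the driver and scaling governor for CPU 0 (all CPUs normally report the same values).
    driver = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver").read_text().strip()
    governor = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor").read_text().strip()
    # acpi-cpufreq exposes a global boost toggle; "1" means Boost: Enabled.
    boost = Path("/sys/devices/system/cpu/cpufreq/boost").read_text().strip()
    # The microcode revision appears once per logical CPU in /proc/cpuinfo; take the first entry.
    microcode = next(
        line.split(":", 1)[1].strip()
        for line in Path("/proc/cpuinfo").read_text().splitlines()
        if line.startswith("microcode")
    )
    print(f"Scaling Governor: {driver} {governor} "
          f"(Boost: {'Enabled' if boost == '1' else 'Disabled'}) - CPU Microcode: {microcode}")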

new ai - Result Overview

Runs a, b, and c cover the following test profiles; the per-test results are listed in full below:
- SVT-AV1 1.5: Presets 4, 8, 12, and 13 with Bosphorus 4K and Bosphorus 1080p inputs
- Faiss 1.7.4: demo_sift1M and bench_polysemous_sift1m (PQ baseline plus Polysemous 64 down to Polysemous 30)
- Intel TensorFlow 2.12: resnet50, inceptionv4, and mobilenetv1 fp32/int8 pretrained models at batch sizes 1 through 512
- AMD ZenDNN TensorFlow 2.10 with ZenDNN 4.0: tf_resnetv1_50, tf_inceptionv4, and tf_mobilenetv1 models at batch sizes 1 through 512

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 4K

SVT-AV1 1.5 - Frames Per Second, More Is Better: a: 6.111 (SE +/- 0.030, N = 3); b: 6.165 (SE +/- 0.017, N = 3); c: 6.111 (SE +/- 0.004, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
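
Each result in this report is a mean over N trial runs together with its standard error (the "SE +/- x, N = y" annotations). As a minimal sketch of that statistic, assuming the usual sample-standard-deviation definition rather than the Phoronix Test Suite's exact aggregation code:

    import math

    def mean_and_standard_error(samples):
        """Mean and standard error of the mean: sample stddev / sqrt(N)."""
        n = len(samples)
        mean = sum(samples) / n
        variance = sum((x - mean) ** 2 for x in samples) / (n - 1)  # Bessel's correction
        return mean, math.sqrt(variance / n)

    # Hypothetical per-trial FPS readings; the report publishes only the aggregate.
    fps_trials = [6.08, 6.11, 6.14]
    mean, se = mean_and_standard_error(fps_trials)
    print(f"{mean:.3f} FPS, SE +/- {se:.3f}, N = {len(fps_trials)}")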

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 4K

SVT-AV1 1.5 - Frames Per Second, More Is Better: a: 75.89 (SE +/- 0.39, N = 3); b: 75.81 (SE +/- 0.11, N = 3); c: 75.61 (SE +/- 0.56, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 4K

SVT-AV1 1.5 - Frames Per Second, More Is Better: a: 210.91 (SE +/- 1.35, N = 3); b: 213.40 (SE +/- 0.58, N = 3); c: 210.66 (SE +/- 0.55, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 4K

SVT-AV1 1.5 - Frames Per Second, More Is Better: a: 206.52 (SE +/- 1.23, N = 3); b: 207.37 (SE +/- 0.95, N = 3); c: 206.94 (SE +/- 0.70, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 4 - Input: Bosphorus 1080p

SVT-AV1 1.5 - Frames Per Second, More Is Better: a: 14.17 (SE +/- 0.02, N = 3); b: 14.09 (SE +/- 0.08, N = 3); c: 14.23 (SE +/- 0.05, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 8 - Input: Bosphorus 1080p

SVT-AV1 1.5 - Frames Per Second, More Is Better: a: 120.00 (SE +/- 0.60, N = 3); b: 120.31 (SE +/- 0.75, N = 3); c: 118.00 (SE +/- 0.63, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 12 - Input: Bosphorus 1080p

SVT-AV1 1.5 - Frames Per Second, More Is Better: a: 716.46 (SE +/- 5.15, N = 3); b: 711.29 (SE +/- 8.08, N = 3); c: 710.85 (SE +/- 6.31, N = 3)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

SVT-AV1

Encoder Mode: Preset 13 - Input: Bosphorus 1080p

SVT-AV1 1.5 - Frames Per Second, More Is Better: a: 698.57 (SE +/- 6.99, N = 15); b: 699.65 (SE +/- 7.24, N = 3); c: 698.01 (SE +/- 5.00, N = 15)
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Faiss

Test: demo_sift1M

Faiss 1.7.4 - Seconds, Fewer Is Better: a: 64.01 (SE +/- 0.03, N = 3); b: 63.90 (SE +/- 0.02, N = 3); c: 64.06 (SE +/- 0.13, N = 3)
1. (F9X) gfortran options: -O2 -frecursive -m64 -fopenmp -msse3 -mssse3 -msse4.1 -mavx -mavx2 -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Faiss

Test: bench_polysemous_sift1m - PQ baseline

Faiss 1.7.4 - ms per query, Fewer Is Better: a: 2.624 (SE +/- 0.005, N = 3); b: 2.613 (SE +/- 0.004, N = 3); c: 2.627 (SE +/- 0.015, N = 3)
1. (F9X) gfortran options: -O2 -frecursive -m64 -fopenmp -msse3 -mssse3 -msse4.1 -mavx -mavx2 -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Faiss

Test: bench_polysemous_sift1m - Polysemous 64

Faiss 1.7.4 - ms per query, Fewer Is Better: a: 4.127 (SE +/- 0.002, N = 3); b: 4.091 (SE +/- 0.008, N = 3); c: 4.117 (SE +/- 0.024, N = 3)
1. (F9X) gfortran options: -O2 -frecursive -m64 -fopenmp -msse3 -mssse3 -msse4.1 -mavx -mavx2 -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Faiss

Test: bench_polysemous_sift1m - Polysemous 62

Faiss 1.7.4 - ms per query, Fewer Is Better: a: 3.466 (SE +/- 0.002, N = 3); b: 3.449 (SE +/- 0.002, N = 3); c: 3.468 (SE +/- 0.021, N = 3)
1. (F9X) gfortran options: -O2 -frecursive -m64 -fopenmp -msse3 -mssse3 -msse4.1 -mavx -mavx2 -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Faiss

Test: bench_polysemous_sift1m - Polysemous 58

Faiss 1.7.4 - ms per query, Fewer Is Better: a: 2.132 (SE +/- 0.001, N = 3); b: 2.126 (SE +/- 0.006, N = 3); c: 2.134 (SE +/- 0.013, N = 3)
1. (F9X) gfortran options: -O2 -frecursive -m64 -fopenmp -msse3 -mssse3 -msse4.1 -mavx -mavx2 -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Faiss

Test: bench_polysemous_sift1m - Polysemous 54

Faiss 1.7.4 - ms per query, Fewer Is Better: a: 1.284 (SE +/- 0.000, N = 3); b: 1.278 (SE +/- 0.005, N = 3); c: 1.285 (SE +/- 0.008, N = 3)
1. (F9X) gfortran options: -O2 -frecursive -m64 -fopenmp -msse3 -mssse3 -msse4.1 -mavx -mavx2 -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Faiss

Test: bench_polysemous_sift1m - Polysemous 50

Faiss 1.7.4 - ms per query, Fewer Is Better: a: 0.830 (SE +/- 0.001, N = 3); b: 0.825 (SE +/- 0.002, N = 3); c: 0.831 (SE +/- 0.005, N = 3)
1. (F9X) gfortran options: -O2 -frecursive -m64 -fopenmp -msse3 -mssse3 -msse4.1 -mavx -mavx2 -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Faiss

Test: bench_polysemous_sift1m - Polysemous 46

Faiss 1.7.4 - ms per query, Fewer Is Better: a: 0.612 (SE +/- 0.000, N = 3); b: 0.610 (SE +/- 0.003, N = 3); c: 0.612 (SE +/- 0.004, N = 3)
1. (F9X) gfortran options: -O2 -frecursive -m64 -fopenmp -msse3 -mssse3 -msse4.1 -mavx -mavx2 -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Faiss

Test: bench_polysemous_sift1m - Polysemous 42

Faiss 1.7.4 - ms per query, Fewer Is Better: a: 0.525 (SE +/- 0.001, N = 3); b: 0.522 (SE +/- 0.003, N = 3); c: 0.525 (SE +/- 0.004, N = 3)
1. (F9X) gfortran options: -O2 -frecursive -m64 -fopenmp -msse3 -mssse3 -msse4.1 -mavx -mavx2 -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Faiss

Test: bench_polysemous_sift1m - Polysemous 38

Faiss 1.7.4 - ms per query, Fewer Is Better: a: 0.495 (SE +/- 0.000, N = 3); b: 0.494 (SE +/- 0.003, N = 3); c: 0.496 (SE +/- 0.003, N = 3)
1. (F9X) gfortran options: -O2 -frecursive -m64 -fopenmp -msse3 -mssse3 -msse4.1 -mavx -mavx2 -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Faiss

Test: bench_polysemous_sift1m - Polysemous 34

Faiss 1.7.4 - ms per query, Fewer Is Better: a: 0.485 (SE +/- 0.000, N = 3); b: 0.485 (SE +/- 0.003, N = 3); c: 0.486 (SE +/- 0.003, N = 3)
1. (F9X) gfortran options: -O2 -frecursive -m64 -fopenmp -msse3 -mssse3 -msse4.1 -mavx -mavx2 -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Faiss

Test: bench_polysemous_sift1m - Polysemous 30

Faiss 1.7.4 - ms per query, Fewer Is Better: a: 0.482 (SE +/- 0.001, N = 3); b: 0.481 (SE +/- 0.003, N = 3); c: 0.483 (SE +/- 0.003, N = 3)
1. (F9X) gfortran options: -O2 -frecursive -m64 -fopenmp -msse3 -mssse3 -msse4.1 -mavx -mavx2 -fno-tree-vectorize -lm -lpthread -lgfortran -lc
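
The bench_polysemous_sift1m results above compare an exhaustive product-quantizer (PQ) search against polysemous filtering, where candidate codes farther than a Hamming threshold (64 down to 30) are skipped; lower thresholds prune more aggressively and run faster, as the per-query timings show. Below is a minimal sketch of that comparison with the Faiss Python API, assuming the 1.7-era bindings and using random vectors in place of the SIFT1M dataset.

    import numpy as np
    import faiss

    d, nb, nq = 128, 100_000, 1_000                 # SIFT-like dimensionality, synthetic data
    rng = np.random.default_rng(0)
    xb = rng.random((nb, d), dtype=np.float32)      # database vectors
    xq = rng.random((nq, d), dtype=np.float32)      # query vectors

    index = faiss.IndexPQ(d, 16, 8)                 # 16 sub-quantizers, 8 bits each
    index.do_polysemous_training = True             # also optimize codes for Hamming comparisons
    index.train(xb)
    index.add(xb)

    # PQ baseline: asymmetric-distance search over every stored code.
    index.search_type = faiss.IndexPQ.ST_PQ
    D, I = index.search(xq, 10)

    # Polysemous search: discard candidates whose code Hamming distance exceeds the threshold.
    index.search_type = faiss.IndexPQ.ST_polysemous
    index.polysemous_ht = 54                        # one of the thresholds benchmarked above
    D, I = index.search(xq, 10)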

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 139.05 (SE +/- 0.67, N = 3); b: 139.52 (SE +/- 0.56, N = 3); c: 135.89 (SE +/- 1.27, N = 3)

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - ms, Fewer Is Better: a: 7.192 (SE +/- 0.034, N = 3); b: 7.168 (SE +/- 0.029, N = 3); c: 7.360 (SE +/- 0.068, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 547.78 (SE +/- 5.71, N = 3); b: 503.50 (SE +/- 5.31, N = 3); c: 523.17 (SE +/- 11.17, N = 15)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - ms, Fewer Is Better: a: 1.826 (SE +/- 0.019, N = 3); b: 1.986 (SE +/- 0.021, N = 3); c: 1.924 (SE +/- 0.041, N = 15)
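
For the batch-size-1 entries, the images/sec and ms figures are two views of the same measurement: latency is roughly 1000 * batch_size / throughput. A quick check against run a above:

    def latency_ms(images_per_sec, batch_size=1):
        """Approximate per-batch latency in milliseconds from a throughput figure."""
        return 1000.0 * batch_size / images_per_sec

    # resnet50_int8, batch size 1, run a: 547.78 images/sec -> ~1.826 ms (reported: 1.826 ms).
    print(f"{latency_ms(547.78):.3f} ms")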

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 16

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 264.85 (SE +/- 3.82, N = 3); b: 267.53 (SE +/- 1.19, N = 3); c: 267.66 (SE +/- 0.86, N = 3)

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 32

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 252.89 (SE +/- 0.79, N = 3); b: 253.60 (SE +/- 0.40, N = 3); c: 252.68 (SE +/- 0.53, N = 3)

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 64

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 247.81 (SE +/- 0.80, N = 3); b: 247.96 (SE +/- 0.54, N = 3); c: 248.70 (SE +/- 0.58, N = 3)

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 96

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 245.03 (SE +/- 0.71, N = 3); b: 246.19 (SE +/- 1.07, N = 3); c: 245.61 (SE +/- 0.56, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 16

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 800.10 (SE +/- 3.01, N = 3); b: 798.92 (SE +/- 2.79, N = 3); c: 793.97 (SE +/- 4.43, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 32

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 824.03 (SE +/- 2.87, N = 3); b: 825.34 (SE +/- 1.36, N = 3); c: 830.17 (SE +/- 2.90, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 64

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 830.21 (SE +/- 1.51, N = 3); b: 837.08 (SE +/- 0.81, N = 3); c: 833.73 (SE +/- 2.49, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 96

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 803.35 (SE +/- 1.57, N = 3); b: 803.69 (SE +/- 1.99, N = 3); c: 805.52 (SE +/- 0.36, N = 3)

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 256

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 247.46 (SE +/- 0.19, N = 3); b: 247.09 (SE +/- 0.16, N = 3); c: 247.45 (SE +/- 0.23, N = 3)

Intel TensorFlow

Model: resnet50_fp32_pretrained_model - Batch Size: 512

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 249.62 (SE +/- 0.07, N = 3); b: 249.92 (SE +/- 0.12, N = 3); c: 249.76 (SE +/- 0.10, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 256

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 779.33 (SE +/- 1.40, N = 3); b: 779.24 (SE +/- 2.42, N = 3); c: 780.45 (SE +/- 0.79, N = 3)

Intel TensorFlow

Model: resnet50_int8_pretrained_model - Batch Size: 512

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 774.18 (SE +/- 0.87, N = 3); b: 775.26 (SE +/- 0.47, N = 3); c: 775.56 (SE +/- 0.23, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 61.65 (SE +/- 0.31, N = 3); b: 61.00 (SE +/- 0.59, N = 3); c: 57.42 (SE +/- 1.97, N = 15)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - ms, Fewer Is Better: a: 17.07 (SE +/- 0.04, N = 3); b: 17.30 (SE +/- 0.25, N = 3); c: 17.20 (SE +/- 0.03, N = 15)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 104.96 (SE +/- 0.54, N = 3); b: 105.50 (SE +/- 0.36, N = 3); c: 99.00 (SE +/- 3.79, N = 12)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - ms, Fewer Is Better: a: 10.56 (SE +/- 0.06, N = 3); b: 10.35 (SE +/- 0.05, N = 3); c: 10.45 (SE +/- 0.06, N = 12)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 1591.73 (SE +/- 2.41, N = 3); b: 1573.54 (SE +/- 3.66, N = 3); c: 1606.41 (SE +/- 0.80, N = 3)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 1

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 4525.90 (SE +/- 2.14, N = 3); b: 4539.90 (SE +/- 2.97, N = 3); c: 4528.32 (SE +/- 1.54, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 16

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 83.63 (SE +/- 0.15, N = 3); b: 83.25 (SE +/- 0.74, N = 3); c: 82.65 (SE +/- 0.63, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 32

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 80.42 (SE +/- 0.10, N = 3); b: 80.56 (SE +/- 0.26, N = 3); c: 80.28 (SE +/- 0.41, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 64

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 79.15 (SE +/- 0.16, N = 3); b: 79.50 (SE +/- 0.10, N = 3); c: 79.00 (SE +/- 0.18, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 96

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 79.05 (SE +/- 0.14, N = 3); b: 79.44 (SE +/- 0.09, N = 3); c: 78.91 (SE +/- 0.16, N = 3)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 16

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 311.78 (SE +/- 5.23, N = 15); b: 319.25 (SE +/- 4.04, N = 15); c: 312.28 (SE +/- 5.26, N = 12)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 32

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 312.25 (SE +/- 3.42, N = 3); b: 315.79 (SE +/- 0.69, N = 3); c: 311.68 (SE +/- 3.93, N = 3)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 64

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 309.86 (SE +/- 0.71, N = 3); b: 309.26 (SE +/- 2.44, N = 3); c: 314.14 (SE +/- 1.33, N = 3)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 96

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 300.91 (SE +/- 2.02, N = 3); b: 303.13 (SE +/- 1.90, N = 3); c: 297.39 (SE +/- 1.86, N = 3)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 16

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 1107.68 (SE +/- 0.39, N = 3); b: 1108.83 (SE +/- 0.41, N = 3); c: 1107.45 (SE +/- 0.62, N = 3)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 32

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 1047.32 (SE +/- 0.85, N = 3); b: 1047.98 (SE +/- 1.16, N = 3); c: 1048.20 (SE +/- 0.84, N = 3)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 64

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 952.20 (SE +/- 0.93, N = 3); b: 951.88 (SE +/- 0.28, N = 3); c: 951.85 (SE +/- 0.12, N = 3)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 96

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 914.91 (SE +/- 0.59, N = 3); b: 915.83 (SE +/- 0.57, N = 3); c: 915.29 (SE +/- 0.63, N = 3)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 16

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 4671.25 (SE +/- 13.68, N = 3); b: 4689.94 (SE +/- 10.16, N = 3); c: 4667.75 (SE +/- 30.59, N = 3)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 32

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 5094.81 (SE +/- 5.24, N = 3); b: 5078.87 (SE +/- 21.06, N = 3); c: 5092.76 (SE +/- 10.06, N = 3)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 64

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 4327.78 (SE +/- 0.17, N = 3); b: 4330.68 (SE +/- 2.46, N = 3); c: 4334.43 (SE +/- 4.71, N = 3)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 96

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 4293.10 (SE +/- 8.18, N = 3); b: 4285.76 (SE +/- 1.91, N = 3); c: 4278.58 (SE +/- 8.05, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 256

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 80.36 (SE +/- 0.15, N = 3); b: 80.40 (SE +/- 0.09, N = 3); c: 80.20 (SE +/- 0.04, N = 3)

Intel TensorFlow

Model: inceptionv4_fp32_pretrained_model - Batch Size: 512

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 81.04 (SE +/- 0.06, N = 3); b: 81.07 (SE +/- 0.11, N = 3); c: 80.95 (SE +/- 0.06, N = 3)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 256

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 288.45 (SE +/- 0.52, N = 3); b: 289.43 (SE +/- 0.61, N = 3); c: 287.48 (SE +/- 0.38, N = 3)

Intel TensorFlow

Model: inceptionv4_int8_pretrained_model - Batch Size: 512

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 290.66 (SE +/- 0.57, N = 3); b: 288.25 (SE +/- 0.10, N = 3); c: 288.82 (SE +/- 0.94, N = 3)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 256

Intel TensorFlow 2.12 - images/sec, More Is Better: a: 867.02 (SE +/- 0.47, N = 3); b: 867.13 (SE +/- 0.10, N = 3); c: 866.92 (SE +/- 1.00, N = 3)

Intel TensorFlow

Model: mobilenetv1_fp32_pretrained_model - Batch Size: 512

Intel TensorFlow 2.12 - images/sec, More Is Better (runs a and c only): a: 856.50 (SE +/- 0.07, N = 3); c: 855.61 (SE +/- 0.15, N = 3)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 256

Intel TensorFlow 2.12 - images/sec, More Is Better (runs a and c only): a: 3482.01 (SE +/- 2.91, N = 3); c: 3482.91 (SE +/- 3.88, N = 3)

Intel TensorFlow

Model: mobilenetv1_int8_pretrained_model - Batch Size: 512

Intel TensorFlow 2.12 - images/sec, More Is Better (runs a and c only): a: 3263.94 (SE +/- 1.83, N = 3); c: 3272.66 (SE +/- 1.99, N = 3)
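
The Intel TensorFlow figures above are inference throughput for pretrained frozen graphs at each batch size. The sketch below shows the general shape of such a measurement, assuming a hand-rolled timing loop; the graph file name, tensor names, and step counts are placeholders, not the benchmark's actual Model Zoo harness.

    import time
    import numpy as np
    import tensorflow as tf

    tf.compat.v1.disable_eager_execution()           # use graph/session execution for the frozen graph

    GRAPH_PB = "resnet50_fp32_pretrained_model.pb"   # placeholder path to a frozen graph
    INPUT_TENSOR, OUTPUT_TENSOR = "input:0", "predict:0"  # placeholder tensor names
    BATCH, WARMUP, STEPS = 32, 10, 50

    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(GRAPH_PB, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.compat.v1.import_graph_def(graph_def, name="")

    images = np.random.rand(BATCH, 224, 224, 3).astype(np.float32)
    with tf.compat.v1.Session(graph=graph) as sess:
        for _ in range(WARMUP):                      # warm-up iterations, excluded from timing
            sess.run(OUTPUT_TENSOR, feed_dict={INPUT_TENSOR: images})
        start = time.time()
        for _ in range(STEPS):
            sess.run(OUTPUT_TENSOR, feed_dict={INPUT_TENSOR: images})
        elapsed = time.time() - start

    print(f"{BATCH * STEPS / elapsed:.2f} images/sec at batch size {BATCH}")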

AMD ZenDNN TensorFlow

Model: tf_resnetv1_50_imagenet_224_224_6.97G_1.1_Z4.0 - Batch Size: 1

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - ms, Fewer Is Better (runs a and c only): a: 8.070 (SE +/- 0.015, N = 3); c: 8.122 (SE +/- 0.051, N = 3)

AMD ZenDNN TensorFlow

Model: tf_inceptionv4_imagenet_299_299_24.55G_1.1_Z4.0 - Batch Size: 1

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - ms, Fewer Is Better (runs a and c only): a: 23.16 (SE +/- 0.01, N = 3); c: 23.13 (SE +/- 0.02, N = 3)

AMD ZenDNN TensorFlow

Model: tf_resnetv1_50_imagenet_224_224_6.97G_1.1_Z4.0 - Batch Size: 16

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 241.34 (SE +/- 0.17, N = 3); c: 240.72 (SE +/- 0.45, N = 3)

AMD ZenDNN TensorFlow

Model: tf_resnetv1_50_imagenet_224_224_6.97G_1.1_Z4.0 - Batch Size: 32

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 228.35 (SE +/- 0.13, N = 3); c: 226.86 (SE +/- 1.39, N = 3)

AMD ZenDNN TensorFlow

Model: tf_resnetv1_50_imagenet_224_224_6.97G_1.1_Z4.0 - Batch Size: 64

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 225.81 (SE +/- 0.32, N = 3); c: 226.03 (SE +/- 1.23, N = 3)

AMD ZenDNN TensorFlow

Model: tf_resnetv1_50_imagenet_224_224_6.97G_1.1_Z4.0 - Batch Size: 96

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 228.11 (SE +/- 1.65, N = 3); c: 230.14 (SE +/- 0.11, N = 3)

AMD ZenDNN TensorFlow

Model: tf_inceptionv4_imagenet_299_299_24.55G_1.1_Z4.0 - Batch Size: 16

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 62.78 (SE +/- 0.01, N = 3); c: 62.79 (SE +/- 0.03, N = 3)

AMD ZenDNN TensorFlow

Model: tf_inceptionv4_imagenet_299_299_24.55G_1.1_Z4.0 - Batch Size: 32

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 62.82 (SE +/- 0.11, N = 3); c: 62.91 (SE +/- 0.06, N = 3)

AMD ZenDNN TensorFlow

Model: tf_inceptionv4_imagenet_299_299_24.55G_1.1_Z4.0 - Batch Size: 64

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 64.63 (SE +/- 0.12, N = 3); c: 64.61 (SE +/- 0.07, N = 3)

AMD ZenDNN TensorFlow

Model: tf_inceptionv4_imagenet_299_299_24.55G_1.1_Z4.0 - Batch Size: 96

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 65.86 (SE +/- 0.06, N = 3); c: 65.59 (SE +/- 0.19, N = 3)

AMD ZenDNN TensorFlow

Model: tf_resnetv1_50_imagenet_224_224_6.97G_1.1_Z4.0 - Batch Size: 256

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 238.11 (SE +/- 0.86, N = 3); c: 239.23 (SE +/- 0.13, N = 3)

AMD ZenDNN TensorFlow

Model: tf_resnetv1_50_imagenet_224_224_6.97G_1.1_Z4.0 - Batch Size: 512

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 242.65 (SE +/- 0.55, N = 3); c: 243.29 (SE +/- 0.21, N = 3)

AMD ZenDNN TensorFlow

Model: tf_inceptionv4_imagenet_299_299_24.55G_1.1_Z4.0 - Batch Size: 256

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 68.24 (SE +/- 0.29, N = 3); c: 68.42 (SE +/- 0.04, N = 3)

AMD ZenDNN TensorFlow

Model: tf_inceptionv4_imagenet_299_299_24.55G_1.1_Z4.0 - Batch Size: 512

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 70.36 (SE +/- 0.01, N = 3); c: 70.06 (SE +/- 0.08, N = 3)

AMD ZenDNN TensorFlow

Model: tf_mobilenetv1_1.0_imagenet_224_224_1.14G_1.1_Z4.0 - Batch Size: 1

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - ms, Fewer Is Better (runs a and c only): a: 1.586 (SE +/- 0.021, N = 15); c: 1.573 (SE +/- 0.020, N = 15)

AMD ZenDNN TensorFlow

Model: tf_mobilenetv1_1.0_imagenet_224_224_1.14G_1.1_Z4.0 - Batch Size: 16

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 953.78 (SE +/- 1.60, N = 3); c: 953.58 (SE +/- 1.06, N = 3)

AMD ZenDNN TensorFlow

Model: tf_mobilenetv1_1.0_imagenet_224_224_1.14G_1.1_Z4.0 - Batch Size: 32

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 770.81 (SE +/- 0.44, N = 3); c: 768.31 (SE +/- 0.39, N = 3)

AMD ZenDNN TensorFlow

Model: tf_mobilenetv1_1.0_imagenet_224_224_1.14G_1.1_Z4.0 - Batch Size: 64

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 715.78 (SE +/- 2.05, N = 3); c: 717.29 (SE +/- 1.36, N = 3)

AMD ZenDNN TensorFlow

Model: tf_mobilenetv1_1.0_imagenet_224_224_1.14G_1.1_Z4.0 - Batch Size: 96

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 691.01 (SE +/- 0.52, N = 3); c: 690.86 (SE +/- 0.56, N = 3)

AMD ZenDNN TensorFlow

Model: tf_mobilenetv1_1.0_imagenet_224_224_1.14G_1.1_Z4.0 - Batch Size: 256

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 650.17 (SE +/- 0.43, N = 3); c: 650.04 (SE +/- 0.46, N = 3)

AMD ZenDNN TensorFlow

Model: tf_mobilenetv1_1.0_imagenet_224_224_1.14G_1.1_Z4.0 - Batch Size: 512

AMD ZenDNN TensorFlow 2.10 ZenDNN 4.0 - images/sec, More Is Better (runs a and c only): a: 643.44 (SE +/- 0.23, N = 3); c: 642.22 (SE +/- 0.86, N = 3)
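
The AMD ZenDNN TensorFlow comparisons above (and the last few Intel TensorFlow entries) only include results for runs a and c. A quick way to judge how close the two completed runs are is the percentage spread between them, computed directly from the table values; a minimal sketch:

    def spread_pct(a, c):
        """Absolute difference between two runs as a percentage of their mean."""
        return abs(a - c) / ((a + c) / 2.0) * 100.0

    # tf_mobilenetv1_1.0_imagenet_224_224_1.14G_1.1_Z4.0, batch size 32, runs a and c above.
    print(f"{spread_pct(770.81, 768.31):.2f}% run-to-run spread")   # ~0.32%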


Phoronix Test Suite v10.8.4