m600_7940hs-96gb-5600mhz-16gb-igpu-performance-tpd-2tb-sn850x-2023-09-01-2

AMD Ryzen 9 7940HS testing with a Win element M600 (SR500P03_P5C2V07 BIOS) and AMD Phoenix1 16GB on EndeavourOS rolling via the Phoronix Test Suite.

HTML result view exported from: https://openbenchmarking.org/result/2309028-NE-M6007940H04&grt.
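The runs below were orchestrated with the Phoronix Test Suite. As a hedged sketch of how a comparable re-run could be scripted (assuming phoronix-test-suite is installed, batch mode has been configured with its batch-setup command, and that the pts/ profile names listed mirror the suite names in this report; those names are assumptions, not taken from this file):

# Minimal sketch: drive phoronix-test-suite from Python to repeat the suites in this result file.
# Assumes phoronix-test-suite is on the PATH and batch mode has been configured
# (phoronix-test-suite batch-setup). Profile names below are assumptions.
import subprocess

profiles = [
    "pts/deepspeech",
    "pts/ncnn",
    "pts/numenta-nab",
    "pts/numpy",
    "pts/onednn",
    "pts/rnnoise",
    "pts/scikit-learn",
    "pts/tensorflow-lite",
]

for profile in profiles:
    # batch-benchmark runs a test non-interactively; check=True stops on the first failure.
    subprocess.run(["phoronix-test-suite", "batch-benchmark", profile], check=True)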

System Details (m600_7940hs-96gb-5600mhz-16gb-igpu-performance-tpd-2tb-sn850x-2023-09-01-2)

Processor: AMD Ryzen 9 7940HS @ 4.00GHz (8 Cores / 16 Threads)
Motherboard: Win element M600 (SR500P03_P5C2V07 BIOS)
Chipset: AMD Device 14e8
Memory: 80GB
Disk: Western Digital WD_BLACK SN850X 2000GB
Graphics: AMD Phoenix1 16GB
Audio: AMD Rembrandt Radeon HD Audio
Monitor: DELL S3422DW
Network: 2 x Realtek RTL8125 2.5GbE + Intel Wi-Fi 6 AX200
OS: EndeavourOS rolling
Kernel: 6.4.12-arch1-1 (x86_64)
Desktop: Xfce 4.18
Display Server: X Server 1.21.1.8
OpenGL: 4.6 Mesa 23.1.6-arch1.4 (LLVM 16.0.6 DRM 3.52)
Compiler: GCC 13.2.1 20230801
File-System: ext4
Screen Resolution: 3440x1440

Kernel Notes: Transparent Huge Pages: always
Compiler Notes: --disable-libssp --disable-libstdcxx-pch --disable-werror --enable-__cxa_atexit --enable-bootstrap --enable-cet=auto --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-default-ssp --enable-gnu-indirect-function --enable-gnu-unique-object --enable-languages=ada,c,c++,d,fortran,go,lto,objc,obj-c++ --enable-libstdcxx-backtrace --enable-link-serialization=1 --enable-lto --enable-multilib --enable-plugin --enable-shared --enable-threads=posix --mandir=/usr/share/man --with-build-config=bootstrap-lto --with-linker-hash-style=gnu
Processor Notes: Scaling Governor: acpi-cpufreq performance (Boost: Enabled); CPU Microcode: 0xa704101
Graphics Notes: GLAMOR; BAR1 / Visible vRAM Size: 16384 MB
Python Notes: Python 3.11.5
Security Notes: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Not affected + spec_rstack_overflow: Mitigation of safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected

Result Overview: this run covers DeepSpeech, NCNN (CPU and Vulkan GPU targets), Numenta Anomaly Benchmark, Numpy Benchmark, oneDNN, RNNoise, Scikit-Learn, TensorFlow Lite, and TNN (CPU - SqueezeNet v1.1); the per-test results are listed individually below.

DeepSpeech

Acceleration: CPU

DeepSpeech 0.6, Seconds, Fewer Is Better: 46.96 (SE +/- 0.02, N = 3)

NCNN

Target: CPU - Model: mobilenet

NCNN 20230517, ms, Fewer Is Better: 8.07 (SE +/- 0.01, N = 3; MIN: 7.82 / MAX: 11.67). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU-v2-v2 - Model: mobilenet-v2

NCNN 20230517, ms, Fewer Is Better: 2.43 (SE +/- 0.01, N = 3; MIN: 2.28 / MAX: 5.61). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU-v3-v3 - Model: mobilenet-v3

NCNN 20230517, ms, Fewer Is Better: 2.23 (SE +/- 0.01, N = 3; MIN: 2.13 / MAX: 5.1). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: shufflenet-v2

NCNN 20230517, ms, Fewer Is Better: 1.97 (SE +/- 0.00, N = 3; MIN: 1.85 / MAX: 6.58). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: mnasnet

NCNN 20230517, ms, Fewer Is Better: 2.18 (SE +/- 0.03, N = 3; MIN: 2.01 / MAX: 5.91). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: efficientnet-b0

NCNN 20230517, ms, Fewer Is Better: 3.30 (SE +/- 0.03, N = 3; MIN: 3.09 / MAX: 6.22). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: blazeface

NCNN 20230517, ms, Fewer Is Better: 0.73 (SE +/- 0.01, N = 3; MIN: 0.69 / MAX: 3.56). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: googlenet

NCNN 20230517, ms, Fewer Is Better: 6.33 (SE +/- 0.02, N = 3; MIN: 6.06 / MAX: 9.4). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: vgg16

NCNN 20230517, ms, Fewer Is Better: 31.14 (SE +/- 0.46, N = 3; MIN: 30.03 / MAX: 44.67). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: resnet18

NCNN 20230517, ms, Fewer Is Better: 4.64 (SE +/- 0.03, N = 3; MIN: 4.44 / MAX: 7.49). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: alexnet

NCNN 20230517, ms, Fewer Is Better: 4.52 (SE +/- 0.03, N = 3; MIN: 4.35 / MAX: 8.37). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: resnet50

NCNN 20230517, ms, Fewer Is Better: 10.45 (SE +/- 0.55, N = 3; MIN: 9.44 / MAX: 15.88). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: yolov4-tiny

NCNN 20230517, ms, Fewer Is Better: 13.42 (SE +/- 0.08, N = 3; MIN: 12.96 / MAX: 17.77). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: squeezenet_ssd

NCNN 20230517, ms, Fewer Is Better: 6.30 (SE +/- 0.27, N = 3; MIN: 5.74 / MAX: 10.52). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: regnety_400m

NCNN 20230517, ms, Fewer Is Better: 5.14 (SE +/- 0.01, N = 3; MIN: 4.92 / MAX: 11.22). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: vision_transformer

NCNN 20230517, ms, Fewer Is Better: 53.79 (SE +/- 0.05, N = 3; MIN: 51.57 / MAX: 61.95). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: CPU - Model: FastestDet

NCNN 20230517, ms, Fewer Is Better: 2.49 (SE +/- 0.04, N = 3; MIN: 2.33 / MAX: 5.65). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: mobilenet

NCNN 20230517, ms, Fewer Is Better: 8.24 (SE +/- 0.09, N = 3; MIN: 7.85 / MAX: 12.18). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2

NCNN 20230517, ms, Fewer Is Better: 2.47 (SE +/- 0.03, N = 3; MIN: 2.28 / MAX: 5.9). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3

NCNN 20230517, ms, Fewer Is Better: 2.28 (SE +/- 0.02, N = 3; MIN: 2.14 / MAX: 5.25). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: shufflenet-v2

NCNN 20230517, ms, Fewer Is Better: 2.01 (SE +/- 0.01, N = 3; MIN: 1.9 / MAX: 4.96). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: mnasnet

NCNN 20230517, ms, Fewer Is Better: 2.21 (SE +/- 0.02, N = 3; MIN: 2.08 / MAX: 5.13). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: efficientnet-b0

NCNN 20230517, ms, Fewer Is Better: 3.38 (SE +/- 0.06, N = 3; MIN: 3.17 / MAX: 6.42). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: blazeface

NCNN 20230517, ms, Fewer Is Better: 0.75 (SE +/- 0.01, N = 3; MIN: 0.69 / MAX: 3.66). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: googlenet

NCNN 20230517, ms, Fewer Is Better: 6.66 (SE +/- 0.07, N = 3; MIN: 6.35 / MAX: 10.74). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: vgg16

NCNN 20230517, ms, Fewer Is Better: 31.20 (SE +/- 0.02, N = 3; MIN: 30.51 / MAX: 39.52). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: resnet18

NCNN 20230517, ms, Fewer Is Better: 4.90 (SE +/- 0.03, N = 3; MIN: 4.68 / MAX: 10.25). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: alexnet

NCNN 20230517, ms, Fewer Is Better: 4.91 (SE +/- 0.10, N = 3; MIN: 4.69 / MAX: 11.39). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: resnet50

NCNN 20230517, ms, Fewer Is Better: 10.37 (SE +/- 0.02, N = 3; MIN: 9.94 / MAX: 16.03). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: yolov4-tiny

NCNN 20230517, ms, Fewer Is Better: 13.60 (SE +/- 0.06, N = 3; MIN: 13.22 / MAX: 18.63). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: squeezenet_ssd

NCNN 20230517, ms, Fewer Is Better: 6.10 (SE +/- 0.02, N = 3; MIN: 5.84 / MAX: 14.12). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: regnety_400m

NCNN 20230517, ms, Fewer Is Better: 5.22 (SE +/- 0.05, N = 3; MIN: 4.91 / MAX: 8.27). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: vision_transformer

NCNN 20230517, ms, Fewer Is Better: 54.04 (SE +/- 0.06, N = 3; MIN: 52.55 / MAX: 64.52). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN

Target: Vulkan GPU - Model: FastestDet

NCNN 20230517, ms, Fewer Is Better: 2.48 (SE +/- 0.05, N = 3; MIN: 2.34 / MAX: 6). 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

Numenta Anomaly Benchmark

Detector: KNN CAD

Numenta Anomaly Benchmark 1.1, Seconds, Fewer Is Better: 107.12 (SE +/- 0.38, N = 3)

Numenta Anomaly Benchmark

Detector: Relative Entropy

Numenta Anomaly Benchmark 1.1, Seconds, Fewer Is Better: 9.716 (SE +/- 0.090, N = 6)

Numenta Anomaly Benchmark

Detector: Windowed Gaussian

Numenta Anomaly Benchmark 1.1, Seconds, Fewer Is Better: 6.466 (SE +/- 0.012, N = 3)
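Of the NAB detectors timed here, Windowed Gaussian is the simplest: each point is scored by how far it falls from a Gaussian fitted to a sliding window of recent values. A minimal NumPy sketch of that idea follows; it is a conceptual analogue only, not NAB's implementation or its scoring pipeline, and the window size is an arbitrary choice.

# Illustrative sketch of a windowed-Gaussian anomaly score: each point is scored by its
# z-score against a Gaussian fit to the preceding window. Not the NAB implementation.
import numpy as np

def windowed_gaussian_scores(values, window=64, eps=1e-8):
    values = np.asarray(values, dtype=float)
    scores = np.zeros_like(values)
    for i in range(len(values)):
        history = values[max(0, i - window):i]
        if len(history) < 2:
            continue  # not enough history to estimate a distribution
        mu, sigma = history.mean(), history.std()
        scores[i] = abs(values[i] - mu) / (sigma + eps)
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    series = rng.normal(0.0, 1.0, 500)
    series[400] += 8.0  # injected spike
    print(np.argmax(windowed_gaussian_scores(series)))  # expected: index near 400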

Numenta Anomaly Benchmark

Detector: Earthgecko Skyline

Numenta Anomaly Benchmark 1.1, Seconds, Fewer Is Better: 86.57 (SE +/- 0.91, N = 15)

Numenta Anomaly Benchmark

Detector: Bayesian Changepoint

Numenta Anomaly Benchmark 1.1, Seconds, Fewer Is Better: 23.27 (SE +/- 0.20, N = 3)

Numenta Anomaly Benchmark

Detector: Contextual Anomaly Detector OSE

Numenta Anomaly Benchmark 1.1, Seconds, Fewer Is Better: 30.66 (SE +/- 0.17, N = 3)

Numpy Benchmark

Numpy Benchmark, Score, More Is Better: 692.94 (SE +/- 5.43, N = 3)
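The Numpy Benchmark score aggregates timings of a fixed set of NumPy kernels defined by the test profile. As an illustrative sketch of the same kind of micro-timing, with operations and sizes chosen arbitrarily rather than taken from the profile:

# Sketch of timing NumPy kernels with timeit; the operations and sizes are illustrative
# choices, not the exact workload behind the Numpy Benchmark score above.
import timeit
import numpy as np

rng = np.random.default_rng(42)
a = rng.standard_normal((1024, 1024))
b = rng.standard_normal((1024, 1024))

matmul_s = timeit.timeit(lambda: a @ b, number=20) / 20
svd_s = timeit.timeit(lambda: np.linalg.svd(a, compute_uv=False), number=5) / 5

print(f"matmul 1024x1024: {matmul_s * 1e3:.1f} ms")
print(f"singular values 1024x1024: {svd_s * 1e3:.1f} ms")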

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 4.54620 (SE +/- 0.04068, N = 3; MIN: 3.79). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 4.61966 (SE +/- 0.04248, N = 6; MIN: 4.37). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 0.720656 (SE +/- 0.006523, N = 15; MIN: 0.57). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 1.58351 (SE +/- 0.02084, N = 3; MIN: 1.39). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 1.58579 (SE +/- 0.00921, N = 3; MIN: 1.32). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 2.65992 (SE +/- 0.01990, N = 3; MIN: 2.43). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 10.53 (SE +/- 0.11, N = 3; MIN: 8.9). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 6.74907 (SE +/- 0.11025, N = 12; MIN: 4.3). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 4.50492 (SE +/- 0.13143, N = 15; MIN: 4.07). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 9.06392 (SE +/- 0.04724, N = 3; MIN: 8.73). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 0.912151 (SE +/- 0.003553, N = 3; MIN: 0.82). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 1.07871 (SE +/- 0.00135, N = 3; MIN: 0.96). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 2672.08 (SE +/- 22.64, N = 8; MIN: 2490.39). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 1436.72 (SE +/- 4.68, N = 3; MIN: 1361.41). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 2718.50 (SE +/- 9.78, N = 3; MIN: 2636.72). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 4.21333 (SE +/- 0.06060, N = 3; MIN: 3.94). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 8.55150 (SE +/- 0.03672, N = 3; MIN: 7.52). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 2.75936 (SE +/- 0.00335, N = 3; MIN: 2.51). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 1409.84 (SE +/- 17.30, N = 4; MIN: 1312.69). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 2714.85 (SE +/- 5.55, N = 3; MIN: 2634.21). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.1, ms, Fewer Is Better: 1402.63 (SE +/- 2.23, N = 3; MIN: 1343.03). 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
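The oneDNN results above come from its C++ benchmark harness driving individual primitives (inner product, convolution, deconvolution, recurrent networks), so there is no Python entry point to show here. As a rough conceptual analogue only, the "IP Shapes" (inner product) cases boil down to dense matrix multiplies; a NumPy sketch of timing one such multiply, with arbitrary dimensions, is:

# Conceptual analogue only: an inner-product primitive is essentially a dense matrix
# multiply (fully connected layer). This times a float32 GEMM in NumPy to illustrate the
# shape of the workload; it does not use oneDNN and the dimensions are arbitrary.
import timeit
import numpy as np

batch, in_features, out_features = 128, 1024, 1024
x = np.random.rand(batch, in_features).astype(np.float32)
w = np.random.rand(in_features, out_features).astype(np.float32)

per_call = timeit.timeit(lambda: x @ w, number=100) / 100
print(f"f32 inner product {batch}x{in_features} -> {out_features}: {per_call * 1e3:.3f} ms")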

RNNoise

RNNoise 2020-06-28, Seconds, Fewer Is Better: 14.30 (SE +/- 0.02, N = 3). 1. (CC) gcc options: -O2 -pedantic -fvisibility=hidden -lm

Scikit-Learn

Benchmark: GLM

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 1649.84 (SE +/- 2.64, N = 3). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Scikit-Learn

Benchmark: Tree

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 44.32 (SE +/- 0.36, N = 15). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Scikit-Learn

Benchmark: Lasso

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 3207.15 (SE +/- 12.72, N = 3). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Scikit-Learn

Benchmark: Sparsify

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 104.15 (SE +/- 0.18, N = 3). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Scikit-Learn

Benchmark: Plot Ward

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 39.65 (SE +/- 0.04, N = 3). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Scikit-Learn

Benchmark: MNIST Dataset

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 55.46 (SE +/- 0.50, N = 3). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Scikit-Learn

Benchmark: SGD Regression

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 655.59 (SE +/- 2.72, N = 3). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Scikit-Learn

Benchmark: SGDOneClassSVM

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 209.02 (SE +/- 2.78, N = 3). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Scikit-Learn

Benchmark: Plot Fast KMeans

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 512.43 (SE +/- 2.66, N = 3). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Scikit-Learn

Benchmark: Plot Hierarchical

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 128.61 (SE +/- 1.33, N = 3). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Scikit-Learn

Benchmark: Plot OMP vs. LARS

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 627.89 (SE +/- 1.88, N = 3). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Scikit-Learn

Benchmark: Feature Expansions

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 110.20 (SE +/- 0.08, N = 3). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Scikit-Learn

Benchmark: TSNE MNIST Dataset

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 233.66 (SE +/- 0.28, N = 3). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Scikit-Learn

Benchmark: Isotonic / Logistic

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 1118.01 (SE +/- 4.00, N = 3). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc

Scikit-Learn

Benchmark: Hist Gradient Boosting

Scikit-Learn 1.2.2, Seconds, Fewer Is Better: 62.78 (SE +/- 0.08, N = 3). 1. (F9X) gfortran options: -O3 -fopenmp -fno-tree-vectorize -lm -lpthread -lgfortran -lc
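Each Scikit-Learn timing above wraps one of the upstream project's benchmark scripts; the exact datasets and parameters come from the test profile and are not shown here. As an illustrative sketch of the same style of measurement, timing a histogram gradient boosting fit on synthetic data with arbitrarily chosen sizes:

# Illustrative sketch: time a HistGradientBoostingClassifier fit on synthetic data.
# Dataset size and model parameters are arbitrary, not the settings used by the
# "Hist Gradient Boosting" benchmark reported above.
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier

X, y = make_classification(n_samples=100_000, n_features=50, random_state=0)

clf = HistGradientBoostingClassifier(max_iter=100, random_state=0)
start = time.perf_counter()
clf.fit(X, y)
print(f"fit time: {time.perf_counter() - start:.2f} s")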

TensorFlow Lite

Model: SqueezeNet

TensorFlow Lite 2022-05-18, Microseconds, Fewer Is Better: 1921.51 (SE +/- 5.32, N = 3)

TensorFlow Lite

Model: Inception V4

TensorFlow Lite 2022-05-18, Microseconds, Fewer Is Better: 28464.8 (SE +/- 57.85, N = 3)

TensorFlow Lite

Model: NASNet Mobile

TensorFlow Lite 2022-05-18, Microseconds, Fewer Is Better: 7083.46 (SE +/- 47.37, N = 3)

TensorFlow Lite

Model: Mobilenet Float

TensorFlow Lite 2022-05-18, Microseconds, Fewer Is Better: 1515.79 (SE +/- 9.91, N = 3)

TensorFlow Lite

Model: Mobilenet Quant

TensorFlow Lite 2022-05-18, Microseconds, Fewer Is Better: 3105.79 (SE +/- 3.46, N = 3)

TensorFlow Lite

Model: Inception ResNet V2

TensorFlow Lite 2022-05-18, Microseconds, Fewer Is Better: 28085.8 (SE +/- 247.53, N = 3)
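The TensorFlow Lite figures above are reported in microseconds (fewer is better). A hedged sketch of measuring a per-inference latency of that kind with the TensorFlow Lite Python interpreter, where "squeezenet.tflite" is a placeholder path and the warm-up and iteration counts are arbitrary, is:

# Sketch of per-inference latency measurement with the TensorFlow Lite interpreter.
# "squeezenet.tflite" is a hypothetical model path; supply any .tflite file.
import time

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="squeezenet.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
dummy = np.random.random_sample(tuple(inp["shape"])).astype(inp["dtype"])

for _ in range(10):  # warm-up
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()

runs = 100
start = time.perf_counter()
for _ in range(runs):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
elapsed_us = (time.perf_counter() - start) / runs * 1e6
print(f"average inference: {elapsed_us:.1f} microseconds")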


Phoronix Test Suite v10.8.5