pts-hpc

Apple M2 Pro testing on an Apple Mac mini running macOS 13.5, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2312093-REIN-PTSHPC077
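For scripted or repeated comparisons, the invocation above can be wrapped in a small launcher. A minimal sketch (the wrapper itself is hypothetical; it assumes `phoronix-test-suite` is installed and on PATH, and uses the result ID from the line above):

```python
import shutil
import subprocess

# Public result ID from the text above; benchmarking against it fetches the
# same test selection and appends this machine's numbers for comparison.
RESULT_ID = "2312093-REIN-PTSHPC077"

def build_compare_command(result_id: str) -> list[str]:
    """Construct the argv for comparing the local system to a result file."""
    return ["phoronix-test-suite", "benchmark", result_id]

if __name__ == "__main__":
    cmd = build_compare_command(RESULT_ID)
    if shutil.which(cmd[0]) is None:
        raise SystemExit("phoronix-test-suite not found on PATH")
    # Interactive run; per the table below, the full suite can take many hours.
    subprocess.run(cmd, check=True)
```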
Result Identifier: pts-hpc
Date: December 09 2023
Test Duration: 18 Hours, 30 Minutes



HTML result view exported from: https://openbenchmarking.org/result/2312093-REIN-PTSHPC077&gru.

System under test (pts-hpc):

Processor: Apple M2 Pro (10 Cores)
Motherboard: Apple Mac mini
Memory: 16GB
Disk: 461GB
Graphics: Apple M2 Pro
Monitor: Apple M2 Pro
OS: macOS 13.5
Kernel: 22.6.0 (arm64)
Compiler: GCC 14.0.3 + Clang 17.0.6 + LLVM 17.0.6 + Xcode 14.3.1
File-System: APFS
Screen Resolution: (not reported)

Environment notes: LDFLAGS=-L/opt/homebrew/opt/llvm/lib CPPFLAGS=-I/opt/homebrew/opt/llvm/include; Python 3.11.5

Condensed result index (one value per test; the per-test results with run statistics follow below). Suites covered: NAMD, oneDNN, Mobile Neural Network, NCNN (CPU and Vulkan GPU targets), Timed MAFFT Alignment, and PyHPC Benchmarks (CPU - TensorFlow - 4194304 - Isoneutral Mixing). The PyHPC entry appears only in this index and has no per-test graph below; the trailing index value, 8.166, appears to correspond to it.

NAMD

ATPase Simulation - 327,506 Atoms

NAMD 2.14, days/ns, fewer is better. pts-hpc: 2.37121 (SE +/- 0.00197, N = 3)
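Each result in this file is a mean over N runs with a standard error (SE) and, for most tests, observed MIN/MAX values. A sketch of how such summary figures can be computed (assuming SE denotes the standard error of the mean, i.e. sample standard deviation over sqrt(N); the suite's exact method may differ), plus the reciprocal that turns NAMD's days/ns into the more familiar ns/day:

```python
import statistics
from math import sqrt

def summarize(samples: list[float]) -> tuple[float, float]:
    """Return (mean, standard error of the mean) for a list of run results."""
    mean = statistics.fmean(samples)
    se = statistics.stdev(samples) / sqrt(len(samples))  # sample stdev / sqrt(N)
    return mean, se

def days_per_ns_to_ns_per_day(days_per_ns: float) -> float:
    """NAMD reports days/ns (lower is better); the reciprocal gives ns/day."""
    return 1.0 / days_per_ns

# Illustrative run times only -- the per-run values are not in the result file.
mean, se = summarize([2.369, 2.371, 2.374])
print(f"mean={mean:.5f} days/ns, SE={se:.5f}, "
      f"{days_per_ns_to_ns_per_day(mean):.3f} ns/day")
```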

oneDNN

Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 198.54 (SE +/- 0.02, N = 3; MIN: 196.75). (CXX) g++ options: -O3 -march=native -mcpu=native -fPIC -arch -isysroot

oneDNN

Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 127.93 (SE +/- 0.15, N = 3; MIN: 127.07)

oneDNN

Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 266.30 (SE +/- 0.07, N = 3; MIN: 263.27)

oneDNN

Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 341.08 (SE +/- 0.55, N = 3; MIN: 338.54)

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 349.58 (SE +/- 0.66, N = 3; MIN: 344.76)

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 888.78 (SE +/- 0.33, N = 3; MIN: 885.43)

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 348.91 (SE +/- 0.07, N = 3; MIN: 347.12)

oneDNN

Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 771.37 (SE +/- 0.26, N = 3; MIN: 767)

oneDNN

Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 612.43 (SE +/- 0.99, N = 3; MIN: 601.38)

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 375.87 (SE +/- 0.24, N = 3; MIN: 374.29)

oneDNN

Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 216031 (SE +/- 134.13, N = 3; MIN: 215664)

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 132806 (SE +/- 36.42, N = 3; MIN: 132690)

oneDNN

Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 215461 (SE +/- 20.22, N = 3; MIN: 215323)

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 132819 (SE +/- 17.95, N = 3; MIN: 132710)

oneDNN

Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 215425 (SE +/- 24.27, N = 3; MIN: 215254)

oneDNN

Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU

oneDNN 3.3, ms, fewer is better. pts-hpc: 132815 (SE +/- 13.69, N = 3; MIN: 132719)

Mobile Neural Network

Model: nasnet

Mobile Neural Network 2.1, ms, fewer is better. pts-hpc: 11.52 (SE +/- 0.13, N = 4; MIN: 9.02 / MAX: 16.91). (CXX) g++ options: -std=c++11 -O3 -fvisibility=hidden -fomit-frame-pointer -fstrict-aliasing -ffunction-sections -fdata-sections -ffast-math -fno-rtti -fno-exceptions -arch -isysroot

Mobile Neural Network

Model: mobilenetV3

Mobile Neural Network 2.1, ms, fewer is better. pts-hpc: 1.825 (SE +/- 0.013, N = 4; MIN: 1.48 / MAX: 4.15)

Mobile Neural Network

Model: squeezenetv1.1

Mobile Neural Network 2.1, ms, fewer is better. pts-hpc: 3.199 (SE +/- 0.048, N = 4; MIN: 2.25 / MAX: 4.68)

Mobile Neural Network

Model: resnet-v2-50

Mobile Neural Network 2.1, ms, fewer is better. pts-hpc: 25.08 (SE +/- 0.14, N = 4; MIN: 20.97 / MAX: 29.29)

Mobile Neural Network

Model: SqueezeNetV1.0

Mobile Neural Network 2.1, ms, fewer is better. pts-hpc: 5.527 (SE +/- 0.049, N = 4; MIN: 4.29 / MAX: 7.17)

Mobile Neural Network

Model: MobileNetV2_224

Mobile Neural Network 2.1, ms, fewer is better. pts-hpc: 3.629 (SE +/- 0.021, N = 4; MIN: 2.95 / MAX: 5.88)

Mobile Neural Network

Model: mobilenet-v1-1.0

Mobile Neural Network 2.1, ms, fewer is better. pts-hpc: 4.664 (SE +/- 0.066, N = 4; MIN: 3.82 / MAX: 6.8)

Mobile Neural Network

Model: inception-v3

Mobile Neural Network 2.1, ms, fewer is better. pts-hpc: 35.28 (SE +/- 0.77, N = 4; MIN: 29.23 / MAX: 39.49)

NCNN

Target: CPU - Model: mobilenet

NCNN 20230517, ms, fewer is better. pts-hpc: 18.07 (SE +/- 0.04, N = 3; MIN: 17.9 / MAX: 18.9). (CXX) g++ options: -O3 -arch -isysroot

NCNN

Target: CPU-v2-v2 - Model: mobilenet-v2

NCNN 20230517, ms, fewer is better. pts-hpc: 4.64 (SE +/- 0.04, N = 3; MIN: 4.57 / MAX: 5.17)

NCNN

Target: CPU-v3-v3 - Model: mobilenet-v3

NCNN 20230517, ms, fewer is better. pts-hpc: 3.77 (SE +/- 0.04, N = 3; MIN: 3.72 / MAX: 4.12)

NCNN

Target: CPU - Model: shufflenet-v2

NCNN 20230517, ms, fewer is better. pts-hpc: 2.69 (SE +/- 0.02, N = 3; MIN: 2.65 / MAX: 3.21)

NCNN

Target: CPU - Model: mnasnet

NCNN 20230517, ms, fewer is better. pts-hpc: 4.73 (SE +/- 0.04, N = 3; MIN: 4.67 / MAX: 5.2)

NCNN

Target: CPU - Model: efficientnet-b0

NCNN 20230517, ms, fewer is better. pts-hpc: 7.59 (SE +/- 0.07, N = 3; MIN: 7.48 / MAX: 8.33)

NCNN

Target: CPU - Model: blazeface

NCNN 20230517, ms, fewer is better. pts-hpc: 0.88 (SE +/- 0.01, N = 3; MIN: 0.86 / MAX: 1.01)

NCNN

Target: CPU - Model: googlenet

NCNN 20230517, ms, fewer is better. pts-hpc: 22.11 (SE +/- 0.18, N = 3; MIN: 21.7 / MAX: 23.38)

NCNN

Target: CPU - Model: vgg16

NCNN 20230517, ms, fewer is better. pts-hpc: 66.25 (SE +/- 0.56, N = 3; MIN: 64.89 / MAX: 68.2)

NCNN

Target: CPU - Model: resnet18

NCNN 20230517, ms, fewer is better. pts-hpc: 14.85 (SE +/- 0.14, N = 3; MIN: 14.54 / MAX: 15.6)

NCNN

Target: CPU - Model: alexnet

NCNN 20230517, ms, fewer is better. pts-hpc: 16.75 (SE +/- 0.13, N = 3; MIN: 16.47 / MAX: 17.64)

NCNN

Target: CPU - Model: resnet50

NCNN 20230517, ms, fewer is better. pts-hpc: 40.62 (SE +/- 0.32, N = 3; MIN: 39.97 / MAX: 41.55)

NCNN

Target: CPU - Model: yolov4-tiny

NCNN 20230517, ms, fewer is better. pts-hpc: 26.77 (SE +/- 0.04, N = 3; MIN: 26.45 / MAX: 27.89)

NCNN

Target: CPU - Model: squeezenet_ssd

NCNN 20230517, ms, fewer is better. pts-hpc: 13.80 (SE +/- 0.12, N = 3; MIN: 13.33 / MAX: 14.61)

NCNN

Target: CPU - Model: regnety_400m

NCNN 20230517, ms, fewer is better. pts-hpc: 6.44 (SE +/- 0.06, N = 3; MIN: 6.35 / MAX: 7.03)

NCNN

Target: CPU - Model: vision_transformer

NCNN 20230517, ms, fewer is better. pts-hpc: 1187.73 (SE +/- 0.19, N = 3; MIN: 1186.86 / MAX: 1190.11)

NCNN

Target: CPU - Model: FastestDet

NCNN 20230517, ms, fewer is better. pts-hpc: 2.11 (SE +/- 0.00, N = 3; MIN: 2.1 / MAX: 2.34)

NCNN

Target: Vulkan GPU - Model: mobilenet

NCNN 20230517, ms, fewer is better. pts-hpc: 18.01 (SE +/- 0.01, N = 3; MIN: 17.89 / MAX: 19.77)

NCNN

Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2

NCNN 20230517, ms, fewer is better. pts-hpc: 4.59 (SE +/- 0.00, N = 3; MIN: 4.57 / MAX: 5.11)

NCNN

Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3

NCNN 20230517, ms, fewer is better. pts-hpc: 3.73 (SE +/- 0.00, N = 3; MIN: 3.71 / MAX: 4.09)

NCNN

Target: Vulkan GPU - Model: shufflenet-v2

NCNN 20230517, ms, fewer is better. pts-hpc: 2.66 (SE +/- 0.00, N = 3; MIN: 2.65 / MAX: 2.96)

NCNN

Target: Vulkan GPU - Model: mnasnet

NCNN 20230517, ms, fewer is better. pts-hpc: 4.69 (SE +/- 0.00, N = 3; MIN: 4.67 / MAX: 5.07)

NCNN

Target: Vulkan GPU - Model: efficientnet-b0

NCNN 20230517, ms, fewer is better. pts-hpc: 7.51 (SE +/- 0.00, N = 3; MIN: 7.48 / MAX: 8.21)

NCNN

Target: Vulkan GPU - Model: blazeface

NCNN 20230517, ms, fewer is better. pts-hpc: 0.87 (SE +/- 0.00, N = 3; MIN: 0.86 / MAX: 0.92)

NCNN

Target: Vulkan GPU - Model: googlenet

NCNN 20230517, ms, fewer is better. pts-hpc: 21.91 (SE +/- 0.03, N = 3; MIN: 21.46 / MAX: 22.99)

NCNN

Target: Vulkan GPU - Model: vgg16

NCNN 20230517, ms, fewer is better. pts-hpc: 65.60 (SE +/- 0.04, N = 3; MIN: 64.87 / MAX: 66.51)

NCNN

Target: Vulkan GPU - Model: resnet18

NCNN 20230517, ms, fewer is better. pts-hpc: 14.72 (SE +/- 0.01, N = 3; MIN: 14.36 / MAX: 15.98)

NCNN

Target: Vulkan GPU - Model: alexnet

NCNN 20230517, ms, fewer is better. pts-hpc: 16.59 (SE +/- 0.02, N = 3; MIN: 16.45 / MAX: 17.39)

NCNN

Target: Vulkan GPU - Model: resnet50

NCNN 20230517, ms, fewer is better. pts-hpc: 40.27 (SE +/- 0.02, N = 3; MIN: 39.89 / MAX: 41.05)

NCNN

Target: Vulkan GPU - Model: yolov4-tiny

NCNN 20230517, ms, fewer is better. pts-hpc: 26.71 (SE +/- 0.02, N = 3; MIN: 26.45 / MAX: 28.29)

NCNN

Target: Vulkan GPU - Model: squeezenet_ssd

NCNN 20230517, ms, fewer is better. pts-hpc: 13.69 (SE +/- 0.01, N = 3; MIN: 13.56 / MAX: 14.55)

NCNN

Target: Vulkan GPU - Model: regnety_400m

NCNN 20230517, ms, fewer is better. pts-hpc: 6.38 (SE +/- 0.00, N = 3; MIN: 6.35 / MAX: 6.97)

NCNN

Target: Vulkan GPU - Model: vision_transformer

NCNN 20230517, ms, fewer is better. pts-hpc: 1187.42 (SE +/- 0.23, N = 3; MIN: 1186.29 / MAX: 1189.54)

NCNN

Target: Vulkan GPU - Model: FastestDet

NCNN 20230517, ms, fewer is better. pts-hpc: 2.11 (SE +/- 0.00, N = 3; MIN: 2.1 / MAX: 2.21)

Timed MAFFT Alignment

Multiple Sequence Alignment - LSU RNA

Timed MAFFT Alignment 7.471, seconds, fewer is better. pts-hpc: 8.315 (SE +/- 0.091, N = 3). (CC) gcc options: -std=c99 -O3 -lm -lpthread


Phoronix Test Suite v10.8.4