10980XE oneDNN ONNX Suite
1.0.0
System
Test suite extracted from the 10980XE oneDNN ONNX results.
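Assuming the Phoronix Test Suite is installed, a suite definition like this is normally run through the `phoronix-test-suite` CLI. The local suite name below is an assumption derived from the title; the `pts/...` identifiers come from the listing itself.

```shell
# Run the whole suite (local suite name assumed from the title above)
phoronix-test-suite benchmark 10980xe-onednn-onnx

# Or run a single component test profile from the listing, e.g. the oneDNN tests
phoronix-test-suite benchmark pts/onednn-1.8.0
```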
pts/fast-cli-1.0.0
Internet Download Speed
pts/fast-cli-1.0.0
Internet Upload Speed
pts/fast-cli-1.0.0
Internet Latency
pts/fast-cli-1.0.0
Internet Loaded Latency (Bufferbloat)
pts/speedtest-cli-1.0.0
Internet Download Speed
pts/speedtest-cli-1.0.0
Internet Upload Speed
pts/speedtest-cli-1.0.0
Internet Latency
pts/perf-bench-1.0.4
epoll wait -r 30
Benchmark: Epoll Wait
pts/perf-bench-1.0.4
futex hash -r 30 -s
Benchmark: Futex Hash
pts/perf-bench-1.0.4
mem memcpy -l 100000 -s 1MB
Benchmark: Memcpy 1MB
pts/perf-bench-1.0.4
mem memset -l 100000 -s 1MB
Benchmark: Memset 1MB
pts/perf-bench-1.0.4
sched pipe -l 5000000
Benchmark: Sched Pipe
pts/perf-bench-1.0.4
futex lock-pi -r 30 -s
Benchmark: Futex Lock-Pi
pts/perf-bench-1.0.4
syscall basic -l 100000000
Benchmark: Syscall Basic
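The `pts/perf-bench` entries above pass their argument strings to the kernel's `perf bench` tool. Assuming `perf` is installed (e.g. via a linux-tools package), the same workloads can be sketched directly from those arguments:

```shell
# Same workloads as the pts/perf-bench-1.0.4 entries above,
# invoked through perf bench directly
perf bench epoll wait -r 30
perf bench futex hash -r 30 -s
perf bench mem memcpy -l 100000 -s 1MB
perf bench mem memset -l 100000 -s 1MB
perf bench sched pipe -l 5000000
perf bench futex lock-pi -r 30 -s
perf bench syscall basic -l 100000000
```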
pts/onednn-1.8.0
--ip --batch=inputs/ip/shapes_1d --cfg=f32 --engine=cpu
Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
pts/onednn-1.8.0
--ip --batch=inputs/ip/shapes_3d --cfg=f32 --engine=cpu
Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
pts/onednn-1.8.0
--ip --batch=inputs/ip/shapes_1d --cfg=u8s8f32 --engine=cpu
Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.8.0
--ip --batch=inputs/ip/shapes_3d --cfg=u8s8f32 --engine=cpu
Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.8.0
--ip --batch=inputs/ip/shapes_1d --cfg=bf16bf16bf16 --engine=cpu
Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-1.8.0
--ip --batch=inputs/ip/shapes_3d --cfg=bf16bf16bf16 --engine=cpu
Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-1.8.0
--conv --batch=inputs/conv/shapes_auto --cfg=f32 --engine=cpu
Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
pts/onednn-1.8.0
--deconv --batch=inputs/deconv/shapes_1d --cfg=f32 --engine=cpu
Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
pts/onednn-1.8.0
--deconv --batch=inputs/deconv/shapes_3d --cfg=f32 --engine=cpu
Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
pts/onednn-1.8.0
--conv --batch=inputs/conv/shapes_auto --cfg=u8s8f32 --engine=cpu
Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.8.0
--deconv --batch=inputs/deconv/shapes_1d --cfg=u8s8f32 --engine=cpu
Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.8.0
--deconv --batch=inputs/deconv/shapes_3d --cfg=u8s8f32 --engine=cpu
Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.8.0
--rnn --batch=inputs/rnn/perf_rnn_training --cfg=f32 --engine=cpu
Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
pts/onednn-1.8.0
--rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=f32 --engine=cpu
Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
pts/onednn-1.8.0
--rnn --batch=inputs/rnn/perf_rnn_training --cfg=u8s8f32 --engine=cpu
Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.8.0
--conv --batch=inputs/conv/shapes_auto --cfg=bf16bf16bf16 --engine=cpu
Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-1.8.0
--deconv --batch=inputs/deconv/shapes_1d --cfg=bf16bf16bf16 --engine=cpu
Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-1.8.0
--deconv --batch=inputs/deconv/shapes_3d --cfg=bf16bf16bf16 --engine=cpu
Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-1.8.0
--rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=u8s8f32 --engine=cpu
Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.8.0
--matmul --batch=inputs/matmul/shapes_transformer --cfg=f32 --engine=cpu
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
pts/onednn-1.8.0
--rnn --batch=inputs/rnn/perf_rnn_training --cfg=bf16bf16bf16 --engine=cpu
Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-1.8.0
--rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=bf16bf16bf16 --engine=cpu
Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-1.8.0
--matmul --batch=inputs/matmul/shapes_transformer --cfg=u8s8f32 --engine=cpu
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
pts/onednn-1.8.0
--matmul --batch=inputs/matmul/shapes_transformer --cfg=bf16bf16bf16 --engine=cpu
Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
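The oneDNN argument strings above follow the flag syntax of oneDNN's `benchdnn` harness (driver selector, `--batch`, `--cfg`, `--engine`). Assuming a built `benchdnn` binary and the bundled input files, a single entry could be reproduced roughly as:

```shell
# Roughly equivalent direct benchdnn invocation for the first oneDNN entry
# (the path to the benchdnn binary and its inputs directory is an assumption)
./benchdnn --ip --batch=inputs/ip/shapes_1d --cfg=f32 --engine=cpu
```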
pts/java-jmh-1.0.1
Throughput
pts/onnx-1.5.0
GPT2/model.onnx -e cpu -P
Model: GPT-2 - Device: CPU - Executor: Parallel
pts/onnx-1.5.0
GPT2/model.onnx -e cpu
Model: GPT-2 - Device: CPU - Executor: Standard
pts/onnx-1.5.0
yolov4/yolov4.onnx -e cpu -P
Model: yolov4 - Device: CPU - Executor: Parallel
pts/onnx-1.5.0
yolov4/yolov4.onnx -e cpu
Model: yolov4 - Device: CPU - Executor: Standard
pts/onnx-1.5.0
bertsquad-12/bertsquad-12.onnx -e cpu -P
Model: bertsquad-12 - Device: CPU - Executor: Parallel
pts/onnx-1.5.0
bertsquad-12/bertsquad-12.onnx -e cpu
Model: bertsquad-12 - Device: CPU - Executor: Standard
pts/onnx-1.5.0
fcn-resnet101-11/model.onnx -e cpu -P
Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel
pts/onnx-1.5.0
fcn-resnet101-11/model.onnx -e cpu
Model: fcn-resnet101-11 - Device: CPU - Executor: Standard
pts/onnx-1.5.0
resnet100/resnet100.onnx -e cpu -P
Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel
pts/onnx-1.5.0
resnet100/resnet100.onnx -e cpu
Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard
pts/onnx-1.5.0
super_resolution/super_resolution.onnx -e cpu -P
Model: super-resolution-10 - Device: CPU - Executor: Parallel
pts/onnx-1.5.0
super_resolution/super_resolution.onnx -e cpu
Model: super-resolution-10 - Device: CPU - Executor: Standard
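The `pts/onnx` argument strings resemble ONNX Runtime's `onnxruntime_perf_test` flags, where `-e` selects the execution provider and `-P` requests the parallel executor. Treating that mapping as an assumption, a comparable standalone run might look like:

```shell
# Hypothetical standalone equivalent of the GPT-2 parallel-executor entry;
# assumes onnxruntime_perf_test is built and GPT2/model.onnx is available locally
./onnxruntime_perf_test -e cpu -P GPT2/model.onnx
```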