zn

2 x AMD EPYC 7773X 64-Core testing with an AMD DAYTONA_X (RYM1009B BIOS) motherboard and ASPEED graphics on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2305011-NE-ZN838339286

The tests in this result file fall within the following categories:

Timed Code Compilation: 6 Tests
C/C++ Compiler Tests: 8 Tests
CPU Massive: 10 Tests
Creator Workloads: 10 Tests
Cryptography: 2 Tests
Database Test Suite: 3 Tests
Encoding: 4 Tests
Fortran Tests: 2 Tests
Game Development: 4 Tests
HPC - High Performance Computing: 3 Tests
Common Kernel Benchmarks: 3 Tests
Machine Learning: 2 Tests
Multi-Core: 14 Tests
NVIDIA GPU Compute: 2 Tests
Intel oneAPI: 2 Tests
OpenMPI Tests: 2 Tests
Programmer / Developer System Benchmarks: 6 Tests
Python Tests: 8 Tests
Server: 6 Tests
Server CPU Tests: 6 Tests
Video Encoding: 3 Tests
Common Workstation Benchmarks: 2 Tests

Run Management

Result Identifier   Date Run         Test Duration
a                   April 30 2023    4 Hours, 57 Minutes
b                   April 30 2023    4 Hours, 22 Minutes
c                   April 30 2023    4 Hours, 54 Minutes
Average                              4 Hours, 44 Minutes
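The 4 Hours, 44 Minutes figure is the mean of the three per-run durations; the arithmetic can be checked with a short sketch (the variable names are illustrative, not part of the result file):

```python
# Mean test duration across the three runs (a, b, c), in minutes.
durations_min = {
    "a": 4 * 60 + 57,  # 4 Hours, 57 Minutes
    "b": 4 * 60 + 22,  # 4 Hours, 22 Minutes
    "c": 4 * 60 + 54,  # 4 Hours, 54 Minutes
}
mean_min = sum(durations_min.values()) / len(durations_min)  # 853 / 3 ≈ 284.3
hours, minutes = divmod(round(mean_min), 60)
print(f"{hours} Hours, {minutes} Minutes")  # 4 Hours, 44 Minutes
```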



zn Suite 1.0.0 - System - Test suite extracted from zn.

pts/draco-1.6.0 -i lion.ply  [Model: Lion]
pts/draco-1.6.0 -i church.ply  [Model: Church Facade]
pts/specfem3d-1.0.0 waterlayered_halfspace  [Model: Water-layered Halfspace]
pts/brl-cad-1.4.0  [VGR Performance Metric]
pts/specfem3d-1.0.0 homogeneous_halfspace  [Model: Homogeneous Halfspace]
pts/specfem3d-1.0.0 tomographic_model  [Model: Tomographic Model]
pts/specfem3d-1.0.0 layered_halfspace  [Model: Layered Halfspace]
pts/encode-opus-1.4.0  [WAV To Opus Encode]
pts/intel-tensorflow-1.0.0 mobilenetv1_int8_pretrained_model.pb 960  [Model: mobilenetv1_int8_pretrained_model - Batch Size: 960]
pts/quantlib-1.1.0
pts/tensorflow-2.1.0 --device cpu --batch_size=16 --model=resnet50  [Device: CPU - Batch Size: 16 - Model: ResNet-50]
pts/tensorflow-2.1.0 --device cpu --batch_size=32 --model=resnet50  [Device: CPU - Batch Size: 32 - Model: ResNet-50]
pts/tensorflow-2.1.0 --device cpu --batch_size=64 --model=resnet50  [Device: CPU - Batch Size: 64 - Model: ResNet-50]
pts/tensorflow-2.1.0 --device cpu --batch_size=256 --model=resnet50  [Device: CPU - Batch Size: 256 - Model: ResNet-50]
pts/tensorflow-2.1.0 --device cpu --batch_size=512 --model=resnet50  [Device: CPU - Batch Size: 512 - Model: ResNet-50]
pts/deepsparse-1.3.2 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario async  [Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario sync  [Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned90-none --scenario async  [Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned90-none --scenario sync  [Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario async  [Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario sync  [Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario async  [Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario sync  [Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async  [Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync  [Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario async  [Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario sync  [Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario async  [Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario sync  [Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario async  [Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario sync  [Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario async  [Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario sync  [Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream]
pts/gromacs-1.8.0 mpi-build water-cut1.0_GMX50_bare/1536  [Implementation: MPI CPU - Input: water_GMX50_bare]
pts/build-ffmpeg-6.0.0  [Time To Compile]
pts/john-the-ripper-1.8.0 --format=bcrypt  [Test: bcrypt]
pts/john-the-ripper-1.8.0 --format=wpapsk  [Test: WPA PSK]
pts/john-the-ripper-1.8.0 --format=bcrypt  [Test: Blowfish]
pts/john-the-ripper-1.8.0 --format=HMAC-SHA512  [Test: HMAC-SHA512]
pts/john-the-ripper-1.8.0 --format=md5crypt  [Test: MD5]
pts/build-llvm-1.5.0 Ninja  [Build System: Ninja]
pts/build-llvm-1.5.0  [Build System: Unix Makefiles]
pts/build-linux-kernel-1.15.0 defconfig  [Build: defconfig]
pts/build-linux-kernel-1.15.0 allmodconfig  [Build: allmodconfig]
pts/svt-av1-2.8.0 --preset 4 -n 160 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160  [Encoder Mode: Preset 4 - Input: Bosphorus 4K]
pts/svt-av1-2.8.0 --preset 8 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160  [Encoder Mode: Preset 8 - Input: Bosphorus 4K]
pts/svt-av1-2.8.0 --preset 12 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160  [Encoder Mode: Preset 12 - Input: Bosphorus 4K]
pts/svt-av1-2.8.0 --preset 13 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160  [Encoder Mode: Preset 13 - Input: Bosphorus 4K]
pts/svt-av1-2.8.0 --preset 4 -n 160 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080  [Encoder Mode: Preset 4 - Input: Bosphorus 1080p]
pts/svt-av1-2.8.0 --preset 8 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080  [Encoder Mode: Preset 8 - Input: Bosphorus 1080p]
pts/svt-av1-2.8.0 --preset 12 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080  [Encoder Mode: Preset 12 - Input: Bosphorus 1080p]
pts/svt-av1-2.8.0 --preset 13 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080  [Encoder Mode: Preset 13 - Input: Bosphorus 1080p]
pts/deeprec-1.0.2 ple bf16  [Model: PLE - Data Type: BF16]
pts/blender-3.5.0 -b ../bmw27_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU  [Blend File: BMW27 - Compute: CPU-Only]
pts/blender-3.5.0 -b ../classroom_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU  [Blend File: Classroom - Compute: CPU-Only]
pts/blender-3.5.0 -b ../fishy_cat_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU  [Blend File: Fishy Cat - Compute: CPU-Only]
pts/blender-3.5.0 -b ../barbershop_interior_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU  [Blend File: Barbershop - Compute: CPU-Only]
pts/blender-3.5.0 -b ../pavillon_barcelone_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU  [Blend File: Pabellon Barcelona - Compute: CPU-Only]
pts/uvg266-1.0.0 -i Bosphorus_3840x2160.y4m --preset slow  [Video Input: Bosphorus 4K - Video Preset: Slow]
pts/uvg266-1.0.0 -i Bosphorus_3840x2160.y4m --preset medium  [Video Input: Bosphorus 4K - Video Preset: Medium]
pts/uvg266-1.0.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset slow  [Video Input: Bosphorus 1080p - Video Preset: Slow]
pts/uvg266-1.0.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset medium  [Video Input: Bosphorus 1080p - Video Preset: Medium]
pts/uvg266-1.0.0 -i Bosphorus_3840x2160.y4m --preset veryfast  [Video Input: Bosphorus 4K - Video Preset: Very Fast]
pts/uvg266-1.0.0 -i Bosphorus_3840x2160.y4m --preset superfast  [Video Input: Bosphorus 4K - Video Preset: Super Fast]
pts/uvg266-1.0.0 -i Bosphorus_3840x2160.y4m --preset ultrafast  [Video Input: Bosphorus 4K - Video Preset: Ultra Fast]
pts/uvg266-1.0.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset veryfast  [Video Input: Bosphorus 1080p - Video Preset: Very Fast]
pts/uvg266-1.0.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset superfast  [Video Input: Bosphorus 1080p - Video Preset: Super Fast]
pts/uvg266-1.0.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset ultrafast  [Video Input: Bosphorus 1080p - Video Preset: Ultra Fast]
pts/vvenc-1.8.0 -i Bosphorus_3840x2160.y4m --preset fast  [Video Input: Bosphorus 4K - Video Preset: Fast]
pts/vvenc-1.8.0 -i Bosphorus_3840x2160.y4m --preset faster  [Video Input: Bosphorus 4K - Video Preset: Faster]
pts/vvenc-1.8.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset fast  [Video Input: Bosphorus 1080p - Video Preset: Fast]
pts/vvenc-1.8.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset faster  [Video Input: Bosphorus 1080p - Video Preset: Faster]
pts/build-godot-4.0.0  [Time To Compile]
pts/embree-1.4.0 pathtracer_ispc -c crown/crown.ecs  [Binary: Pathtracer ISPC - Model: Crown]
pts/embree-1.4.0 pathtracer -c asian_dragon/asian_dragon.ecs  [Binary: Pathtracer - Model: Asian Dragon]
pts/embree-1.4.0 pathtracer_ispc -c asian_dragon/asian_dragon.ecs  [Binary: Pathtracer ISPC - Model: Asian Dragon]
pts/specfem3d-1.0.0 Mount_StHelens  [Model: Mount St. Helens]
pts/embree-1.4.0 pathtracer -c crown/crown.ecs  [Binary: Pathtracer - Model: Crown]
pts/openvkl-1.3.0 vklBenchmark --benchmark_filter=ispc  [Benchmark: vklBenchmark ISPC]
pts/openvkl-1.3.0 vklBenchmark --benchmark_filter=scalar  [Benchmark: vklBenchmark Scalar]
pts/build2-1.2.0  [Time To Compile]
pts/build-nodejs-1.3.0  [Time To Compile]
pts/nginx-3.0.1 -c 100  [Connections: 100]
pts/nginx-3.0.1 -c 200  [Connections: 200]
pts/nginx-3.0.1 -c 500  [Connections: 500]
pts/nginx-3.0.1 -c 1000  [Connections: 1000]
pts/apache-3.0.0 -c 100  [Concurrent Requests: 100]
pts/deeprec-1.0.2 dcnv2 fp32  [Model: DCNv2 - Data Type: FP32]
pts/apache-3.0.0 -c 200  [Concurrent Requests: 200]
pts/apache-3.0.0 -c 500  [Concurrent Requests: 500]
pts/apache-3.0.0 -c 1000  [Concurrent Requests: 1000]
pts/openssl-3.1.0 sha256  [Algorithm: SHA256]
pts/openssl-3.1.0 sha512  [Algorithm: SHA512]
pts/openssl-3.1.0 rsa4096  [Algorithm: RSA4096]
pts/openssl-3.1.0 -evp chacha20  [Algorithm: ChaCha20]
pts/openssl-3.1.0 -evp aes-128-gcm  [Algorithm: AES-128-GCM]
pts/openssl-3.1.0 -evp aes-256-gcm  [Algorithm: AES-256-GCM]
pts/openssl-3.1.0 -evp chacha20-poly1305  [Algorithm: ChaCha20-Poly1305]
pts/clickhouse-1.2.0  [100M Rows Hits Dataset, First Run / Cold Cache]
pts/clickhouse-1.2.0  [100M Rows Hits Dataset, Second Run]
pts/clickhouse-1.2.0  [100M Rows Hits Dataset, Third Run]
pts/deeprec-1.0.2 mmoe bf16  [Model: MMOE - Data Type: BF16]
pts/deeprec-1.0.2 mmoe fp32  [Model: MMOE - Data Type: FP32]
pts/deeprec-1.0.2 dlrm bf16  [Model: DLRM - Data Type: BF16]
pts/deeprec-1.0.2 dlrm fp32  [Model: DLRM - Data Type: FP32]
pts/cockroach-1.0.2 movr --concurrency 128  [Workload: MoVR - Concurrency: 128]
pts/deeprec-1.0.2 ple fp32  [Model: PLE - Data Type: FP32]
pts/deeprec-1.0.2 din bf16  [Model: DIN - Data Type: BF16]
pts/deeprec-1.0.2 din fp32  [Model: DIN - Data Type: FP32]
pts/deeprec-1.0.2 bst fp32  [Model: BST - Data Type: FP32]
pts/deeprec-1.0.2 bst bf16  [Model: BST - Data Type: BF16]
pts/cockroach-1.0.2 movr --concurrency 256  [Workload: MoVR - Concurrency: 256]
pts/deeprec-1.0.2 dcnv2 bf16  [Model: DCNv2 - Data Type: BF16]
pts/intel-tensorflow-1.0.0 resnet50_int8_pretrained_model.pb 96  [Model: resnet50_int8_pretrained_model - Batch Size: 96]
pts/cockroach-1.0.2 movr --concurrency 512  [Workload: MoVR - Concurrency: 512]
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 10 --concurrency 128  [Workload: KV, 10% Reads - Concurrency: 128]
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 10 --concurrency 256  [Workload: KV, 10% Reads - Concurrency: 256]
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 10 --concurrency 512  [Workload: KV, 10% Reads - Concurrency: 512]
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 50 --concurrency 128  [Workload: KV, 50% Reads - Concurrency: 128]
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 50 --concurrency 256  [Workload: KV, 50% Reads - Concurrency: 256]
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 50 --concurrency 512  [Workload: KV, 50% Reads - Concurrency: 512]
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 60 --concurrency 128  [Workload: KV, 60% Reads - Concurrency: 128]
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 60 --concurrency 256  [Workload: KV, 60% Reads - Concurrency: 256]
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 60 --concurrency 512  [Workload: KV, 60% Reads - Concurrency: 512]
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 95 --concurrency 128  [Workload: KV, 95% Reads - Concurrency: 128]
pts/intel-tensorflow-1.0.0 inceptionv4_fp32_pretrained_model.pb 1  [Model: inceptionv4_fp32_pretrained_model - Batch Size: 1]
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 95 --concurrency 256  [Workload: KV, 95% Reads - Concurrency: 256]
pts/intel-tensorflow-1.0.0 inceptionv4_fp32_pretrained_model.pb 64  [Model: inceptionv4_fp32_pretrained_model - Batch Size: 64]
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 95 --concurrency 512  [Workload: KV, 95% Reads - Concurrency: 512]
pts/intel-tensorflow-1.0.0 resnet50_int8_pretrained_model.pb 512  [Model: resnet50_int8_pretrained_model - Batch Size: 512]
pts/intel-tensorflow-1.0.0 resnet50_fp32_pretrained_model.pb 960  [Model: resnet50_fp32_pretrained_model - Batch Size: 960]
pts/intel-tensorflow-1.0.0 resnet50_int8_pretrained_model.pb 256  [Model: resnet50_int8_pretrained_model - Batch Size: 256]
pts/intel-tensorflow-1.0.0 inceptionv4_fp32_pretrained_model.pb 32  [Model: inceptionv4_fp32_pretrained_model - Batch Size: 32]
pts/intel-tensorflow-1.0.0 resnet50_fp32_pretrained_model.pb 512  [Model: resnet50_fp32_pretrained_model - Batch Size: 512]
pts/intel-tensorflow-1.0.0 inceptionv4_int8_pretrained_model.pb 1  [Model: inceptionv4_int8_pretrained_model - Batch Size: 1]
pts/intel-tensorflow-1.0.0 mobilenetv1_int8_pretrained_model.pb 1  [Model: mobilenetv1_int8_pretrained_model - Batch Size: 1]
pts/intel-tensorflow-1.0.0 resnet50_int8_pretrained_model.pb 16  [Model: resnet50_int8_pretrained_model - Batch Size: 16]
pts/intel-tensorflow-1.0.0 resnet50_int8_pretrained_model.pb 32  [Model: resnet50_int8_pretrained_model - Batch Size: 32]
pts/intel-tensorflow-1.0.0 resnet50_fp32_pretrained_model.pb 64  [Model: resnet50_fp32_pretrained_model - Batch Size: 64]
pts/intel-tensorflow-1.0.0 resnet50_fp32_pretrained_model.pb 96  [Model: resnet50_fp32_pretrained_model - Batch Size: 96]
pts/intel-tensorflow-1.0.0 resnet50_fp32_pretrained_model.pb 32  [Model: resnet50_fp32_pretrained_model - Batch Size: 32]
pts/intel-tensorflow-1.0.0 resnet50_fp32_pretrained_model.pb 16  [Model: resnet50_fp32_pretrained_model - Batch Size: 16]
pts/intel-tensorflow-1.0.0 resnet50_int8_pretrained_model.pb 1  [Model: resnet50_int8_pretrained_model - Batch Size: 1]
pts/intel-tensorflow-1.0.0 resnet50_int8_pretrained_model.pb 960  [Model: resnet50_int8_pretrained_model - Batch Size: 960]
pts/intel-tensorflow-1.0.0 resnet50_int8_pretrained_model.pb 64  [Model: resnet50_int8_pretrained_model - Batch Size: 64]
pts/petsc-1.0.0 streams  [Test: Streams]
pts/rocksdb-1.5.0 --benchmarks="readrandom"  [Test: Random Read]
pts/intel-tensorflow-1.0.0 mobilenetv1_fp32_pretrained_model.pb 1  [Model: mobilenetv1_fp32_pretrained_model - Batch Size: 1]
pts/intel-tensorflow-1.0.0 resnet50_fp32_pretrained_model.pb 256  [Model: resnet50_fp32_pretrained_model - Batch Size: 256]
pts/intel-tensorflow-1.0.0 inceptionv4_fp32_pretrained_model.pb 16  [Model: inceptionv4_fp32_pretrained_model - Batch Size: 16]
pts/intel-tensorflow-1.0.0 inceptionv4_fp32_pretrained_model.pb 96  [Model: inceptionv4_fp32_pretrained_model - Batch Size: 96]
pts/rocksdb-1.5.0 --benchmarks="updaterandom"  [Test: Update Random]
pts/intel-tensorflow-1.0.0 inceptionv4_fp32_pretrained_model.pb 512  [Model: inceptionv4_fp32_pretrained_model - Batch Size: 512]
pts/intel-tensorflow-1.0.0 resnet50_fp32_pretrained_model.pb 1  [Model: resnet50_fp32_pretrained_model - Batch Size: 1]
pts/intel-tensorflow-1.0.0 inceptionv4_fp32_pretrained_model.pb 256  [Model: inceptionv4_fp32_pretrained_model - Batch Size: 256]
pts/intel-tensorflow-1.0.0 mobilenetv1_fp32_pretrained_model.pb 960  [Model: mobilenetv1_fp32_pretrained_model - Batch Size: 960]
pts/intel-tensorflow-1.0.0 mobilenetv1_int8_pretrained_model.pb 512  [Model: mobilenetv1_int8_pretrained_model - Batch Size: 512]
pts/intel-tensorflow-1.0.0 inceptionv4_int8_pretrained_model.pb 512  [Model: inceptionv4_int8_pretrained_model - Batch Size: 512]
pts/intel-tensorflow-1.0.0 mobilenetv1_fp32_pretrained_model.pb 256  [Model: mobilenetv1_fp32_pretrained_model - Batch Size: 256]
pts/intel-tensorflow-1.0.0 mobilenetv1_fp32_pretrained_model.pb 32  [Model: mobilenetv1_fp32_pretrained_model - Batch Size: 32]
pts/intel-tensorflow-1.0.0 mobilenetv1_fp32_pretrained_model.pb 64  [Model: mobilenetv1_fp32_pretrained_model - Batch Size: 64]
pts/intel-tensorflow-1.0.0 inceptionv4_int8_pretrained_model.pb 96  [Model: inceptionv4_int8_pretrained_model - Batch Size: 96]
pts/intel-tensorflow-1.0.0 mobilenetv1_fp32_pretrained_model.pb 16  [Model: mobilenetv1_fp32_pretrained_model - Batch Size: 16]
pts/intel-tensorflow-1.0.0 inceptionv4_int8_pretrained_model.pb 64  [Model: inceptionv4_int8_pretrained_model - Batch Size: 64]
pts/intel-tensorflow-1.0.0 inceptionv4_int8_pretrained_model.pb 32  [Model: inceptionv4_int8_pretrained_model - Batch Size: 32]
pts/intel-tensorflow-1.0.0 inceptionv4_int8_pretrained_model.pb 16  [Model: inceptionv4_int8_pretrained_model - Batch Size: 16]
pts/intel-tensorflow-1.0.0 inceptionv4_fp32_pretrained_model.pb 960  [Model: inceptionv4_fp32_pretrained_model - Batch Size: 960]
pts/intel-tensorflow-1.0.0 mobilenetv1_fp32_pretrained_model.pb 96  [Model: mobilenetv1_fp32_pretrained_model - Batch Size: 96]
pts/intel-tensorflow-1.0.0 inceptionv4_int8_pretrained_model.pb 256  [Model: inceptionv4_int8_pretrained_model - Batch Size: 256]
pts/intel-tensorflow-1.0.0 mobilenetv1_int8_pretrained_model.pb 16  [Model: mobilenetv1_int8_pretrained_model - Batch Size: 16]
pts/intel-tensorflow-1.0.0 inceptionv4_int8_pretrained_model.pb 960  [Model: inceptionv4_int8_pretrained_model - Batch Size: 960]
pts/intel-tensorflow-1.0.0 mobilenetv1_int8_pretrained_model.pb 32  [Model: mobilenetv1_int8_pretrained_model - Batch Size: 32]
pts/intel-tensorflow-1.0.0 mobilenetv1_fp32_pretrained_model.pb 512  [Model: mobilenetv1_fp32_pretrained_model - Batch Size: 512]
pts/intel-tensorflow-1.0.0 mobilenetv1_int8_pretrained_model.pb 64  [Model: mobilenetv1_int8_pretrained_model - Batch Size: 64]
pts/intel-tensorflow-1.0.0 mobilenetv1_int8_pretrained_model.pb 256  [Model: mobilenetv1_int8_pretrained_model - Batch Size: 256]
pts/intel-tensorflow-1.0.0 mobilenetv1_int8_pretrained_model.pb 96  [Model: mobilenetv1_int8_pretrained_model - Batch Size: 96]
pts/rocksdb-1.5.0 --benchmarks="readwhilewriting"  [Test: Read While Writing]
pts/rocksdb-1.5.0 --benchmarks="readrandomwriterandom"  [Test: Read Random Write Random]
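Each suite entry is a Phoronix test-profile identifier of the form pts/<name>-<version>, followed by its invocation arguments and display description. A minimal sketch of splitting such an identifier into its parts (parse_profile is a hypothetical helper written for illustration, not part of the Phoronix Test Suite):

```python
import re

def parse_profile(identifier: str):
    """Split e.g. 'pts/svt-av1-2.8.0' into ('pts', 'svt-av1', '2.8.0').

    The trailing dotted-number group is taken as the profile version;
    everything before it (which may itself contain hyphens) is the name.
    """
    repo, _, rest = identifier.partition("/")
    match = re.match(r"(.+?)-(\d+(?:\.\d+)*)$", rest)
    if match is None:
        raise ValueError(f"not a versioned profile id: {identifier!r}")
    return repo, match.group(1), match.group(2)

print(parse_profile("pts/build-linux-kernel-1.15.0"))
```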