new tests feb

2 x AMD EPYC 7773X 64-Core testing with an AMD DAYTONA_X (RYM1009B BIOS) and ASPEED on Ubuntu 20.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2302094-NE-NEWTESTSF98
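
The result ID above is a public OpenBenchmarking.org result file, so the comparison can be scripted. The lines below are a minimal sketch that assumes the Phoronix Test Suite is already installed; the benchmark command is the one quoted above, while list-saved-results and result-file-to-text are standard PTS subcommands rather than anything specific to this result file.

# Fetch the public result file and run the same tests locally; the new numbers
# are merged with the downloaded result for a side-by-side comparison.
phoronix-test-suite benchmark 2302094-NE-NEWTESTSF98

# Afterwards, list the locally saved results and dump one to plain text
# (<saved-result-name> is a placeholder for whatever name you gave the run).
phoronix-test-suite list-saved-results
phoronix-test-suite result-file-to-text <saved-result-name>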

This result file includes tests from the following categories:

C/C++ Compiler Tests: 2 tests
Creator Workloads: 3 tests
Database Test Suite: 4 tests
Encoding: 2 tests
Common Kernel Benchmarks: 2 tests
Multi-Core: 4 tests
Server: 4 tests
Video Encoding: 2 tests

Run Management

Result Identifier    Date Run            Test Duration
a                    February 09 2023    2 Hours, 11 Minutes
b                    February 09 2023    2 Hours, 11 Minutes
c                    February 09 2023    2 Hours, 12 Minutes

new tests feb Suite 1.0.0 (System) - Test suite extracted from new tests feb. It consists of the following test profiles and options:

pts/deepsparse-1.3.2 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario async  [Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario sync  [Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned90-none --scenario async  [Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned90-none --scenario sync  [Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario async  [Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario sync  [Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario async  [Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario sync  [Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async  [Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync  [Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario async  [Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario sync  [Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario async  [Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario sync  [Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario async  [Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario sync  [Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream]
pts/deepsparse-1.3.2 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario async  [Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream]
pts/deepsparse-1.3.2 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario sync  [Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream]

pts/aom-av1-3.6.0 --cpu-used=0 --limit=20 Bosphorus_3840x2160.y4m  [Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K]
pts/aom-av1-3.6.0 --cpu-used=4 Bosphorus_3840x2160.y4m  [Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K]
pts/aom-av1-3.6.0 --cpu-used=6 --rt Bosphorus_3840x2160.y4m  [Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K]
pts/aom-av1-3.6.0 --cpu-used=6 Bosphorus_3840x2160.y4m  [Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K]
pts/aom-av1-3.6.0 --cpu-used=8 --rt Bosphorus_3840x2160.y4m  [Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K]
pts/aom-av1-3.6.0 --cpu-used=9 --rt Bosphorus_3840x2160.y4m  [Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K]
pts/aom-av1-3.6.0 --cpu-used=10 --rt Bosphorus_3840x2160.y4m  [Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K]
pts/aom-av1-3.6.0 --cpu-used=0 --limit=20 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m  [Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p]
pts/aom-av1-3.6.0 --cpu-used=4 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m  [Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p]
pts/aom-av1-3.6.0 --cpu-used=6 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m  [Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p]
pts/aom-av1-3.6.0 --cpu-used=6 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m  [Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p]
pts/aom-av1-3.6.0 --cpu-used=8 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m  [Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p]
pts/aom-av1-3.6.0 --cpu-used=9 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m  [Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p]
pts/aom-av1-3.6.0 --cpu-used=10 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m  [Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p]

pts/vvenc-1.0.0 -i Bosphorus_3840x2160.y4m --preset fast  [Video Input: Bosphorus 4K - Video Preset: Fast]
pts/vvenc-1.0.0 -i Bosphorus_3840x2160.y4m --preset faster  [Video Input: Bosphorus 4K - Video Preset: Faster]
pts/vvenc-1.0.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset fast  [Video Input: Bosphorus 1080p - Video Preset: Fast]
pts/vvenc-1.0.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset faster  [Video Input: Bosphorus 1080p - Video Preset: Faster]

pts/embree-1.3.0 pathtracer -c crown/crown.ecs  [Binary: Pathtracer - Model: Crown]
pts/embree-1.3.0 pathtracer_ispc -c crown/crown.ecs  [Binary: Pathtracer ISPC - Model: Crown]
pts/embree-1.3.0 pathtracer -c asian_dragon/asian_dragon.ecs  [Binary: Pathtracer - Model: Asian Dragon]
pts/embree-1.3.0 pathtracer -c asian_dragon_obj/asian_dragon.ecs  [Binary: Pathtracer - Model: Asian Dragon Obj]
pts/embree-1.3.0 pathtracer_ispc -c asian_dragon/asian_dragon.ecs  [Binary: Pathtracer ISPC - Model: Asian Dragon]
pts/embree-1.3.0 pathtracer_ispc -c asian_dragon_obj/asian_dragon.ecs  [Binary: Pathtracer ISPC - Model: Asian Dragon Obj]

pts/clickhouse-1.2.0  [100M Rows Hits Dataset, First Run / Cold Cache]
pts/clickhouse-1.2.0  [100M Rows Hits Dataset, Second Run]
pts/clickhouse-1.2.0  [100M Rows Hits Dataset, Third Run]

pts/memcached-1.1.0 --ratio=1:5  [Set To Get Ratio: 1:5]
pts/memcached-1.1.0 --ratio=1:10  [Set To Get Ratio: 1:10]
pts/memcached-1.1.0 --ratio=1:100  [Set To Get Ratio: 1:100]

pts/rocksdb-1.4.0 --benchmarks="fillrandom"  [Test: Random Fill]
pts/rocksdb-1.4.0 --benchmarks="readrandom"  [Test: Random Read]
pts/rocksdb-1.4.0 --benchmarks="updaterandom"  [Test: Update Random]
pts/rocksdb-1.4.0 --benchmarks="fillseq"  [Test: Sequential Fill]
pts/rocksdb-1.4.0 --benchmarks="fillsync"  [Test: Random Fill Sync]
pts/rocksdb-1.4.0 --benchmarks="readwhilewriting"  [Test: Read While Writing]
pts/rocksdb-1.4.0 --benchmarks="readrandomwriterandom"  [Test: Read Random Write Random]

pts/pgbench-1.13.0 -s 1 -c 500 -S  [Scaling Factor: 1 - Clients: 500 - Mode: Read Only]
pts/pgbench-1.13.0 -s 1 -c 500 -S  [Scaling Factor: 1 - Clients: 500 - Mode: Read Only - Average Latency]
pts/pgbench-1.13.0 -s 1 -c 800 -S  [Scaling Factor: 1 - Clients: 800 - Mode: Read Only]
pts/pgbench-1.13.0 -s 1 -c 800 -S  [Scaling Factor: 1 - Clients: 800 - Mode: Read Only - Average Latency]
pts/pgbench-1.13.0 -s 1 -c 1000 -S  [Scaling Factor: 1 - Clients: 1000 - Mode: Read Only]
pts/pgbench-1.13.0 -s 1 -c 1000 -S  [Scaling Factor: 1 - Clients: 1000 - Mode: Read Only - Average Latency]
pts/pgbench-1.13.0 -s 1 -c 500  [Scaling Factor: 1 - Clients: 500 - Mode: Read Write]
pts/pgbench-1.13.0 -s 1 -c 500  [Scaling Factor: 1 - Clients: 500 - Mode: Read Write - Average Latency]
pts/pgbench-1.13.0 -s 1 -c 5000 -S  [Scaling Factor: 1 - Clients: 5000 - Mode: Read Only]
pts/pgbench-1.13.0 -s 1 -c 800  [Scaling Factor: 1 - Clients: 800 - Mode: Read Write]
pts/pgbench-1.13.0 -s 1 -c 800  [Scaling Factor: 1 - Clients: 800 - Mode: Read Write - Average Latency]
pts/pgbench-1.13.0 -s 1 -c 1000  [Scaling Factor: 1 - Clients: 1000 - Mode: Read Write]
pts/pgbench-1.13.0 -s 1 -c 1000  [Scaling Factor: 1 - Clients: 1000 - Mode: Read Write - Average Latency]
pts/pgbench-1.13.0 -s 1 -c 5000  [Scaling Factor: 1 - Clients: 5000 - Mode: Read Write]
pts/pgbench-1.13.0 -s 100 -c 500 -S  [Scaling Factor: 100 - Clients: 500 - Mode: Read Only]
pts/pgbench-1.13.0 -s 100 -c 500 -S  [Scaling Factor: 100 - Clients: 500 - Mode: Read Only - Average Latency]
pts/pgbench-1.13.0 -s 100 -c 800 -S  [Scaling Factor: 100 - Clients: 800 - Mode: Read Only]
pts/pgbench-1.13.0 -s 100 -c 800 -S  [Scaling Factor: 100 - Clients: 800 - Mode: Read Only - Average Latency]
pts/pgbench-1.13.0 -s 100 -c 1000 -S  [Scaling Factor: 100 - Clients: 1000 - Mode: Read Only]
pts/pgbench-1.13.0 -s 100 -c 1000 -S  [Scaling Factor: 100 - Clients: 1000 - Mode: Read Only - Average Latency]
pts/pgbench-1.13.0 -s 100 -c 500  [Scaling Factor: 100 - Clients: 500 - Mode: Read Write]
pts/pgbench-1.13.0 -s 100 -c 500  [Scaling Factor: 100 - Clients: 500 - Mode: Read Write - Average Latency]
pts/pgbench-1.13.0 -s 100 -c 5000 -S  [Scaling Factor: 100 - Clients: 5000 - Mode: Read Only]
pts/pgbench-1.13.0 -s 100 -c 800  [Scaling Factor: 100 - Clients: 800 - Mode: Read Write]
pts/pgbench-1.13.0 -s 100 -c 800  [Scaling Factor: 100 - Clients: 800 - Mode: Read Write - Average Latency]
pts/pgbench-1.13.0 -s 100 -c 1000  [Scaling Factor: 100 - Clients: 1000 - Mode: Read Write]
pts/pgbench-1.13.0 -s 100 -c 1000  [Scaling Factor: 100 - Clients: 1000 - Mode: Read Write - Average Latency]
pts/pgbench-1.13.0 -s 100 -c 5000  [Scaling Factor: 100 - Clients: 5000 - Mode: Read Write]
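
Each entry above pairs a pts/ test profile with the arguments and option combination used in this result file. As a minimal sketch of exercising pieces of the suite on their own (the profile names come from the listing; running them interactively or via batch mode is generic Phoronix Test Suite usage, not something recorded in this file):

# Install and run a single profile from the suite; PTS prompts interactively
# for the option combinations listed above (encoder mode, input, scenario,
# client count, and so on).
phoronix-test-suite install pts/aom-av1-3.6.0
phoronix-test-suite benchmark pts/aom-av1-3.6.0

# For unattended runs across several profiles, configure batch mode once and
# then use it to skip the interactive prompts.
phoronix-test-suite batch-setup
phoronix-test-suite batch-benchmark pts/vvenc-1.0.0 pts/embree-1.3.0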