zen 1 epyc

2 x AMD EPYC 7601 32-Core testing with a Dell 02MJ3T (1.2.5 BIOS) and Matrox G200eW3 on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2211196-NE-ZEN1EPYC153
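
For example, the comparison can be run from a shell as shown below; a minimal sketch, assuming the Phoronix Test Suite is already installed, with the optional single-profile run (pts/nginx-3.0.0 here) included purely as an illustration:

    # Fetch this public result file and run the same tests for a side-by-side comparison
    phoronix-test-suite benchmark 2211196-NE-ZEN1EPYC153

    # Alternatively, benchmark a single test profile from the suite on its own
    phoronix-test-suite benchmark pts/nginx-3.0.0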

This result file spans the following test categories (a single test profile may be counted under more than one category):

AV1: 3 tests
C++ Boost Tests: 2 tests
Timed Code Compilation: 5 tests
C/C++ Compiler Tests: 9 tests
CPU Massive: 13 tests
Creator Workloads: 17 tests
Cryptocurrency Benchmarks, CPU Mining Tests: 2 tests
Cryptography: 4 tests
Database Test Suite: 2 tests
Encoding: 5 tests
Game Development: 2 tests
HPC - High Performance Computing: 13 tests
Imaging: 6 tests
Common Kernel Benchmarks: 2 tests
Machine Learning: 8 tests
Molecular Dynamics: 2 tests
Multi-Core: 19 tests
NVIDIA GPU Compute: 2 tests
Intel oneAPI: 3 tests
OpenMPI Tests: 4 tests
Programmer / Developer System Benchmarks: 7 tests
Python Tests: 8 tests
Renderers: 3 tests
Scientific Computing: 2 tests
Server: 4 tests
Server CPU Tests: 6 tests
Single-Threaded: 2 tests
Video Encoding: 4 tests
Common Workstation Benchmarks: 2 tests

Run Management

Result Identifier: a - Date Run: November 18 2022 - Test Duration: 16 Hours, 26 Minutes
Result Identifier: b - Date Run: November 19 2022 - Test Duration: 15 Hours, 54 Minutes
Average Test Duration: 16 Hours, 10 Minutes
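
The a and b runs can also be pulled down and reviewed offline rather than through the web viewer; a brief sketch, with sub-command names recalled from the PTS result-management interface (verify against phoronix-test-suite help before relying on them):

    # Save a local copy of this public result file
    phoronix-test-suite clone-result 2211196-NE-ZEN1EPYC153

    # Dump the saved result to plain text for offline review
    phoronix-test-suite result-file-to-text 2211196-NE-ZEN1EPYC153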


zen 1 epyc Suite 1.0.0 - System test suite extracted from zen 1 epyc. The suite comprises the following test profiles and configurations, grouped here by profile; a sketch of scripting a comparable run follows the listing.

pts/tensorflow-2.0.0 - Device: CPU; Models: ResNet-50, GoogLeNet, AlexNet; Batch Sizes: 16, 32, 64, 256, 512
pts/ai-benchmark-1.0.2 - Device AI Score; Device Training Score; Device Inference Score
pts/brl-cad-1.3.0 - VGR Performance Metric
pts/openfoam-1.2.0 - Input: drivaerFastback; Mesh Sizes: Small, Medium; Metrics: Execution Time, Mesh Time
pts/smhasher-1.1.0 - Hashes: SHA3-256, FarmHash128, MeowHash x86_64 AES-NI, Spooky32, FarmHash32 x86_64 AVX, fasthash32, t1ha2_atonce, t1ha0_aes_avx2 x86_64, wyhash
pts/ncnn-1.4.0 - Target: CPU; Models: FastestDet, vision_transformer, regnety_400m, squeezenet_ssd, yolov4-tiny, resnet50, alexnet, resnet18, vgg16, googlenet, blazeface, efficientnet-b0, mnasnet, shufflenet-v2, mobilenet-v3 (CPU-v3-v3), mobilenet-v2 (CPU-v2-v2), mobilenet
pts/mnn-2.1.0 - Models: inception-v3, mobilenet-v1-1.0, MobileNetV2_224, SqueezeNetV1.0, resnet-v2-50, squeezenetv1.1, mobilenetV3, nasnet
pts/jpegxl-1.5.0 - Inputs: PNG, JPEG; Qualities: 80, 90, 100
pts/jpegxl-decode-1.5.0 - CPU Threads: 1, All
pts/webp-1.2.0 - Encode Settings: Default; Quality 100; Quality 100, Highest Compression; Quality 100, Lossless; Quality 100, Lossless, Highest Compression
pts/webp2-1.2.0 - Encode Settings: Default; Quality 75, Compression Effort 7; Quality 95, Compression Effort 7; Quality 100, Compression Effort 5; Quality 100, Lossless Compression
pts/avifenc-1.3.0 - Encoder Speeds: 0, 2, 6, 6 Lossless, 10 Lossless
pts/blender-3.3.1 - Compute: CPU-Only; Blend Files: BMW27, Fishy Cat, Classroom, Pabellon Barcelona, Barbershop
pts/ffmpeg-3.0.0 - Encoders: libx264, libx265; Scenarios: Live, Upload, Video On Demand, Platform
pts/aom-av1-3.5.0 - Inputs: Bosphorus 4K, Bosphorus 1080p; Encoder Modes: Speed 0 Two-Pass, Speed 4 Two-Pass, Speed 6 Two-Pass, Speed 6 Realtime, Speed 8 Realtime, Speed 9 Realtime, Speed 10 Realtime
pts/svt-av1-2.6.0 - Inputs: Bosphorus 4K, Bosphorus 1080p; Encoder Modes: Preset 4, Preset 8, Preset 10, Preset 12
pts/build-python-1.0.0 - Build Configurations: Default; Released Build, PGO + LTO Optimized
pts/build-nodejs-1.2.0 - Time To Compile
pts/build-erlang-1.2.0 - Time To Compile
pts/build-wasmer-1.2.0 - Time To Compile
pts/build-php-1.6.0 - Time To Compile
pts/lammps-1.4.0 - Models: 20k Atoms, Rhodopsin Protein
pts/openradioss-1.0.0 - Models: Bumper Beam, Cell Phone Drop Test, Bird Strike on Windshield, Rubber O-Ring Seal Installation, INIVOL and Fluid Structure Interaction Drop Container
pts/ospray-studio-1.1.0 - Renderer: Path Tracer; Cameras: 1, 2, 3; Resolutions: 1080p, 4K; Samples Per Pixel: 1, 16, 32
pts/minibude-1.0.0 - Implementation: OpenMP; Input Decks: BM1, BM2
pts/nekrs-1.0.0 - Input: TurboPipe Periodic
pts/blosc-1.2.0 - Tests: blosclz bitshuffle, blosclz shuffle
pts/node-web-tooling-1.0.1
pts/cpuminer-opt-1.6.0 - Algorithms: Garlicoin; Myriad-Groestl; Skeincoin; x25x; Ringcoin; Quad SHA-256, Pyrite; scrypt; Deepcoin; Blake-2 S; Triple SHA-256, Onecoin; Magi; LBC, LBRY Credits
pts/xmrig-1.1.0 - Variants: Monero, Wownero; Hash Count: 1M
pts/spacy-1.0.0 - Models: en_core_web_trf, en_core_web_lg
pts/deepsparse-1.0.1 - Models: NLP Document Classification (oBERT base uncased on IMDB), NLP Question Answering (BERT base uncased SQuaD 12layer Pruned90), NLP Token Classification (BERT base uncased conll2003), NLP Text Classification (BERT base uncased SST2), NLP Text Classification (DistilBERT mnli), CV Detection (YOLOv5s COCO), CV Classification (ResNet-50 ImageNet); Scenarios: Asynchronous Multi-Stream, Synchronous Single-Stream
pts/onednn-2.7.0 - Engine: CPU; Harnesses: IP Shapes 1D, IP Shapes 3D, Convolution Batch Shapes Auto, Deconvolution Batch shapes_1d, Deconvolution Batch shapes_3d, Matrix Multiply Batch Shapes Transformer, Recurrent Neural Network Training, Recurrent Neural Network Inference; Data Types: f32, u8s8f32, bf16bf16bf16
pts/openvino-1.1.0 - Device: CPU; Models: Face Detection FP16, Face Detection FP16-INT8, Person Detection FP16, Person Detection FP32, Person Vehicle Bike Detection FP16, Vehicle Detection FP16, Vehicle Detection FP16-INT8, Weld Porosity Detection FP16, Weld Porosity Detection FP16-INT8, Machine Translation EN To DE FP16, Age Gender Recognition Retail 0013 FP16, Age Gender Recognition Retail 0013 FP16-INT8
pts/srsran-1.2.0 - Tests: OFDM_Test; 4G PHY_DL_Test 100 PRB MIMO 64-QAM; 4G PHY_DL_Test 100 PRB MIMO 256-QAM; 4G PHY_DL_Test 100 PRB SISO 64-QAM; 4G PHY_DL_Test 100 PRB SISO 256-QAM; 5G PHY_DL_NR Test 52 PRB SISO 64-QAM
pts/nginx-3.0.0 - Connections: 1, 20, 100, 200, 500, 1000, 4000
pts/clickhouse-1.1.0 - 100M Rows Web Analytics Dataset: First Run / Cold Cache, Second Run, Third Run
pts/rocksdb-1.3.0 - Tests: Update Random, Read Random Write Random, Read While Writing, Random Read
pts/graphics-magick-2.1.0 - Operations: Resizing, Sharpen, Rotate, Noise-Gaussian, Enhanced, HWB Color Space, Swirl
pts/encodec-1.0.1 - Target Bandwidths: 1.5, 3, 6, 24 kbps
pts/encode-flac-1.8.1 - WAV To FLAC
pts/astcenc-1.4.0 - Presets: Fast, Medium, Thorough, Exhaustive
pts/stress-ng-1.6.0 - Tests: Atomic, Context Switching, Socket Activity, SENDFILE, Futex, IO_uring, NUMA, Memory Copying, System V Message Passing, Forking, CPU Cache, Malloc, MEMFD, Crypto, Glibc Qsort Data Sorting, CPU Stress, Glibc C String Functions, MMAP, Vector Math, Matrix Math, Semaphores, Mutex, x86_64 RdRand
pts/aircrack-ng-1.3.0
pts/natron-1.1.0 - Input: Spaceship
pts/primesieve-1.9.0 - Lengths: 1e12, 1e13
pts/y-cruncher-1.2.0 - Pi Digits To Calculate: 500M, 1B
pts/unpack-linux-1.2.0 - linux-5.19.tar.xz
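
The a and b identifiers above correspond to two complete passes over this suite on consecutive days. A minimal sketch of scripting such passes non-interactively is shown below, assuming PTS batch mode has been configured via batch-setup; the TEST_RESULTS_NAME and TEST_RESULTS_IDENTIFIER values are illustrative, and saving a second pass under the same result name appends it as an additional identifier:

    # One-time configuration of batch (non-interactive) mode defaults
    phoronix-test-suite batch-setup

    # First pass, saved under result identifier "a"
    TEST_RESULTS_NAME="zen 1 epyc" TEST_RESULTS_IDENTIFIER="a" \
        phoronix-test-suite batch-benchmark 2211196-NE-ZEN1EPYC153

    # Second pass, appended to the same result file as identifier "b"
    TEST_RESULTS_NAME="zen 1 epyc" TEST_RESULTS_IDENTIFIER="b" \
        phoronix-test-suite batch-benchmark 2211196-NE-ZEN1EPYC153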