eptc-7f32

AMD EPYC 7F32 8-Core testing with an ASRockRack EPYCD8 (P2.40 BIOS) and ASPEED on Debian 11 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2211207-NE-EPTC7F32776
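
For scripted or repeated comparisons, the same command can be driven from a small script. Below is a minimal sketch in Python, assuming the Phoronix Test Suite is already installed and on the PATH; the result ID is the one given above.

    #!/usr/bin/env python3
    # Minimal sketch: launch the comparison run for this result file from Python.
    # Assumes phoronix-test-suite is installed and available on PATH.
    import subprocess

    RESULT_ID = "2211207-NE-EPTC7F32776"  # the public result file shown on this page

    # Equivalent to typing `phoronix-test-suite benchmark 2211207-NE-EPTC7F32776`
    # in a terminal; PTS will still prompt interactively for test selection and options.
    subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)
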
This result file includes tests within the following categories:

AV1: 2 Tests
C++ Boost Tests: 2 Tests
Timed Code Compilation: 4 Tests
C/C++ Compiler Tests: 7 Tests
Compression Tests: 2 Tests
CPU Massive: 12 Tests
Creator Workloads: 14 Tests
Cryptocurrency Benchmarks, CPU Mining Tests: 2 Tests
Cryptography: 3 Tests
Encoding: 4 Tests
HPC - High Performance Computing: 11 Tests
Imaging: 6 Tests
Common Kernel Benchmarks: 2 Tests
Machine Learning: 7 Tests
Multi-Core: 14 Tests
Intel oneAPI: 2 Tests
OpenMPI Tests: 3 Tests
Programmer / Developer System Benchmarks: 5 Tests
Python Tests: 8 Tests
Renderers: 2 Tests
Server: 2 Tests
Server CPU Tests: 7 Tests
Single-Threaded: 2 Tests
Video Encoding: 3 Tests
Common Workstation Benchmarks: 2 Tests

Run Management

Highlight
Result
Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
EPYC 7F32
November 20 2022
  6 Hours, 7 Minutes
AMD EPYC 7F32
November 20 2022
  6 Hours, 34 Minutes
Invert Hiding All Results Option
  6 Hours, 20 Minutes
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


eptc-7f32 Suite 1.0.0 (System): test suite extracted from eptc-7f32. Component test profiles, their arguments, and option descriptions:

pts/tensorflow-2.0.0 --device cpu --batch_size=256 --model=resnet50 (Device: CPU - Batch Size: 256 - Model: ResNet-50)
pts/webp2-1.2.0 -q 100 -effort 9 (Encode Settings: Quality 100, Lossless Compression)
pts/brl-cad-1.3.0 (VGR Performance Metric)
pts/tensorflow-2.0.0 --device cpu --batch_size=256 --model=googlenet (Device: CPU - Batch Size: 256 - Model: GoogLeNet)
pts/tensorflow-2.0.0 --device cpu --batch_size=512 --model=alexnet (Device: CPU - Batch Size: 512 - Model: AlexNet)
pts/tensorflow-2.0.0 --device cpu --batch_size=64 --model=resnet50 (Device: CPU - Batch Size: 64 - Model: ResNet-50)
pts/openradioss-1.0.0 fsi_drop_container_0000.rad fsi_drop_container_0001.rad (Model: INIVOL and Fluid Structure Interaction Drop Container)
pts/build-nodejs-1.2.0 (Time To Compile)
pts/smhasher-1.1.0 --test=Speed sha3-256 (Hash: SHA3-256)
pts/minibude-1.0.0 --deck ../data/bm2 --iterations 10 (Implementation: OpenMP - Input Deck: BM2)
pts/webp2-1.2.0 -q 95 -effort 7 (Encode Settings: Quality 95, Compression Effort 7)
pts/jpegxl-1.5.0 --lossless_jpeg=0 sample-photo-6000x4000.JPG out.jxl -q 100 --num_reps 10 (Input: JPEG - Quality: 100)
pts/jpegxl-1.5.0 sample-4.png out.jxl -q 100 --num_reps 10 (Input: PNG - Quality: 100)
pts/tensorflow-2.0.0 --device cpu --batch_size=32 --model=resnet50 (Device: CPU - Batch Size: 32 - Model: ResNet-50)
pts/tensorflow-2.0.0 --device cpu --batch_size=256 --model=alexnet (Device: CPU - Batch Size: 256 - Model: AlexNet)
pts/build-python-1.0.0 --enable-optimizations --with-lto (Build Configuration: Released Build, PGO + LTO Optimized)
pts/openradioss-1.0.0 BIRD_WINDSHIELD_v1_0000.rad BIRD_WINDSHIELD_v1_0001.rad (Model: Bird Strike on Windshield)
pts/ffmpeg-3.0.0 --encoder=libx264 upload (Encoder: libx264 - Scenario: Upload)
pts/ffmpeg-3.0.0 --encoder=libx265 vod (Encoder: libx265 - Scenario: Video On Demand)
pts/ffmpeg-3.0.0 --encoder=libx265 platform (Encoder: libx265 - Scenario: Platform)
pts/ffmpeg-3.0.0 --encoder=libx265 upload (Encoder: libx265 - Scenario: Upload)
pts/mnn-2.1.0 (Model: inception-v3)
pts/mnn-2.1.0 (Model: mobilenet-v1-1.0)
pts/mnn-2.1.0 (Model: MobileNetV2_224)
pts/mnn-2.1.0 (Model: SqueezeNetV1.0)
pts/mnn-2.1.0 (Model: resnet-v2-50)
pts/mnn-2.1.0 (Model: squeezenetv1.1)
pts/mnn-2.1.0 (Model: mobilenetV3)
pts/mnn-2.1.0 (Model: nasnet)
pts/openfoam-1.2.0 incompressible/simpleFoam/drivaerFastback/ -m S (Input: drivaerFastback, Small Mesh Size - Execution Time)
pts/openfoam-1.2.0 incompressible/simpleFoam/drivaerFastback/ -m S (Input: drivaerFastback, Small Mesh Size - Mesh Time)
pts/tensorflow-2.0.0 --device cpu --batch_size=64 --model=googlenet (Device: CPU - Batch Size: 64 - Model: GoogLeNet)
pts/ffmpeg-3.0.0 --encoder=libx264 vod (Encoder: libx264 - Scenario: Video On Demand)
pts/ffmpeg-3.0.0 --encoder=libx264 platform (Encoder: libx264 - Scenario: Platform)
pts/webp2-1.2.0 -q 75 -effort 7 (Encode Settings: Quality 75, Compression Effort 7)
pts/openradioss-1.0.0 RUBBER_SEAL_IMPDISP_GEOM_0000.rad RUBBER_SEAL_IMPDISP_GEOM_0001.rad (Model: Rubber O-Ring Seal Installation)
pts/scikit-learn-1.2.0 random_projections.py --n-times 100 (Benchmark: Sparse Random Projections, 100 Iterations)
pts/tensorflow-2.0.0 --device cpu --batch_size=16 --model=resnet50 (Device: CPU - Batch Size: 16 - Model: ResNet-50)
pts/openradioss-1.0.0 Bumper_Beam_AP_meshed_0000.rad Bumper_Beam_AP_meshed_0001.rad (Model: Bumper Beam)
pts/avifenc-1.3.0 -s 0 (Encoder Speed: 0)
pts/blender-3.3.1 -b ../bmw27_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU (Blend File: BMW27 - Compute: CPU-Only)
pts/jpegxl-1.5.0 --lossless_jpeg=0 sample-photo-6000x4000.JPG out.jxl -q 80 --num_reps 50 (Input: JPEG - Quality: 80)
pts/xmrig-1.1.0 --bench=1M (Variant: Monero - Hash Count: 1M)
pts/jpegxl-1.5.0 sample-4.png out.jxl -q 80 --num_reps 50 (Input: PNG - Quality: 80)
pts/aom-av1-3.5.0 --cpu-used=4 Bosphorus_3840x2160.y4m (Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K)
pts/xmrig-1.1.0 -a rx/wow --bench=1M (Variant: Wownero - Hash Count: 1M)
pts/openradioss-1.0.0 Cell_Phone_Drop_0000.rad Cell_Phone_Drop_0001.rad (Model: Cell Phone Drop Test)
pts/build-erlang-1.2.0 (Time To Compile)
pts/scikit-learn-1.2.0 mnist.py (Benchmark: MNIST Dataset)
pts/jpegxl-1.5.0 --lossless_jpeg=0 sample-photo-6000x4000.JPG out.jxl -q 90 --num_reps 40 (Input: JPEG - Quality: 90)
pts/jpegxl-1.5.0 sample-4.png out.jxl -q 90 --num_reps 40 (Input: PNG - Quality: 90)
pts/tensorflow-2.0.0 --device cpu --batch_size=32 --model=googlenet (Device: CPU - Batch Size: 32 - Model: GoogLeNet)
pts/spacy-1.0.0 (Model: en_core_web_trf)
pts/spacy-1.0.0 (Model: en_core_web_lg)
pts/aom-av1-3.5.0 --cpu-used=0 --limit=20 Bosphorus_3840x2160.y4m (Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K)
pts/tensorflow-2.0.0 --device cpu --batch_size=64 --model=alexnet (Device: CPU - Batch Size: 64 - Model: AlexNet)
pts/ffmpeg-3.0.0 --encoder=libx265 live (Encoder: libx265 - Scenario: Live)
pts/onednn-2.7.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=u8s8f32 --engine=cpu (Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU)
pts/onednn-2.7.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=bf16bf16bf16 --engine=cpu (Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU)
pts/nginx-3.0.0 -c 500 (Connections: 500)
pts/nginx-3.0.0 -c 1000 (Connections: 1000)
pts/nginx-3.0.0 -c 100 (Connections: 100)
pts/nginx-3.0.0 -c 200 (Connections: 200)
pts/nginx-3.0.0 -c 20 (Connections: 20)
pts/onednn-2.7.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=f32 --engine=cpu (Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU)
pts/avifenc-1.3.0 -s 2 (Encoder Speed: 2)
pts/aom-av1-3.5.0 --cpu-used=6 Bosphorus_3840x2160.y4m (Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K)
pts/onednn-2.7.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=f32 --engine=cpu (Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU)
pts/onednn-2.7.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=u8s8f32 --engine=cpu (Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU)
pts/onednn-2.7.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=bf16bf16bf16 --engine=cpu (Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU)
pts/minibude-1.0.0 --deck ../data/bm1 --iterations 500 (Implementation: OpenMP - Input Deck: BM1)
pts/openvino-1.1.0 -m models/intel/person-detection-0106/FP16/person-detection-0106.xml -d CPU (Model: Person Detection FP16 - Device: CPU)
pts/openvino-1.1.0 -m models/intel/face-detection-0206/FP16/face-detection-0206.xml -d CPU (Model: Face Detection FP16 - Device: CPU)
pts/deepsparse-1.0.1 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario async (Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream)
pts/openvino-1.1.0 -m models/intel/person-detection-0106/FP32/person-detection-0106.xml -d CPU (Model: Person Detection FP32 - Device: CPU)
pts/build-php-1.6.0 (Time To Compile)
pts/openvino-1.1.0 -m models/intel/face-detection-0206/FP16-INT8/face-detection-0206.xml -d CPU (Model: Face Detection FP16-INT8 - Device: CPU)
pts/deepsparse-1.0.1 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario async (Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream)
pts/openvino-1.1.0 -m models/intel/machine-translation-nar-en-de-0002/FP16/machine-translation-nar-en-de-0002.xml -d CPU (Model: Machine Translation EN To DE FP16 - Device: CPU)
pts/jpegxl-decode-1.5.0 --num_threads=1 --num_reps=100 (CPU Threads: 1)
pts/openvino-1.1.0 -m models/intel/person-vehicle-bike-detection-2004/FP16/person-vehicle-bike-detection-2004.xml -d CPU (Model: Person Vehicle Bike Detection FP16 - Device: CPU)
pts/deepsparse-1.0.1 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario async (Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream)
pts/openvino-1.1.0 -m models/intel/weld-porosity-detection-0001/FP16/weld-porosity-detection-0001.xml -d CPU (Model: Weld Porosity Detection FP16 - Device: CPU)
pts/openvino-1.1.0 -m models/intel/vehicle-detection-0202/FP16-INT8/vehicle-detection-0202.xml -d CPU (Model: Vehicle Detection FP16-INT8 - Device: CPU)
pts/openvino-1.1.0 -m models/intel/vehicle-detection-0202/FP16/vehicle-detection-0202.xml -d CPU (Model: Vehicle Detection FP16 - Device: CPU)
pts/openvino-1.1.0 -m models/intel/weld-porosity-detection-0001/FP16-INT8/weld-porosity-detection-0001.xml -d CPU (Model: Weld Porosity Detection FP16-INT8 - Device: CPU)
pts/deepsparse-1.0.1 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario async (Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream)
pts/graphics-magick-2.1.0 -sharpen 0x2.0 (Operation: Sharpen)
pts/openvino-1.1.0 -m models/intel/age-gender-recognition-retail-0013/FP16/age-gender-recognition-retail-0013.xml -d CPU (Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU)
pts/openvino-1.1.0 -m models/intel/age-gender-recognition-retail-0013/FP16-INT8/age-gender-recognition-retail-0013.xml -d CPU (Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU)
pts/graphics-magick-2.1.0 -operator all Noise-Gaussian 30% (Operation: Noise-Gaussian)
pts/graphics-magick-2.1.0 -enhance (Operation: Enhanced)
pts/rocksdb-1.3.0 --benchmarks="readrandomwriterandom" (Test: Read Random Write Random)
pts/rocksdb-1.3.0 --benchmarks="updaterandom" (Test: Update Random)
pts/rocksdb-1.3.0 --benchmarks="readwhilewriting" (Test: Read While Writing)
pts/graphics-magick-2.1.0 -swirl 90 (Operation: Swirl)
pts/rocksdb-1.3.0 --benchmarks="readrandom" (Test: Random Read)
pts/graphics-magick-2.1.0 -resize 50% (Operation: Resizing)
pts/graphics-magick-2.1.0 -rotate 90 (Operation: Rotate)
pts/graphics-magick-2.1.0 -colorspace HWB (Operation: HWB Color Space)
pts/aom-av1-3.5.0 --cpu-used=4 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p)
pts/tensorflow-2.0.0 --device cpu --batch_size=16 --model=googlenet (Device: CPU - Batch Size: 16 - Model: GoogLeNet)
pts/tensorflow-2.0.0 --device cpu --batch_size=32 --model=alexnet (Device: CPU - Batch Size: 32 - Model: AlexNet)
pts/deepsparse-1.0.1 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario sync (Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream)
pts/ffmpeg-3.0.0 --encoder=libx264 live (Encoder: libx264 - Scenario: Live)
pts/deepsparse-1.0.1 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario sync (Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.0.1 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario sync (Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.0.1 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario sync (Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream)
pts/natron-1.1.0 Natron_2.3.12_Spaceship/Natron_project/Spaceship_Natron.ntp (Input: Spaceship)
pts/y-cruncher-1.2.0 1b (Pi Digits To Calculate: 1B)
pts/srsran-1.2.0 lib/test/phy/phy_dl_test -p 100 -s 20000 -m 27 -t 4 -q (Test: 4G PHY_DL_Test 100 PRB MIMO 256-QAM)
pts/deepsparse-1.0.1 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario async (Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream)
pts/webp-1.2.0 -q 100 -lossless -m 6 (Encode Settings: Quality 100, Lossless, Highest Compression)
pts/deepsparse-1.0.1 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario async (Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream)
pts/srsran-1.2.0 lib/src/phy/dft/test/ofdm_test -N 2048 -n 100 -r 500000 (Test: OFDM_Test)
pts/encodec-1.0.1 -b 24 (Target Bandwidth: 24 kbps)
pts/deepsparse-1.0.1 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async (Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.0.1 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario sync (Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.0.1 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario sync (Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream)
pts/scikit-learn-1.2.0 tsne_mnist.py (Benchmark: TSNE MNIST Dataset)
pts/srsran-1.2.0 lib/test/phy/phy_dl_test -p 100 -s 20000 -m 28 -t 4 (Test: 4G PHY_DL_Test 100 PRB MIMO 64-QAM)
pts/aom-av1-3.5.0 --cpu-used=0 --limit=20 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p)
pts/deepsparse-1.0.1 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync (Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream)
pts/encodec-1.0.1 -b 3 (Target Bandwidth: 3 kbps)
pts/tensorflow-2.0.0 --device cpu --batch_size=16 --model=alexnet (Device: CPU - Batch Size: 16 - Model: AlexNet)
pts/encodec-1.0.1 -b 6 (Target Bandwidth: 6 kbps)
pts/encodec-1.0.1 -b 1.5 (Target Bandwidth: 1.5 kbps)
pts/cpuminer-opt-1.6.0 -a m7m (Algorithm: Magi)
pts/cpuminer-opt-1.6.0 -a x25x (Algorithm: x25x)
pts/cpuminer-opt-1.6.0 -a minotaur (Algorithm: Ringcoin)
pts/stress-ng-1.6.0 --qsort -1 (Test: Glibc Qsort Data Sorting)
pts/stress-ng-1.6.0 --switch -1 (Test: Context Switching)
pts/stress-ng-1.6.0 --malloc -1 (Test: Malloc)
pts/stress-ng-1.6.0 --numa -1 (Test: NUMA)
pts/stress-ng-1.6.0 --msg -1 (Test: System V Message Passing)
pts/stress-ng-1.6.0 --io-uring -1 (Test: IO_uring)
pts/stress-ng-1.6.0 --atomic -1 (Test: Atomic)
pts/stress-ng-1.6.0 --mmap -1 (Test: MMAP)
pts/stress-ng-1.6.0 --memcpy -1 (Test: Memory Copying)
pts/stress-ng-1.6.0 --futex -1 (Test: Futex)
pts/stress-ng-1.6.0 --matrix -1 (Test: Matrix Math)
pts/stress-ng-1.6.0 --cache -1 (Test: CPU Cache)
pts/stress-ng-1.6.0 --fork -1 (Test: Forking)
pts/stress-ng-1.6.0 --memfd -1 (Test: MEMFD)
pts/stress-ng-1.6.0 --str -1 (Test: Glibc C String Functions)
pts/stress-ng-1.6.0 --sock -1 (Test: Socket Activity)
pts/stress-ng-1.6.0 --vecmath -1 (Test: Vector Math)
pts/stress-ng-1.6.0 --sem -1 (Test: Semaphores)
pts/stress-ng-1.6.0 --cpu -1 --cpu-method all (Test: CPU Stress)
pts/stress-ng-1.6.0 --sendfile -1 (Test: SENDFILE)
pts/stress-ng-1.6.0 --crypt -1 (Test: Crypto)
pts/stress-ng-1.6.0 --mutex -1 (Test: Mutex)
pts/cpuminer-opt-1.6.0 -a myr-gr (Algorithm: Myriad-Groestl)
pts/cpuminer-opt-1.6.0 -a allium (Algorithm: Garlicoin)
pts/cpuminer-opt-1.6.0 -a sha256t (Algorithm: Triple SHA-256, Onecoin)
pts/cpuminer-opt-1.6.0 -a lbry (Algorithm: LBC, LBRY Credits)
pts/cpuminer-opt-1.6.0 -a deep (Algorithm: Deepcoin)
pts/cpuminer-opt-1.6.0 -a sha256q (Algorithm: Quad SHA-256, Pyrite)
pts/cpuminer-opt-1.6.0 -a blake2s (Algorithm: Blake-2 S)
pts/cpuminer-opt-1.6.0 -a skein (Algorithm: Skeincoin)
pts/aom-av1-3.5.0 --cpu-used=6 --rt Bosphorus_3840x2160.y4m (Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K)
pts/cpuminer-opt-1.6.0 -a scrypt (Algorithm: scrypt)
pts/compress-7zip-1.10.0 (Test: Decompression Rating)
pts/compress-7zip-1.10.0 (Test: Compression Rating)
pts/jpegxl-decode-1.5.0 --num_reps=200 (CPU Threads: All)
pts/aom-av1-3.5.0 --cpu-used=6 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p)
pts/srsran-1.2.0 lib/test/phy/phy_dl_nr_test -P 52 -p 52 -m 28 -n 20000 (Test: 5G PHY_DL_NR Test 52 PRB SISO 64-QAM)
pts/y-cruncher-1.2.0 500m (Pi Digits To Calculate: 500M)
pts/nekrs-1.0.0 turbPipePeriodic turbPipe.par (Input: TurboPipe Periodic)
pts/srsran-1.2.0 lib/test/phy/phy_dl_test -p 100 -s 20000 -m 27 -t 1 -q (Test: 4G PHY_DL_Test 100 PRB SISO 256-QAM)
pts/encode-flac-1.8.1 (WAV To FLAC)
pts/onednn-2.7.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=f32 --engine=cpu (Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU)
pts/onednn-2.7.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=u8s8f32 --engine=cpu (Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU)
pts/aom-av1-3.5.0 --cpu-used=8 --rt Bosphorus_3840x2160.y4m (Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K)
pts/build-python-1.0.0 (Build Configuration: Default)
pts/srsran-1.2.0 lib/test/phy/phy_dl_test -p 100 -s 20000 -m 28 -t 1 (Test: 4G PHY_DL_Test 100 PRB SISO 64-QAM)
pts/webp-1.2.0 -q 100 -lossless (Encode Settings: Quality 100, Lossless)
pts/blosc-1.2.0 blosclz bitshuffle (Test: blosclz bitshuffle)
pts/aom-av1-3.5.0 --cpu-used=9 --rt Bosphorus_3840x2160.y4m (Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K)
pts/aom-av1-3.5.0 --cpu-used=10 --rt Bosphorus_3840x2160.y4m (Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K)
pts/onednn-2.7.0 --ip --batch=inputs/ip/shapes_1d --cfg=f32 --engine=cpu (Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU)
pts/onednn-2.7.0 --ip --batch=inputs/ip/shapes_1d --cfg=u8s8f32 --engine=cpu (Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU)
pts/smhasher-1.1.0 --test=Speed FarmHash128 (Hash: FarmHash128)
pts/aom-av1-3.5.0 --cpu-used=6 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p)
pts/smhasher-1.1.0 --test=Speed MeowHash (Hash: MeowHash x86_64 AES-NI)
pts/avifenc-1.3.0 -s 6 -l (Encoder Speed: 6, Lossless)
pts/onednn-2.7.0 --matmul --batch=inputs/matmul/shapes_transformer --cfg=f32 --engine=cpu (Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU)
pts/onednn-2.7.0 --matmul --batch=inputs/matmul/shapes_transformer --cfg=u8s8f32 --engine=cpu (Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU)
pts/smhasher-1.1.0 --test=Speed Spooky32 (Hash: Spooky32)
pts/smhasher-1.1.0 --test=Speed FarmHash32 (Hash: FarmHash32 x86_64 AVX)
pts/smhasher-1.1.0 --test=Speed fasthash32 (Hash: fasthash32)
pts/onednn-2.7.0 --ip --batch=inputs/ip/shapes_3d --cfg=f32 --engine=cpu (Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU)
pts/onednn-2.7.0 --ip --batch=inputs/ip/shapes_3d --cfg=u8s8f32 --engine=cpu (Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU)
pts/smhasher-1.1.0 --test=Speed t1ha2_atonce (Hash: t1ha2_atonce)
pts/smhasher-1.1.0 --test=Speed t1ha0_aes_avx2 (Hash: t1ha0_aes_avx2 x86_64)
pts/aom-av1-3.5.0 --cpu-used=8 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p)
pts/avifenc-1.3.0 -s 6 (Encoder Speed: 6)
pts/unpack-linux-1.2.0 (linux-5.19.tar.xz)
pts/webp-1.2.0 -q 100 -m 6 (Encode Settings: Quality 100, Highest Compression)
pts/webp2-1.2.0 -q 100 -effort 5 (Encode Settings: Quality 100, Compression Effort 5)
pts/smhasher-1.1.0 --test=Speed wyhash (Hash: wyhash)
pts/aom-av1-3.5.0 --cpu-used=9 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p)
pts/avifenc-1.3.0 -s 10 -l (Encoder Speed: 10, Lossless)
pts/onednn-2.7.0 --conv --batch=inputs/conv/shapes_auto --cfg=u8s8f32 --engine=cpu (Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU)
pts/onednn-2.7.0 --conv --batch=inputs/conv/shapes_auto --cfg=f32 --engine=cpu (Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU)
pts/aom-av1-3.5.0 --cpu-used=10 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p)
pts/blosc-1.2.0 blosclz shuffle (Test: blosclz shuffle)
pts/webp2-1.2.0 (Encode Settings: Default)
pts/onednn-2.7.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=f32 --engine=cpu (Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU)
pts/onednn-2.7.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=u8s8f32 --engine=cpu (Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU)
pts/webp-1.2.0 -q 100 (Encode Settings: Quality 100)
pts/webp-1.2.0 (Encode Settings: Default)
pts/nginx-3.0.0 -c 4000 (Connections: 4000)
pts/nginx-3.0.0 -c 1 (Connections: 1)
pts/onednn-2.7.0 --matmul --batch=inputs/matmul/shapes_transformer --cfg=bf16bf16bf16 --engine=cpu (Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU)
pts/onednn-2.7.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=bf16bf16bf16 --engine=cpu (Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU)
pts/onednn-2.7.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=bf16bf16bf16 --engine=cpu (Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU)
pts/onednn-2.7.0 --conv --batch=inputs/conv/shapes_auto --cfg=bf16bf16bf16 --engine=cpu (Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU)
pts/onednn-2.7.0 --ip --batch=inputs/ip/shapes_3d --cfg=bf16bf16bf16 --engine=cpu (Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU)
pts/onednn-2.7.0 --ip --batch=inputs/ip/shapes_1d --cfg=bf16bf16bf16 --engine=cpu (Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU)
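
Any individual profile from the listing above can also be installed and run on its own rather than through the extracted suite. A minimal sketch in Python, again assuming the Phoronix Test Suite is installed and on the PATH; the two profile names are illustrative picks from the list (PTS resolves unversioned names to the current profile version):

    #!/usr/bin/env python3
    # Minimal sketch: run a couple of the test profiles from the suite listing
    # individually. The profile names below are illustrative choices; any pts/
    # profile from the listing can be substituted.
    import subprocess

    profiles = ["pts/ffmpeg", "pts/nginx"]  # example picks from the suite above

    for profile in profiles:
        # `phoronix-test-suite benchmark <profile>` installs the profile if needed
        # and then runs it, prompting for the options shown in the listing.
        subprocess.run(["phoronix-test-suite", "benchmark", profile], check=True)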