dddas

AMD Ryzen Threadripper 3970X 32-Core testing with an ASUS ROG ZENITH II EXTREME (1603 BIOS) and AMD Radeon RX 5700 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2306249-NE-DDDAS226146
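
A minimal sketch of that comparison workflow, assuming the Phoronix Test Suite is already installed (sub-command names beyond the one given above should be verified against your installed version):

    # Fetch result file 2306249-NE-DDDAS226146 from OpenBenchmarking.org, run the same
    # tests locally, and append your own system to the comparison (as stated above):
    phoronix-test-suite benchmark 2306249-NE-DDDAS226146

    # Afterwards, inspect or export the saved results locally:
    phoronix-test-suite list-saved-results
    phoronix-test-suite result-file-to-text <saved-result-name>   # <saved-result-name> is a placeholder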
This result file includes tests from the following categories:

AV1: 2 Tests
BLAS (Basic Linear Algebra Sub-Routine) Tests: 3 Tests
C/C++ Compiler Tests: 3 Tests
CPU Massive: 5 Tests
Creator Workloads: 9 Tests
Database Test Suite: 2 Tests
Encoding: 4 Tests
Fortran Tests: 5 Tests
HPC - High Performance Computing: 10 Tests
Common Kernel Benchmarks: 2 Tests
Machine Learning: 3 Tests
MPI Benchmarks: 4 Tests
Multi-Core: 8 Tests
Intel oneAPI: 4 Tests
OpenMPI Tests: 11 Tests
Python Tests: 6 Tests
Scientific Computing: 5 Tests
Server: 2 Tests
Server CPU Tests: 5 Tests
Video Encoding: 3 Tests

Run Management

Highlight
Result
Hide
Result
Result
Identifier
Performance Per
Dollar
Date
Run
  Test
  Duration
a
June 23 2023
  13 Hours, 58 Minutes
b
June 24 2023
  4 Hours, 14 Minutes
Invert Hiding All Results Option
  9 Hours, 6 Minutes
Only show results matching title/arguments (delimit multiple options with a comma):
Do not show results matching title/arguments (delimit multiple options with a comma):


dddas Suite 1.0.0 - System test suite extracted from dddas. The suite comprises the following test profiles and options:

pts/whisper-cpp-1.0.0 -m models/ggml-medium.en.bin -f ../2016-state-of-the-union.wav (Model: ggml-medium.en - Input: 2016 State of the Union)
pts/whisper-cpp-1.0.0 -m models/ggml-small.en.bin -f ../2016-state-of-the-union.wav (Model: ggml-small.en - Input: 2016 State of the Union)
pts/sqlite-2.2.0 64 (Threads / Copies: 64)
pts/sqlite-2.2.0 32 (Threads / Copies: 32)
pts/libxsmm-1.0.1 128 128 128 (M N K: 128)
pts/sqlite-2.2.0 16 (Threads / Copies: 16)
pts/sqlite-2.2.0 4 (Threads / Copies: 4)
pts/onednn-3.1.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=bf16bf16bf16 --engine=cpu (Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU)
pts/petsc-1.0.0 streams (Test: Streams)
pts/sqlite-2.2.0 8 (Threads / Copies: 8)
pts/sqlite-2.2.0 2 (Threads / Copies: 2)
pts/nekrs-1.1.0 kershaw kershaw.par (Input: Kershaw)
pts/nekrs-1.1.0 turbPipePeriodic turbPipe.par (Input: TurboPipe Periodic)
pts/hpcg-1.3.0 --nx=104 --ny=104 --nz=104 --rt=60 (X Y Z: 104 104 104 - RT: 60)
pts/qmcpack-1.6.0 tests/molecules/O_ae_pyscf_UHF vmc_long_noj.in.xml (Input: O_ae_pyscf_UHF)
pts/libxsmm-1.0.1 256 256 256 (M N K: 256)
pts/mocassin-1.1.0 dust/2D/tau100.0 (Input: Dust 2D tau100.0)
pts/qmcpack-1.6.0 tests/molecules/FeCO6_b3lyp_gms vmc_long_noj.in.xml (Input: FeCO6_b3lyp_gms)
pts/palabos-1.0.0 100 (Grid Size: 100)
pts/ospray-2.12.0 --benchmark_filter=particle_volume/scivis/real_time (Benchmark: particle_volume/scivis/real_time)
pts/whisper-cpp-1.0.0 -m models/ggml-base.en.bin -f ../2016-state-of-the-union.wav (Model: ggml-base.en - Input: 2016 State of the Union)
pts/ospray-2.12.0 --benchmark_filter=particle_volume/pathtracer/real_time (Benchmark: particle_volume/pathtracer/real_time)
pts/palabos-1.0.0 400 (Grid Size: 400)
pts/palabos-1.0.0 500 (Grid Size: 500)
pts/qmcpack-1.6.0 tests/molecules/Li2_STO_ae Li2.STO.long.in.xml (Input: Li2_STO_ae)
pts/leveldb-1.1.0 --benchmarks=fillseq --num=500000 (Benchmark: Sequential Fill)
pts/xonotic-1.7.0 +vid_width 3840 +vid_height 2160 +exec effects-ultimate.cfg (Resolution: 3840 x 2160 - Effects Quality: Ultimate)
pts/leveldb-1.1.0 --benchmarks=deleterandom --num=500000 (Benchmark: Random Delete)
pts/heffte-1.0.0 c2c fftw double-long 512 512 512 (Test: c2c - Backend: FFTW - Precision: double-long - X Y Z: 512)
pts/heffte-1.0.0 c2c stock double-long 512 512 512 (Test: c2c - Backend: Stock - Precision: double-long - X Y Z: 512)
pts/stress-ng-1.10.0 --sock -1 --no-rand-seed --sock-zerocopy (Test: Socket Activity)
pts/stress-ng-1.10.0 --pipe -1 --no-rand-seed (Test: Pipe)
pts/laghos-1.0.0 -p 1 -m data/cube_922_hex.mesh -rs 2 -tf 0.6 -no-vis -pa (Test: Sedov Blast Wave, cube_922_hex.mesh)
pts/gpaw-1.2.0 carbon-nanotube (Input: Carbon Nanotube)
pts/vvenc-1.8.0 -i Bosphorus_3840x2160.y4m --preset fast (Video Input: Bosphorus 4K - Video Preset: Fast)
pts/sqlite-2.2.0 1 (Threads / Copies: 1)
pts/ospray-2.12.0 --benchmark_filter=particle_volume/ao/real_time (Benchmark: particle_volume/ao/real_time)
pts/xonotic-1.7.0 +vid_width 2560 +vid_height 1440 +exec effects-ultimate.cfg (Resolution: 2560 x 1440 - Effects Quality: Ultimate)
pts/xonotic-1.7.0 +vid_width 1920 +vid_height 1200 +exec effects-ultimate.cfg (Resolution: 1920 x 1200 - Effects Quality: Ultimate)
pts/xonotic-1.7.0 +vid_width 1920 +vid_height 1080 +exec effects-ultimate.cfg (Resolution: 1920 x 1080 - Effects Quality: Ultimate)
pts/xonotic-1.7.0 +vid_width 3840 +vid_height 2160 +exec effects-ultra.cfg (Resolution: 3840 x 2160 - Effects Quality: Ultra)
pts/onednn-3.1.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=f32 --engine=cpu (Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU)
pts/onednn-3.1.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=bf16bf16bf16 --engine=cpu (Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU)
pts/onednn-3.1.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=u8s8f32 --engine=cpu (Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU)
pts/xonotic-1.7.0 +vid_width 3840 +vid_height 2160 +exec effects-high.cfg (Resolution: 3840 x 2160 - Effects Quality: High)
pts/onednn-3.1.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=u8s8f32 --engine=cpu (Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU)
pts/onednn-3.1.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=f32 --engine=cpu (Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU)
pts/xonotic-1.7.0 +vid_width 1920 +vid_height 1080 +exec effects-ultra.cfg (Resolution: 1920 x 1080 - Effects Quality: Ultra)
pts/xonotic-1.7.0 +vid_width 1920 +vid_height 1200 +exec effects-ultra.cfg (Resolution: 1920 x 1200 - Effects Quality: Ultra)
pts/xonotic-1.7.0 +vid_width 2560 +vid_height 1440 +exec effects-ultra.cfg (Resolution: 2560 x 1440 - Effects Quality: Ultra)
pts/z3-1.0.0 2.smt2 (SMT File: 2.smt2)
pts/ospray-2.12.0 --benchmark_filter=gravity_spheres_volume/dim_512/scivis/real_time (Benchmark: gravity_spheres_volume/dim_512/scivis/real_time)
pts/xonotic-1.7.0 +vid_width 1920 +vid_height 1080 +exec effects-high.cfg (Resolution: 1920 x 1080 - Effects Quality: High)
pts/xonotic-1.7.0 +vid_width 2560 +vid_height 1440 +exec effects-high.cfg (Resolution: 2560 x 1440 - Effects Quality: High)
pts/xonotic-1.7.0 +vid_width 1920 +vid_height 1200 +exec effects-high.cfg (Resolution: 1920 x 1200 - Effects Quality: High)
pts/ospray-2.12.0 --benchmark_filter=gravity_spheres_volume/dim_512/ao/real_time (Benchmark: gravity_spheres_volume/dim_512/ao/real_time)
pts/heffte-1.0.0 r2c fftw double-long 512 512 512 (Test: r2c - Backend: FFTW - Precision: double-long - X Y Z: 512)
pts/leveldb-1.1.0 --benchmarks=seekrandom --num=1000000 (Benchmark: Seek Random)
pts/ospray-2.12.0 --benchmark_filter=gravity_spheres_volume/dim_512/pathtracer/real_time (Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time)
pts/heffte-1.0.0 r2c stock double-long 512 512 512 (Test: r2c - Backend: Stock - Precision: double-long - X Y Z: 512)
pts/deepsparse-1.5.0 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario async (Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream)
pts/xonotic-1.7.0 +vid_width 3840 +vid_height 2160 +exec effects-low.cfg (Resolution: 3840 x 2160 - Effects Quality: Low)
pts/cp2k-1.4.1 -i benchmarks/Fayalite-FIST/fayalite.inp (Input: Fayalite-FIST)
pts/xonotic-1.7.0 +vid_width 1920 +vid_height 1080 +exec effects-low.cfg (Resolution: 1920 x 1080 - Effects Quality: Low)
pts/xonotic-1.7.0 +vid_width 1920 +vid_height 1200 +exec effects-low.cfg (Resolution: 1920 x 1200 - Effects Quality: Low)
pts/xonotic-1.7.0 +vid_width 2560 +vid_height 1440 +exec effects-low.cfg (Resolution: 2560 x 1440 - Effects Quality: Low)
pts/deepsparse-1.5.0 zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned90-none --scenario async (Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.0 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario async (Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.0 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario async (Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream)
pts/onednn-3.1.0 --ip --batch=inputs/ip/shapes_1d --cfg=u8s8f32 --engine=cpu (Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU)
pts/deepsparse-1.5.0 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario async (Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream)
pts/vvenc-1.8.0 -i Bosphorus_3840x2160.y4m --preset faster (Video Input: Bosphorus 4K - Video Preset: Faster)
pts/deepsparse-1.5.0 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario async (Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream)
pts/kripke-1.2.0
pts/deepsparse-1.5.0 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario sync (Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.0 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario sync (Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.0 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario sync (Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.0 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario sync (Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream)
pts/laghos-1.0.0 -p 3 -m data/box01_hex.mesh -rs 2 -tf 5.0 -vis -pa (Test: Triple Point Problem)
pts/deepsparse-1.5.0 zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned90-none --scenario sync (Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream)
pts/svt-av1-2.9.0 --preset 4 -n 160 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 (Encoder Mode: Preset 4 - Input: Bosphorus 4K)
pts/vvenc-1.8.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset fast (Video Input: Bosphorus 1080p - Video Preset: Fast)
pts/leveldb-1.1.0 --benchmarks=readrandom --num=1000000 (Benchmark: Random Read)
pts/leveldb-1.1.0 --benchmarks=readhot --num=1000000 (Benchmark: Hot Read)
pts/deepsparse-1.5.0 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario sync (Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream)
pts/encode-opus-1.4.0 (WAV To Opus Encode)
pts/onednn-3.1.0 --ip --batch=inputs/ip/shapes_1d --cfg=f32 --engine=cpu (Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU)
pts/deepsparse-1.5.0 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario async (Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.0 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario sync (Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream)
pts/espeak-1.7.0 (Text-To-Speech Synthesis)
pts/deepsparse-1.5.0 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario async (Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.0 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario sync (Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.0 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async (Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream)
pts/stress-ng-1.10.0 --futex -1 --no-rand-seed (Test: Futex)
pts/deepsparse-1.5.0 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync (Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream)
pts/libxsmm-1.0.1 64 64 64 (M N K: 64)
pts/libxsmm-1.0.1 32 32 32 (M N K: 32)
pts/oidn-2.0.0 -r RTLightmap.hdr.4096x4096 -d cpu (Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only)
pts/stress-ng-1.10.0 --io-uring -1 --no-rand-seed (Test: IO_uring)
pts/stress-ng-1.10.0 --mmap -1 --no-rand-seed (Test: MMAP)
pts/stress-ng-1.10.0 --malloc -1 --no-rand-seed (Test: Malloc)
pts/stress-ng-1.10.0 --clone -1 --no-rand-seed (Test: Cloning)
pts/stress-ng-1.10.0 --memfd -1 --no-rand-seed (Test: MEMFD)
pts/stress-ng-1.10.0 --atomic -1 --no-rand-seed (Test: Atomic)
pts/stress-ng-1.10.0 --cache -1 --no-rand-seed (Test: CPU Cache)
pts/liquid-dsp-1.6.0 -n 64 -b 256 -f 512 (Threads: 64 - Buffer Length: 256 - Filter Length: 512)
pts/liquid-dsp-1.6.0 -n 8 -b 256 -f 512 (Threads: 8 - Buffer Length: 256 - Filter Length: 512)
pts/stress-ng-1.10.0 --zlib -1 --no-rand-seed (Test: Zlib)
pts/liquid-dsp-1.6.0 -n 32 -b 256 -f 512 (Threads: 32 - Buffer Length: 256 - Filter Length: 512)
pts/stress-ng-1.10.0 --pthread -1 --no-rand-seed (Test: Pthread)
pts/liquid-dsp-1.6.0 -n 8 -b 256 -f 32 (Threads: 8 - Buffer Length: 256 - Filter Length: 32)
pts/liquid-dsp-1.6.0 -n 8 -b 256 -f 57 (Threads: 8 - Buffer Length: 256 - Filter Length: 57)
pts/stress-ng-1.10.0 --memcpy -1 --no-rand-seed (Test: Memory Copying)
pts/stress-ng-1.10.0 --numa -1 --no-rand-seed (Test: NUMA)
pts/liquid-dsp-1.6.0 -n 16 -b 256 -f 512 (Threads: 16 - Buffer Length: 256 - Filter Length: 512)
pts/stress-ng-1.10.0 --matrix-3d -1 --no-rand-seed (Test: Matrix 3D Math)
pts/stress-ng-1.10.0 --vecshuf -1 --no-rand-seed (Test: Vector Shuffle)
pts/stress-ng-1.10.0 --funccall -1 --no-rand-seed (Test: Function Call)
pts/stress-ng-1.10.0 --sem -1 --no-rand-seed (Test: Semaphores)
pts/stress-ng-1.10.0 --vecwide -1 --no-rand-seed (Test: Wide Vector Math)
pts/stress-ng-1.10.0 --vecfp -1 --no-rand-seed (Test: Vector Floating Point)
pts/stress-ng-1.10.0 --str -1 --no-rand-seed (Test: Glibc C String Functions)
pts/liquid-dsp-1.6.0 -n 64 -b 256 -f 57 (Threads: 64 - Buffer Length: 256 - Filter Length: 57)
pts/stress-ng-1.10.0 --msg -1 --no-rand-seed (Test: System V Message Passing)
pts/stress-ng-1.10.0 --fp -1 --no-rand-seed (Test: Floating Point)
pts/liquid-dsp-1.6.0 -n 4 -b 256 -f 512 (Threads: 4 - Buffer Length: 256 - Filter Length: 512)
pts/stress-ng-1.10.0 --poll -1 --no-rand-seed (Test: Poll)
pts/liquid-dsp-1.6.0 -n 64 -b 256 -f 32 (Threads: 64 - Buffer Length: 256 - Filter Length: 32)
pts/stress-ng-1.10.0 --mutex -1 --no-rand-seed (Test: Mutex)
pts/stress-ng-1.10.0 --tree -1 --tree-method avl --no-rand-seed (Test: AVL Tree)
pts/stress-ng-1.10.0 --crypt -1 --no-rand-seed (Test: Crypto)
pts/liquid-dsp-1.6.0 -n 32 -b 256 -f 57 (Threads: 32 - Buffer Length: 256 - Filter Length: 57)
pts/stress-ng-1.10.0 --switch -1 --no-rand-seed (Test: Context Switching)
pts/stress-ng-1.10.0 --fork -1 --no-rand-seed (Test: Forking)
pts/stress-ng-1.10.0 --vecmath -1 --no-rand-seed (Test: Vector Math)
pts/stress-ng-1.10.0 --matrix -1 --no-rand-seed (Test: Matrix Math)
pts/stress-ng-1.10.0 --hash -1 --no-rand-seed (Test: Hash)
pts/stress-ng-1.10.0 --qsort -1 --no-rand-seed (Test: Glibc Qsort Data Sorting)
pts/stress-ng-1.10.0 --cpu -1 --cpu-method all --no-rand-seed (Test: CPU Stress)
pts/stress-ng-1.10.0 --sendfile -1 --no-rand-seed (Test: SENDFILE)
pts/stress-ng-1.10.0 --fma -1 --no-rand-seed (Test: Fused Multiply-Add)
pts/liquid-dsp-1.6.0 -n 32 -b 256 -f 32 (Threads: 32 - Buffer Length: 256 - Filter Length: 32)
pts/liquid-dsp-1.6.0 -n 2 -b 256 -f 512 (Threads: 2 - Buffer Length: 256 - Filter Length: 512)
pts/liquid-dsp-1.6.0 -n 16 -b 256 -f 57 (Threads: 16 - Buffer Length: 256 - Filter Length: 57)
pts/liquid-dsp-1.6.0 -n 16 -b 256 -f 32 (Threads: 16 - Buffer Length: 256 - Filter Length: 32)
pts/liquid-dsp-1.6.0 -n 1 -b 256 -f 512 (Threads: 1 - Buffer Length: 256 - Filter Length: 512)
pts/liquid-dsp-1.6.0 -n 1 -b 256 -f 32 (Threads: 1 - Buffer Length: 256 - Filter Length: 32)
pts/liquid-dsp-1.6.0 -n 2 -b 256 -f 32 (Threads: 2 - Buffer Length: 256 - Filter Length: 32)
pts/liquid-dsp-1.6.0 -n 4 -b 256 -f 57 (Threads: 4 - Buffer Length: 256 - Filter Length: 57)
pts/liquid-dsp-1.6.0 -n 4 -b 256 -f 32 (Threads: 4 - Buffer Length: 256 - Filter Length: 32)
pts/liquid-dsp-1.6.0 -n 2 -b 256 -f 57 (Threads: 2 - Buffer Length: 256 - Filter Length: 57)
pts/liquid-dsp-1.6.0 -n 1 -b 256 -f 57 (Threads: 1 - Buffer Length: 256 - Filter Length: 57)
pts/z3-1.0.0 1.smt2 (SMT File: 1.smt2)
pts/embree-1.5.0 pathtracer_ispc -c asian_dragon_obj/asian_dragon.ecs (Binary: Pathtracer ISPC - Model: Asian Dragon Obj)
pts/qmcpack-1.6.0 build/examples/molecules/H2O/example_H2O-1-1 simple-H2O.xml (Input: simple-H2O)
pts/embree-1.5.0 pathtracer -c asian_dragon_obj/asian_dragon.ecs (Binary: Pathtracer - Model: Asian Dragon Obj)
pts/leveldb-1.1.0 --benchmarks=fillrandom --num=100000 (Benchmark: Random Fill)
pts/leveldb-1.1.0 --benchmarks=overwrite --num=100000 (Benchmark: Overwrite)
pts/vvenc-1.8.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset faster (Video Input: Bosphorus 1080p - Video Preset: Faster)
pts/dav1d-1.14.0 -i chimera_10b_1080p.ivf (Video Input: Chimera 1080p 10-bit)
pts/remhos-1.0.0 -m ./data/inline-quad.mesh -p 14 -rs 2 -rp 1 -dt 0.0005 -tf 0.6 -ho 1 -lo 2 -fct 3 (Test: Sample Remap Example)
pts/dav1d-1.14.0 -i chimera_8b_1080p.ivf (Video Input: Chimera 1080p)
pts/cp2k-1.4.1 -i benchmarks/QS/H2O-64.inp (Input: H2O-64)
pts/onednn-3.1.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=f32 --engine=cpu (Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU)
pts/onednn-3.1.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=u8s8f32 --engine=cpu (Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU)
pts/embree-1.5.0 pathtracer_ispc -c crown/crown.ecs (Binary: Pathtracer ISPC - Model: Crown)
pts/embree-1.5.0 pathtracer -c crown/crown.ecs (Binary: Pathtracer - Model: Crown)
pts/oidn-2.0.0 -r RT.hdr_alb_nrm.3840x2160 -d cpu (Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only)
pts/embree-1.5.0 pathtracer_ispc -c asian_dragon/asian_dragon.ecs (Binary: Pathtracer ISPC - Model: Asian Dragon)
pts/oidn-2.0.0 -r RT.ldr_alb_nrm.3840x2160 -d cpu (Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only)
pts/dav1d-1.14.0 -i summer_nature_4k.ivf (Video Input: Summer Nature 4K)
pts/svt-av1-2.9.0 --preset 4 -n 160 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 (Encoder Mode: Preset 4 - Input: Bosphorus 1080p)
pts/embree-1.5.0 pathtracer -c asian_dragon/asian_dragon.ecs (Binary: Pathtracer - Model: Asian Dragon)
pts/heffte-1.0.0 c2c fftw double-long 256 256 256 (Test: c2c - Backend: FFTW - Precision: double-long - X Y Z: 256)
pts/heffte-1.0.0 c2c stock double-long 256 256 256 (Test: c2c - Backend: Stock - Precision: double-long - X Y Z: 256)
pts/hpcg-1.3.0 --nx=144 --ny=144 --nz=144 --rt=60 (X Y Z: 144 144 144 - RT: 60)
pts/mocassin-1.1.0 gas/HII40 (Input: Gas HII40)
pts/svt-av1-2.9.0 --preset 8 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 (Encoder Mode: Preset 8 - Input: Bosphorus 4K)
pts/leveldb-1.1.0 --benchmarks=fillsync --num=1000000 (Benchmark: Fill Sync)
pts/cp2k-1.4.1 -i benchmarks/QS_DM_LS/H2O-dft-ls.inp (Input: H2O-DFT-LS)
pts/palabos-1.0.0 1000 (Grid Size: 1000)
pts/onednn-3.1.0 --ip --batch=inputs/ip/shapes_3d --cfg=f32 --engine=cpu (Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU)
pts/onednn-3.1.0 --ip --batch=inputs/ip/shapes_3d --cfg=u8s8f32 --engine=cpu (Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU)
pts/svt-av1-2.9.0 --preset 8 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 (Encoder Mode: Preset 8 - Input: Bosphorus 1080p)
pts/heffte-1.0.0 r2c fftw double-long 256 256 256 (Test: r2c - Backend: FFTW - Precision: double-long - X Y Z: 256)
pts/svt-av1-2.9.0 --preset 12 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 (Encoder Mode: Preset 12 - Input: Bosphorus 4K)
pts/heffte-1.0.0 r2c stock double-long 256 256 256 (Test: r2c - Backend: Stock - Precision: double-long - X Y Z: 256)
pts/svt-av1-2.9.0 --preset 13 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 (Encoder Mode: Preset 13 - Input: Bosphorus 4K)
pts/onednn-3.1.0 --conv --batch=inputs/conv/shapes_auto --cfg=u8s8f32 --engine=cpu (Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU)
pts/onednn-3.1.0 --conv --batch=inputs/conv/shapes_auto --cfg=f32 --engine=cpu (Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU)
pts/dav1d-1.14.0 -i summer_nature_1080p.ivf (Video Input: Summer Nature 1080p)
pts/heffte-1.0.0 r2c fftw double-long 128 128 128 (Test: r2c - Backend: FFTW - Precision: double-long - X Y Z: 128)
pts/onednn-3.1.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=f32 --engine=cpu (Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU)
pts/onednn-3.1.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=u8s8f32 --engine=cpu (Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU)
pts/svt-av1-2.9.0 --preset 12 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 (Encoder Mode: Preset 12 - Input: Bosphorus 1080p)
pts/svt-av1-2.9.0 --preset 13 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 (Encoder Mode: Preset 13 - Input: Bosphorus 1080p)
pts/palabos-1.0.0 4000 (Grid Size: 4000)
pts/heffte-1.0.0 c2c stock double-long 128 128 128 (Test: c2c - Backend: Stock - Precision: double-long - X Y Z: 128)
pts/heffte-1.0.0 c2c fftw double-long 128 128 128 (Test: c2c - Backend: FFTW - Precision: double-long - X Y Z: 128)
pts/heffte-1.0.0 r2c stock double-long 128 128 128 (Test: r2c - Backend: Stock - Precision: double-long - X Y Z: 128)
pts/onednn-3.1.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=bf16bf16bf16 --engine=cpu (Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU)
pts/stress-ng-1.10.0 --rdrand -1 --no-rand-seed (Test: x86_64 RdRand)
pts/onednn-3.1.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=bf16bf16bf16 --engine=cpu (Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU)
pts/onednn-3.1.0 --conv --batch=inputs/conv/shapes_auto --cfg=bf16bf16bf16 --engine=cpu (Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU)
pts/onednn-3.1.0 --ip --batch=inputs/ip/shapes_3d --cfg=bf16bf16bf16 --engine=cpu (Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU)
pts/onednn-3.1.0 --ip --batch=inputs/ip/shapes_1d --cfg=bf16bf16bf16 --engine=cpu (Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU)
pts/oidn-2.0.0 -r RTLightmap.hdr.4096x4096 -d sycl (Run: RTLightmap.hdr.4096x4096 - Device: Intel oneAPI SYCL)
pts/oidn-2.0.0 -r RT.ldr_alb_nrm.3840x2160 -d sycl (Run: RT.ldr_alb_nrm.3840x2160 - Device: Intel oneAPI SYCL)
pts/oidn-2.0.0 -r RT.hdr_alb_nrm.3840x2160 -d sycl (Run: RT.hdr_alb_nrm.3840x2160 - Device: Intel oneAPI SYCL)
pts/oidn-2.0.0 -r RTLightmap.hdr.4096x4096 -d hip (Run: RTLightmap.hdr.4096x4096 - Device: Radeon HIP)
pts/oidn-2.0.0 -r RT.ldr_alb_nrm.3840x2160 -d hip (Run: RT.ldr_alb_nrm.3840x2160 - Device: Radeon HIP)
pts/oidn-2.0.0 -r RT.hdr_alb_nrm.3840x2160 -d hip (Run: RT.hdr_alb_nrm.3840x2160 - Device: Radeon HIP)
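
Any profile listed above can also be installed and benchmarked on its own rather than as part of the full suite. A minimal sketch, assuming a working Phoronix Test Suite installation and using pts/sqlite-2.2.0 from the list above as the example:

    # Install a single test profile and its dependencies:
    phoronix-test-suite install pts/sqlite-2.2.0

    # Run it; PTS prompts interactively for the test options (e.g. Threads / Copies) shown above:
    phoronix-test-suite benchmark pts/sqlite-2.2.0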