extra tests

benchmarks for a future article.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2309060-NE-EXTRATEST87
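For example, a local comparison could look like the following (a minimal sketch, assuming the Phoronix Test Suite is already installed; the suite prompts interactively for test installation and a result identifier, and the exact prompts depend on your PTS version):

    # Fetch this result file and run the same test selection locally,
    # appending your system's numbers for a side-by-side comparison
    phoronix-test-suite benchmark 2309060-NE-EXTRATEST87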

Test categories covered by this result file:

AV1: 2 Tests
C/C++ Compiler Tests: 2 Tests
CPU Massive: 5 Tests
Creator Workloads: 8 Tests
Database Test Suite: 2 Tests
Encoding: 3 Tests
Game Development: 2 Tests
HPC - High Performance Computing: 4 Tests
Machine Learning: 2 Tests
Multi-Core: 9 Tests
NVIDIA GPU Compute: 2 Tests
Intel oneAPI: 3 Tests
OpenMPI Tests: 5 Tests
Renderers: 2 Tests
Server: 2 Tests
Server CPU Tests: 4 Tests
Video Encoding: 3 Tests
Common Workstation Benchmarks: 2 Tests

Run Management

Result Identifier | Date Run | Test Duration
d | August 25 2023 | 3 Hours, 8 Minutes
g | August 30 2023 | 4 Hours, 25 Minutes
h | August 30 2023 | 4 Hours, 23 Minutes
2 x AMD EPYC 9334 32-Core | September 06 2023 | 3 Hours, 9 Minutes
9334 2p | September 06 2023 | 3 Hours, 11 Minutes
93334 rep | September 06 2023 | 3 Hours, 13 Minutes
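If you save several local result files of your own, they can be combined into a single multi-run comparison like the one above (a sketch using the merge-results sub-command; the result-file names here are hypothetical placeholders):

    # Merge two previously saved local result files into a new comparison
    phoronix-test-suite merge-results my-first-run my-second-run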

extra tests Suite 1.0.0
System Test suite extracted from extra tests.

pts/stress-ng-1.11.0 --sem -1 --no-rand-seed Test: Semaphores
pts/stress-ng-1.11.0 --pipe -1 --no-rand-seed Test: Pipe
pts/deepsparse-1.5.2 zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/pruned97_quant-none --input_shapes='[1,128]' --scenario async Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.5.2 zoo:nlp/sentiment_analysis/oberta-base/pytorch/huggingface/sst2/pruned90_quant-none --input_shapes='[1,128]' --scenario async Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.5.2 zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/base-none --input_shapes='[1,128]' --scenario async Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.5.2 zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned90-none --scenario async Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.5.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned85-none --scenario async Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.5.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario async Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.5.2 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario async Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.5.2 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --input_shapes='[1,128]' --scenario async Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.5.2 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario async Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.5.2 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario async Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
pts/stress-ng-1.11.0 --rdrand -1 --no-rand-seed Test: x86_64 RdRand
pts/stress-ng-1.11.0 --hash -1 --no-rand-seed Test: Hash
pts/stress-ng-1.11.0 --switch -1 --no-rand-seed Test: Context Switching
pts/stress-ng-1.11.0 --vecshuf -1 --no-rand-seed Test: Vector Shuffle
pts/deepsparse-1.5.2 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario async Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
pts/ncnn-1.5.0 -1 Target: CPU - Model: regnety_400m
pts/stress-ng-1.11.0 --fp -1 --no-rand-seed Test: Floating Point
pts/stress-ng-1.11.0 --cpu -1 --cpu-method all --no-rand-seed Test: CPU Stress
pts/stress-ng-1.11.0 --funccall -1 --no-rand-seed Test: Function Call
pts/stress-ng-1.11.0 --matrix -1 --no-rand-seed Test: Matrix Math
pts/stress-ng-1.11.0 --vecfp -1 --no-rand-seed Test: Vector Floating Point
pts/stress-ng-1.11.0 --fma -1 --no-rand-seed Test: Fused Multiply-Add
pts/stress-ng-1.11.0 --zlib -1 --no-rand-seed Test: Zlib
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_uniform_quant-none --scenario async Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/blender-3.6.0 -b ../classroom_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU Blend File: Classroom - Compute: CPU-Only
pts/stress-ng-1.11.0 --vecmath -1 --no-rand-seed Test: Vector Math
pts/blender-3.6.0 -b ../pavillon_barcelone_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU Blend File: Pabellon Barcelona - Compute: CPU-Only
pts/stress-ng-1.11.0 --vnni -1 Test: AVX-512 VNNI
pts/stress-ng-1.11.0 --memcpy -1 --no-rand-seed Test: Memory Copying
pts/stress-ng-1.11.0 --vecwide -1 --no-rand-seed Test: Wide Vector Math
pts/stress-ng-1.11.0 --qsort -1 --no-rand-seed Test: Glibc Qsort Data Sorting
pts/stress-ng-1.11.0 --malloc -1 --no-rand-seed Test: Malloc
pts/stress-ng-1.11.0 --str -1 --no-rand-seed Test: Glibc C String Functions
pts/blender-3.6.0 -b ../barbershop_interior_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU Blend File: Barbershop - Compute: CPU-Only
pts/blender-3.6.0 -b ../bmw27_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU Blend File: BMW27 - Compute: CPU-Only
pts/ospray-2.12.0 --benchmark_filter=gravity_spheres_volume/dim_512/scivis/real_time Benchmark: gravity_spheres_volume/dim_512/scivis/real_time
pts/embree-1.5.0 pathtracer -c asian_dragon/asian_dragon.ecs Binary: Pathtracer - Model: Asian Dragon
pts/embree-1.5.0 pathtracer -c asian_dragon_obj/asian_dragon.ecs Binary: Pathtracer - Model: Asian Dragon Obj
pts/ospray-2.12.0 --benchmark_filter=gravity_spheres_volume/dim_512/ao/real_time Benchmark: gravity_spheres_volume/dim_512/ao/real_time
pts/deepsparse-1.5.2 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario async Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
pts/embree-1.5.0 pathtracer -c crown/crown.ecs Binary: Pathtracer - Model: Crown
pts/embree-1.5.0 pathtracer_ispc -c asian_dragon/asian_dragon.ecs Binary: Pathtracer ISPC - Model: Asian Dragon
pts/embree-1.5.0 pathtracer_ispc -c crown/crown.ecs Binary: Pathtracer ISPC - Model: Crown
pts/stress-ng-1.11.0 --poll -1 --no-rand-seed Test: Poll
pts/blender-3.6.0 -b ../fishy_cat_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU Blend File: Fishy Cat - Compute: CPU-Only
pts/ncnn-1.5.0 -1 Target: CPU - Model: blazeface
pts/embree-1.5.0 pathtracer_ispc -c asian_dragon_obj/asian_dragon.ecs Binary: Pathtracer ISPC - Model: Asian Dragon Obj
pts/stress-ng-1.11.0 --sendfile -1 --no-rand-seed Test: SENDFILE
pts/ospray-2.12.0 --benchmark_filter=particle_volume/ao/real_time Benchmark: particle_volume/ao/real_time
pts/ospray-2.12.0 --benchmark_filter=particle_volume/scivis/real_time Benchmark: particle_volume/scivis/real_time
pts/stress-ng-1.11.0 --crypt -1 --no-rand-seed Test: Crypto
pts/ncnn-1.5.0 -1 Target: CPU - Model: shufflenet-v2
pts/stress-ng-1.11.0 --fork -1 --no-rand-seed Test: Forking
pts/stress-ng-1.11.0 --tree -1 --tree-method avl --no-rand-seed Test: AVL Tree
pts/stress-ng-1.11.0 --cache -1 --no-rand-seed Test: CPU Cache
pts/stress-ng-1.11.0 --mmap -1 --no-rand-seed Test: MMAP
pts/ncnn-1.5.0 -1 Target: CPU - Model: efficientnet-b0
pts/ncnn-1.5.0 -1 Target: CPU-v3-v3 - Model: mobilenet-v3
pts/stress-ng-1.11.0 --numa -1 --no-rand-seed Test: NUMA
pts/ncnn-1.5.0 -1 Target: CPU - Model: mnasnet
pts/ncnn-1.5.0 -1 Target: CPU-v2-v2 - Model: mobilenet-v2
pts/ncnn-1.5.0 -1 Target: CPU - Model: FastestDet
pts/deepsparse-1.5.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario sync Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream
pts/deepsparse-1.5.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned85-none --scenario sync Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream
pts/oidn-2.0.0 -r RT.hdr_alb_nrm.3840x2160 -d cpu Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only
pts/oidn-2.0.0 -r RT.ldr_alb_nrm.3840x2160 -d cpu Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only
pts/ncnn-1.5.0 -1 Target: CPU - Model: squeezenet_ssd
pts/ospray-2.12.0 --benchmark_filter=gravity_spheres_volume/dim_512/pathtracer/real_time Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time
pts/oidn-2.0.0 -r RTLightmap.hdr.4096x4096 -d cpu Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only
pts/deepsparse-1.5.2 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario sync Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
pts/deepsparse-1.5.2 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario sync Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
pts/dragonflydb-1.1.0 -c 20 --ratio=1:100 Clients Per Thread: 20 - Set To Get Ratio: 1:100
pts/dragonflydb-1.1.0 -c 10 --ratio=1:100 Clients Per Thread: 10 - Set To Get Ratio: 1:100
pts/ncnn-1.5.0 -1 Target: CPU - Model: resnet18
pts/ncnn-1.5.0 -1 Target: CPU - Model: alexnet
pts/deepsparse-1.5.2 zoo:nlp/sentiment_analysis/oberta-base/pytorch/huggingface/sst2/pruned90_quant-none --input_shapes='[1,128]' --scenario sync Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream
pts/dragonflydb-1.1.0 -c 10 --ratio=1:10 Clients Per Thread: 10 - Set To Get Ratio: 1:10
pts/deepsparse-1.5.2 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario sync Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
pts/ncnn-1.5.0 -1 Target: CPU - Model: vgg16
pts/ncnn-1.5.0 -1 Target: CPU - Model: resnet50
pts/dragonflydb-1.1.0 -c 20 --ratio=1:10 Clients Per Thread: 20 - Set To Get Ratio: 1:10
pts/ncnn-1.5.0 -1 Target: CPU - Model: mobilenet
pts/ncnn-1.5.0 -1 Target: CPU - Model: vision_transformer
pts/stress-ng-1.11.0 --sock -1 --no-rand-seed --sock-zerocopy Test: Socket Activity
pts/brl-cad-1.5.0 VGR Performance Metric
pts/stress-ng-1.11.0 --memfd -1 --no-rand-seed Test: MEMFD
pts/ncnn-1.5.0 -1 Target: CPU - Model: googlenet
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_uniform_quant-none --scenario sync Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream
pts/deepsparse-1.5.2 zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/pruned97_quant-none --input_shapes='[1,128]' --scenario sync Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream
pts/specfem3d-1.0.0 layered_halfspace Model: Layered Halfspace
pts/specfem3d-1.0.0 Mount_StHelens Model: Mount St. Helens
pts/specfem3d-1.0.0 waterlayered_halfspace Model: Water-layered Halfspace
pts/deepsparse-1.5.2 zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned90-none --scenario sync Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream
pts/stress-ng-1.11.0 --futex -1 --no-rand-seed Test: Futex
pts/deepsparse-1.5.2 zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/base-none --input_shapes='[1,128]' --scenario sync Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream
pts/specfem3d-1.0.0 homogeneous_halfspace Model: Homogeneous Halfspace
pts/svt-av1-2.10.0 --preset 13 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 13 - Input: Bosphorus 4K
pts/specfem3d-1.0.0 tomographic_model Model: Tomographic Model
pts/deepsparse-1.5.2 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario sync Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream
pts/remhos-1.0.0 -m ./data/inline-quad.mesh -p 14 -rs 2 -rp 1 -dt 0.0005 -tf 0.6 -ho 1 -lo 2 -fct 3 Test: Sample Remap Example
pts/build-linux-kernel-1.15.0 defconfig Build: defconfig
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
pts/liquid-dsp-1.6.0 -n 64 -b 256 -f 512 Threads: 64 - Buffer Length: 256 - Filter Length: 512
pts/laghos-1.0.0 -p 1 -m data/cube_922_hex.mesh -rs 2 -tf 0.6 -no-vis -pa Test: Sedov Blast Wave, ube_922_hex.mesh
pts/stress-ng-1.11.0 --matrix-3d -1 --no-rand-seed Test: Matrix 3D Math
pts/cassandra-1.2.0 WRITE Test: Writes
pts/svt-av1-2.10.0 --preset 13 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 13 - Input: Bosphorus 1080p
pts/nekrs-1.1.0 kershaw kershaw.par Input: Kershaw
pts/ncnn-1.5.0 -1 Target: CPU - Model: yolov4-tiny
pts/deepsparse-1.5.2 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --input_shapes='[1,128]' --scenario sync Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
pts/nekrs-1.1.0 turbPipePeriodic turbPipe.par Input: TurboPipe Periodic
pts/stress-ng-1.11.0 --msg -1 --no-rand-seed Test: System V Message Passing
pts/deepsparse-1.5.2 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario sync Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
pts/liquid-dsp-1.6.0 -n 2 -b 256 -f 32 Threads: 2 - Buffer Length: 256 - Filter Length: 32
pts/ospray-2.12.0 --benchmark_filter=particle_volume/pathtracer/real_time Benchmark: particle_volume/pathtracer/real_time
pts/svt-av1-2.9.0 --preset 4 -n 160 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 4 - Input: Bosphorus 1080p
pts/liquid-dsp-1.6.0 -n 2 -b 256 -f 57 Threads: 2 - Buffer Length: 256 - Filter Length: 57
pts/stress-ng-1.11.0 --schedmix -1 Test: Mixed Scheduler
pts/stress-ng-1.11.0 --pthread -1 --no-rand-seed Test: Pthread
pts/liquid-dsp-1.6.0 -n 4 -b 256 -f 512 Threads: 4 - Buffer Length: 256 - Filter Length: 512
pts/liquid-dsp-1.6.0 -n 1 -b 256 -f 32 Threads: 1 - Buffer Length: 256 - Filter Length: 32
pts/liquid-dsp-1.6.0 -n 8 -b 256 -f 32 Threads: 8 - Buffer Length: 256 - Filter Length: 32
pts/liquid-dsp-1.6.0 -n 32 -b 256 -f 32 Threads: 32 - Buffer Length: 256 - Filter Length: 32
pts/liquid-dsp-1.6.0 -n 8 -b 256 -f 512 Threads: 8 - Buffer Length: 256 - Filter Length: 512
pts/liquid-dsp-1.6.0 -n 16 -b 256 -f 32 Threads: 16 - Buffer Length: 256 - Filter Length: 32
pts/liquid-dsp-1.6.0 -n 16 -b 256 -f 512 Threads: 16 - Buffer Length: 256 - Filter Length: 512
pts/liquid-dsp-1.6.0 -n 4 -b 256 -f 32 Threads: 4 - Buffer Length: 256 - Filter Length: 32
pts/liquid-dsp-1.6.0 -n 1 -b 256 -f 57 Threads: 1 - Buffer Length: 256 - Filter Length: 57
pts/liquid-dsp-1.6.0 -n 1 -b 256 -f 512 Threads: 1 - Buffer Length: 256 - Filter Length: 512
pts/liquid-dsp-1.6.0 -n 64 -b 256 -f 57 Threads: 64 - Buffer Length: 256 - Filter Length: 57
pts/liquid-dsp-1.6.0 -n 2 -b 256 -f 512 Threads: 2 - Buffer Length: 256 - Filter Length: 512
pts/liquid-dsp-1.6.0 -n 32 -b 256 -f 512 Threads: 32 - Buffer Length: 256 - Filter Length: 512
pts/stress-ng-1.11.0 --mutex -1 --no-rand-seed Test: Mutex
pts/liquid-dsp-1.6.0 -n 64 -b 256 -f 32 Threads: 64 - Buffer Length: 256 - Filter Length: 32
pts/svt-av1-2.10.0 --preset 4 -n 160 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 4 - Input: Bosphorus 1080p
pts/stress-ng-1.11.0 --atomic -1 --no-rand-seed Test: Atomic
pts/liquid-dsp-1.6.0 -n 8 -b 256 -f 57 Threads: 8 - Buffer Length: 256 - Filter Length: 57
pts/liquid-dsp-1.6.0 -n 16 -b 256 -f 57 Threads: 16 - Buffer Length: 256 - Filter Length: 57
pts/svt-av1-2.9.0 --preset 12 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 12 - Input: Bosphorus 4K
pts/liquid-dsp-1.6.0 -n 4 -b 256 -f 57 Threads: 4 - Buffer Length: 256 - Filter Length: 57
pts/liquid-dsp-1.6.0 -n 32 -b 256 -f 57 Threads: 32 - Buffer Length: 256 - Filter Length: 57
pts/svt-av1-2.9.0 --preset 4 -n 160 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 4 - Input: Bosphorus 4K
pts/svt-av1-2.9.0 --preset 13 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 13 - Input: Bosphorus 4K
pts/svt-av1-2.10.0 --preset 4 -n 160 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 4 - Input: Bosphorus 4K
pts/svt-av1-2.10.0 --preset 12 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 12 - Input: Bosphorus 4K
pts/svt-av1-2.9.0 --preset 12 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 12 - Input: Bosphorus 1080p
pts/svt-av1-2.10.0 --preset 12 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 12 - Input: Bosphorus 1080p
pts/svt-av1-2.9.0 --preset 13 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 13 - Input: Bosphorus 1080p
pts/svt-av1-2.10.0 --preset 8 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 8 - Input: Bosphorus 1080p
pts/svt-av1-2.9.0 --preset 8 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 8 - Input: Bosphorus 1080p
pts/vvenc-1.9.1 -i Bosphorus_3840x2160.y4m --preset fast Video Input: Bosphorus 4K - Video Preset: Fast
pts/vvenc-1.9.1 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset fast Video Input: Bosphorus 1080p - Video Preset: Fast
pts/stress-ng-1.11.0 --clone -1 --no-rand-seed Test: Cloning
pts/vvenc-1.9.1 -i Bosphorus_3840x2160.y4m --preset faster Video Input: Bosphorus 4K - Video Preset: Faster
pts/vvenc-1.9.1 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset faster Video Input: Bosphorus 1080p - Video Preset: Faster
pts/svt-av1-2.10.0 --preset 8 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 8 - Input: Bosphorus 4K
pts/svt-av1-2.9.0 --preset 8 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 8 - Input: Bosphorus 4K
pts/kripke-1.2.0
pts/laghos-1.0.0 -p 3 -m data/box01_hex.mesh -rs 2 -tf 5.0 -vis -pa Test: Triple Point Problem
pts/dragonflydb-1.1.0 -c 50 --ratio=1:100 Clients Per Thread: 50 - Set To Get Ratio: 1:100
pts/dragonflydb-1.1.0 -c 50 --ratio=1:10 Clients Per Thread: 50 - Set To Get Ratio: 1:10
pts/aom-av1-3.7.0 --cpu-used=0 --limit=20 Bosphorus_3840x2160.y4m Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K
pts/aom-av1-3.7.0 --cpu-used=0 --limit=20 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p
pts/aom-av1-3.7.0 --cpu-used=4 Bosphorus_3840x2160.y4m Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K
pts/aom-av1-3.7.0 --cpu-used=4 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p
pts/aom-av1-3.7.0 --cpu-used=6 Bosphorus_3840x2160.y4m Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K
pts/aom-av1-3.7.0 --cpu-used=6 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p
pts/aom-av1-3.7.0 --cpu-used=6 --rt Bosphorus_3840x2160.y4m Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K
pts/aom-av1-3.7.0 --cpu-used=6 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p
pts/aom-av1-3.7.0 --cpu-used=8 --rt Bosphorus_3840x2160.y4m Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K
pts/aom-av1-3.7.0 --cpu-used=8 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p
pts/aom-av1-3.7.0 --cpu-used=9 --rt Bosphorus_3840x2160.y4m Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K
pts/aom-av1-3.7.0 --cpu-used=9 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p
pts/aom-av1-3.7.0 --cpu-used=10 --rt Bosphorus_3840x2160.y4m Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K
pts/aom-av1-3.7.0 --cpu-used=10 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p
pts/aom-av1-3.7.0 --cpu-used=11 --rt Bosphorus_3840x2160.y4m Encoder Mode: Speed 11 Realtime - Input: Bosphorus 4K
pts/aom-av1-3.7.0 --cpu-used=11 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m Encoder Mode: Speed 11 Realtime - Input: Bosphorus 1080p
pts/stress-ng-1.11.0 --io-uring -1 --no-rand-seed Test: IO_uring
pts/dragonflydb-1.1.0 -c 60 --ratio=1:100 Clients Per Thread: 60 - Set To Get Ratio: 1:100
pts/dragonflydb-1.1.0 -c 60 --ratio=1:10 Clients Per Thread: 60 - Set To Get Ratio: 1:10
pts/build-linux-kernel-1.15.0 allmodconfig Build: allmodconfig
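Each entry above pairs a test profile with the exact arguments used for one result graph. Any profile can also be installed and run on its own rather than through the full suite, for example (a sketch; pts/svt-av1 is picked arbitrarily from the list, and the encoder preset and input are chosen at the interactive prompts):

    # Install and benchmark a single profile from the list above
    phoronix-test-suite install pts/svt-av1
    phoronix-test-suite benchmark pts/svt-av1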