12600k 2023 intel alder lake

Intel Core i5-12600K testing with an ASUS PRIME Z690-P WIFI D4 (0605 BIOS) and ASUS Intel ADL-S GT1 15GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2302147-PTS-12600K2062
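
For readers who want to reproduce this comparison, a minimal sketch of the workflow is shown below. It assumes the Phoronix Test Suite is installed from the Ubuntu archive under the package name phoronix-test-suite (a manual install from phoronix-test-suite.com works as well); the result identifier is the one quoted above.

    # Install the Phoronix Test Suite (package name assumed; Ubuntu 22.04)
    sudo apt-get install phoronix-test-suite

    # Run the same tests as this result file and merge your system into the comparison
    phoronix-test-suite benchmark 2302147-PTS-12600K2062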

This result file contains tests in the following categories:

AV1: 2 tests
C/C++ Compiler Tests: 5 tests
CPU Massive: 6 tests
Creator Workloads: 12 tests
Database Test Suite: 3 tests
Encoding: 6 tests
Game Development: 2 tests
HPC - High Performance Computing: 4 tests
Machine Learning: 3 tests
Multi-Core: 13 tests
NVIDIA GPU Compute: 2 tests
Intel oneAPI: 4 tests
Programmer / Developer System Benchmarks: 2 tests
Python Tests: 3 tests
Server: 3 tests
Server CPU Tests: 5 tests
Video Encoding: 5 tests
Common Workstation Benchmarks: 2 tests

Run Management

Result Identifier    Date Run            Test Duration
a                    February 13 2023    4 Hours, 15 Minutes
b                    February 13 2023    4 Hours, 16 Minutes
c                    February 13 2023    4 Hours, 15 Minutes

12600k 2023 intel alder lake Suite 1.0.0 - System test suite extracted from 12600k 2023 intel alder lake.

pts/aom-av1-3.6.0 --cpu-used=0 --limit=20 Bosphorus_3840x2160.y4m - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K
pts/aom-av1-3.6.0 --cpu-used=4 Bosphorus_3840x2160.y4m - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K
pts/aom-av1-3.6.0 --cpu-used=6 --rt Bosphorus_3840x2160.y4m - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K
pts/aom-av1-3.6.0 --cpu-used=6 Bosphorus_3840x2160.y4m - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K
pts/aom-av1-3.6.0 --cpu-used=8 --rt Bosphorus_3840x2160.y4m - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K
pts/aom-av1-3.6.0 --cpu-used=9 --rt Bosphorus_3840x2160.y4m - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K
pts/aom-av1-3.6.0 --cpu-used=10 --rt Bosphorus_3840x2160.y4m - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K
pts/aom-av1-3.6.0 --cpu-used=0 --limit=20 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m - Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p
pts/aom-av1-3.6.0 --cpu-used=4 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m - Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p
pts/aom-av1-3.6.0 --cpu-used=6 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m - Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p
pts/aom-av1-3.6.0 --cpu-used=6 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m - Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p
pts/aom-av1-3.6.0 --cpu-used=8 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m - Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p
pts/aom-av1-3.6.0 --cpu-used=9 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m - Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p
pts/aom-av1-3.6.0 --cpu-used=10 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m - Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p
pts/blender-3.4.0 -b ../bmw27_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU - Blend File: BMW27 - Compute: CPU-Only
pts/blender-3.4.0 -b ../classroom_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU - Blend File: Classroom - Compute: CPU-Only
pts/blender-3.4.0 -b ../fishy_cat_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU - Blend File: Fishy Cat - Compute: CPU-Only
pts/blender-3.4.0 -b ../barbershop_interior_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU - Blend File: Barbershop - Compute: CPU-Only
pts/blender-3.4.0 -b ../pavillon_barcelone_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU - Blend File: Pabellon Barcelona - Compute: CPU-Only
pts/brl-cad-1.4.0 - VGR Performance Metric
pts/clickhouse-1.2.0 - 100M Rows Hits Dataset, First Run / Cold Cache
pts/clickhouse-1.2.0 - 100M Rows Hits Dataset, Second Run
pts/clickhouse-1.2.0 - 100M Rows Hits Dataset, Third Run
pts/cockroach-1.0.2 movr --concurrency 128 - Workload: MoVR - Concurrency: 128
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 10 --concurrency 128 - Workload: KV, 10% Reads - Concurrency: 128
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 50 --concurrency 128 - Workload: KV, 50% Reads - Concurrency: 128
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 60 --concurrency 128 - Workload: KV, 60% Reads - Concurrency: 128
pts/cockroach-1.0.2 kv --ramp 10s --read-percent 95 --concurrency 128 - Workload: KV, 95% Reads - Concurrency: 128
pts/embree-1.3.0 pathtracer -c crown/crown.ecs - Binary: Pathtracer - Model: Crown
pts/embree-1.3.0 pathtracer_ispc -c crown/crown.ecs - Binary: Pathtracer ISPC - Model: Crown
pts/embree-1.3.0 pathtracer -c asian_dragon/asian_dragon.ecs - Binary: Pathtracer - Model: Asian Dragon
pts/embree-1.3.0 pathtracer -c asian_dragon_obj/asian_dragon.ecs - Binary: Pathtracer - Model: Asian Dragon Obj
pts/embree-1.3.0 pathtracer_ispc -c asian_dragon/asian_dragon.ecs - Binary: Pathtracer ISPC - Model: Asian Dragon
pts/embree-1.3.0 pathtracer_ispc -c asian_dragon_obj/asian_dragon.ecs - Binary: Pathtracer ISPC - Model: Asian Dragon Obj
pts/etlegacy-1.3.0 +set r_customwidth 1920 +set r_customheight 1080 - Resolution: 1920 x 1080
pts/etlegacy-1.3.0 +set r_customwidth 1920 +set r_customheight 1200 - Resolution: 1920 x 1200
pts/etlegacy-1.3.0 +set r_customwidth 2560 +set r_customheight 1440 - Resolution: 2560 x 1440
pts/etlegacy-1.3.0 +set r_customwidth 3840 +set r_customheight 2160 - Resolution: 3840 x 2160
pts/fluidx3d-1.1.0 FP32-FP32 - Test: FP32-FP32
pts/fluidx3d-1.1.0 FP32-FP16C - Test: FP32-FP16C
pts/fluidx3d-1.1.0 FP32-FP16S - Test: FP32-FP16S
pts/gromacs-1.8.0 mpi-build water-cut1.0_GMX50_bare/1536 - Implementation: MPI CPU - Input: water_GMX50_bare
pts/kvazaar-1.2.0 -i Bosphorus_3840x2160.y4m --preset slow - Video Input: Bosphorus 4K - Video Preset: Slow
pts/kvazaar-1.2.0 -i Bosphorus_3840x2160.y4m --preset medium - Video Input: Bosphorus 4K - Video Preset: Medium
pts/kvazaar-1.2.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset slow - Video Input: Bosphorus 1080p - Video Preset: Slow
pts/kvazaar-1.2.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset medium - Video Input: Bosphorus 1080p - Video Preset: Medium
pts/kvazaar-1.2.0 -i Bosphorus_3840x2160.y4m --preset veryfast - Video Input: Bosphorus 4K - Video Preset: Very Fast
pts/kvazaar-1.2.0 -i Bosphorus_3840x2160.y4m --preset superfast - Video Input: Bosphorus 4K - Video Preset: Super Fast
pts/kvazaar-1.2.0 -i Bosphorus_3840x2160.y4m --preset ultrafast - Video Input: Bosphorus 4K - Video Preset: Ultra Fast
pts/kvazaar-1.2.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset veryfast - Video Input: Bosphorus 1080p - Video Preset: Very Fast
pts/kvazaar-1.2.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset superfast - Video Input: Bosphorus 1080p - Video Preset: Super Fast
pts/kvazaar-1.2.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset ultrafast - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast
pts/deepsparse-1.3.2 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario async - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.3.2 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario sync - Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream
pts/deepsparse-1.3.2 zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned90-none --scenario async - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.3.2 zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned90-none --scenario sync - Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream
pts/deepsparse-1.3.2 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario async - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.3.2 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario sync - Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream
pts/deepsparse-1.3.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario async - Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.3.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario sync - Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream
pts/deepsparse-1.3.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async - Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.3.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync - Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream
pts/deepsparse-1.3.2 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario async - Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.3.2 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario sync - Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream
pts/deepsparse-1.3.2 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario async - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.3.2 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario sync - Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream
pts/deepsparse-1.3.2 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario async - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.3.2 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --scenario sync - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream
pts/deepsparse-1.3.2 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario async - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.3.2 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario sync - Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream
pts/onednn-3.0.0 --ip --batch=inputs/ip/shapes_1d --cfg=f32 --engine=cpu - Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
pts/onednn-3.0.0 --ip --batch=inputs/ip/shapes_3d --cfg=f32 --engine=cpu - Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
pts/onednn-3.0.0 --ip --batch=inputs/ip/shapes_1d --cfg=u8s8f32 --engine=cpu - Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
pts/onednn-3.0.0 --ip --batch=inputs/ip/shapes_3d --cfg=u8s8f32 --engine=cpu - Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
pts/onednn-3.0.0 --ip --batch=inputs/ip/shapes_1d --cfg=bf16bf16bf16 --engine=cpu - Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-3.0.0 --ip --batch=inputs/ip/shapes_3d --cfg=bf16bf16bf16 --engine=cpu - Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-3.0.0 --conv --batch=inputs/conv/shapes_auto --cfg=f32 --engine=cpu - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
pts/onednn-3.0.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=f32 --engine=cpu - Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
pts/onednn-3.0.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=f32 --engine=cpu - Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
pts/onednn-3.0.0 --conv --batch=inputs/conv/shapes_auto --cfg=u8s8f32 --engine=cpu - Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
pts/onednn-3.0.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=u8s8f32 --engine=cpu - Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
pts/onednn-3.0.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=u8s8f32 --engine=cpu - Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
pts/onednn-3.0.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=f32 --engine=cpu - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
pts/onednn-3.0.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=f32 --engine=cpu - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
pts/onednn-3.0.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=u8s8f32 --engine=cpu - Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
pts/onednn-3.0.0 --conv --batch=inputs/conv/shapes_auto --cfg=bf16bf16bf16 --engine=cpu - Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-3.0.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=bf16bf16bf16 --engine=cpu - Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-3.0.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=bf16bf16bf16 --engine=cpu - Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-3.0.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=u8s8f32 --engine=cpu - Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
pts/onednn-3.0.0 --matmul --batch=inputs/matmul/shapes_transformer --cfg=f32 --engine=cpu - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU
pts/onednn-3.0.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=bf16bf16bf16 --engine=cpu - Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-3.0.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=bf16bf16bf16 --engine=cpu - Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-3.0.0 --matmul --batch=inputs/matmul/shapes_transformer --cfg=u8s8f32 --engine=cpu - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU
pts/onednn-3.0.0 --matmul --batch=inputs/matmul/shapes_transformer --cfg=bf16bf16bf16 --engine=cpu - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU
pts/openvino-1.2.0 -m models/intel/face-detection-0206/FP16/face-detection-0206.xml -d CPU - Model: Face Detection FP16 - Device: CPU
pts/openvino-1.2.0 -m models/intel/person-detection-0106/FP16/person-detection-0106.xml -d CPU - Model: Person Detection FP16 - Device: CPU
pts/openvino-1.2.0 -m models/intel/person-detection-0106/FP32/person-detection-0106.xml -d CPU - Model: Person Detection FP32 - Device: CPU
pts/openvino-1.2.0 -m models/intel/vehicle-detection-0202/FP16/vehicle-detection-0202.xml -d CPU - Model: Vehicle Detection FP16 - Device: CPU
pts/openvino-1.2.0 -m models/intel/face-detection-0206/FP16-INT8/face-detection-0206.xml -d CPU - Model: Face Detection FP16-INT8 - Device: CPU
pts/openvino-1.2.0 -m models/intel/vehicle-detection-0202/FP16-INT8/vehicle-detection-0202.xml -d CPU - Model: Vehicle Detection FP16-INT8 - Device: CPU
pts/openvino-1.2.0 -m models/intel/weld-porosity-detection-0001/FP16/weld-porosity-detection-0001.xml -d CPU - Model: Weld Porosity Detection FP16 - Device: CPU
pts/openvino-1.2.0 -m models/intel/machine-translation-nar-en-de-0002/FP16/machine-translation-nar-en-de-0002.xml -d CPU - Model: Machine Translation EN To DE FP16 - Device: CPU
pts/openvino-1.2.0 -m models/intel/weld-porosity-detection-0001/FP16-INT8/weld-porosity-detection-0001.xml -d CPU - Model: Weld Porosity Detection FP16-INT8 - Device: CPU
pts/openvino-1.2.0 -m models/intel/person-vehicle-bike-detection-2004/FP16/person-vehicle-bike-detection-2004.xml -d CPU - Model: Person Vehicle Bike Detection FP16 - Device: CPU
pts/openvino-1.2.0 -m models/intel/age-gender-recognition-retail-0013/FP16/age-gender-recognition-retail-0013.xml -d CPU - Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
pts/openvino-1.2.0 -m models/intel/age-gender-recognition-retail-0013/FP16-INT8/age-gender-recognition-retail-0013.xml -d CPU - Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
pts/openvkl-1.3.0 vklBenchmark --benchmark_filter=ispc - Benchmark: vklBenchmark ISPC
pts/openvkl-1.3.0 vklBenchmark --benchmark_filter=scalar - Benchmark: vklBenchmark Scalar
pts/rocksdb-1.4.0 --benchmarks="fillrandom" - Test: Random Fill
pts/rocksdb-1.4.0 --benchmarks="readrandom" - Test: Random Read
pts/rocksdb-1.4.0 --benchmarks="updaterandom" - Test: Update Random
pts/rocksdb-1.4.0 --benchmarks="fillseq" - Test: Sequential Fill
pts/rocksdb-1.4.0 --benchmarks="fillsync" - Test: Random Fill Sync
pts/rocksdb-1.4.0 --benchmarks="readwhilewriting" - Test: Read While Writing
pts/rocksdb-1.4.0 --benchmarks="readrandomwriterandom" - Test: Read Random Write Random
pts/stargate-1.1.0 44100 512 - Sample Rate: 44100 - Buffer Size: 512
pts/stargate-1.1.0 96000 512 - Sample Rate: 96000 - Buffer Size: 512
pts/stargate-1.1.0 192000 512 - Sample Rate: 192000 - Buffer Size: 512
pts/stargate-1.1.0 44100 1024 - Sample Rate: 44100 - Buffer Size: 1024
pts/stargate-1.1.0 48000 512 - Sample Rate: 48000 - Buffer Size: 512
pts/stargate-1.1.0 96000 1024 - Sample Rate: 96000 - Buffer Size: 1024
pts/stargate-1.1.0 192000 1024 - Sample Rate: 192000 - Buffer Size: 1024
pts/stargate-1.1.0 48000 1024 - Sample Rate: 48000 - Buffer Size: 1024
pts/svt-av1-2.7.0 --preset 4 -n 160 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 - Encoder Mode: Preset 4 - Input: Bosphorus 4K
pts/svt-av1-2.7.0 --preset 8 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 - Encoder Mode: Preset 8 - Input: Bosphorus 4K
pts/svt-av1-2.7.0 --preset 12 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 - Encoder Mode: Preset 12 - Input: Bosphorus 4K
pts/svt-av1-2.7.0 --preset 13 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 - Encoder Mode: Preset 13 - Input: Bosphorus 4K
pts/svt-av1-2.7.0 --preset 4 -n 160 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p
pts/svt-av1-2.7.0 --preset 8 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p
pts/svt-av1-2.7.0 --preset 12 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p
pts/svt-av1-2.7.0 --preset 13 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p
pts/build-linux-kernel-1.15.0 defconfig - Build: defconfig
pts/build-linux-kernel-1.15.0 allmodconfig - Build: allmodconfig
pts/uvg266-1.0.0 -i Bosphorus_3840x2160.y4m --preset slow - Video Input: Bosphorus 4K - Video Preset: Slow
pts/uvg266-1.0.0 -i Bosphorus_3840x2160.y4m --preset medium - Video Input: Bosphorus 4K - Video Preset: Medium
pts/uvg266-1.0.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset slow - Video Input: Bosphorus 1080p - Video Preset: Slow
pts/uvg266-1.0.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset medium - Video Input: Bosphorus 1080p - Video Preset: Medium
pts/uvg266-1.0.0 -i Bosphorus_3840x2160.y4m --preset veryfast - Video Input: Bosphorus 4K - Video Preset: Very Fast
pts/uvg266-1.0.0 -i Bosphorus_3840x2160.y4m --preset superfast - Video Input: Bosphorus 4K - Video Preset: Super Fast
pts/uvg266-1.0.0 -i Bosphorus_3840x2160.y4m --preset ultrafast - Video Input: Bosphorus 4K - Video Preset: Ultra Fast
pts/uvg266-1.0.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset veryfast - Video Input: Bosphorus 1080p - Video Preset: Very Fast
pts/uvg266-1.0.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset superfast - Video Input: Bosphorus 1080p - Video Preset: Super Fast
pts/uvg266-1.0.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset ultrafast - Video Input: Bosphorus 1080p - Video Preset: Ultra Fast
pts/vvenc-1.0.0 -i Bosphorus_3840x2160.y4m --preset fast - Video Input: Bosphorus 4K - Video Preset: Fast
pts/vvenc-1.0.0 -i Bosphorus_3840x2160.y4m --preset faster - Video Input: Bosphorus 4K - Video Preset: Faster
pts/vvenc-1.0.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset fast - Video Input: Bosphorus 1080p - Video Preset: Fast
pts/vvenc-1.0.0 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset faster - Video Input: Bosphorus 1080p - Video Preset: Faster
pts/compress-zstd-1.6.0 -b3 - Compression Level: 3 - Compression Speed
pts/compress-zstd-1.6.0 -b3 - Compression Level: 3 - Decompression Speed
pts/compress-zstd-1.6.0 -b8 - Compression Level: 8 - Compression Speed
pts/compress-zstd-1.6.0 -b8 - Compression Level: 8 - Decompression Speed
pts/compress-zstd-1.6.0 -b12 - Compression Level: 12 - Compression Speed
pts/compress-zstd-1.6.0 -b12 - Compression Level: 12 - Decompression Speed
pts/compress-zstd-1.6.0 -b19 - Compression Level: 19 - Compression Speed
pts/compress-zstd-1.6.0 -b19 - Compression Level: 19 - Decompression Speed
pts/compress-zstd-1.6.0 -b3 --long - Compression Level: 3, Long Mode - Compression Speed
pts/compress-zstd-1.6.0 -b3 --long - Compression Level: 3, Long Mode - Decompression Speed
pts/compress-zstd-1.6.0 -b8 --long - Compression Level: 8, Long Mode - Compression Speed
pts/compress-zstd-1.6.0 -b8 --long - Compression Level: 8, Long Mode - Decompression Speed
pts/compress-zstd-1.6.0 -b19 --long - Compression Level: 19, Long Mode - Compression Speed
pts/compress-zstd-1.6.0 -b19 --long - Compression Level: 19, Long Mode - Decompression Speed
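
Each entry above pairs a pts/ test profile and the exact arguments passed to the underlying program with the result title used in the graphs. Individual profiles can also be installed and run outside of this extracted suite; the sketch below assumes the versioned profile identifiers listed above resolve against OpenBenchmarking.org.

    # Fetch and build a single profile from this suite, then run only that test
    phoronix-test-suite install pts/svt-av1-2.7.0
    phoronix-test-suite benchmark pts/svt-av1-2.7.0

    # Or re-run the entire extracted suite by its result identifier, as noted near the top of this page
    phoronix-test-suite benchmark 2302147-PTS-12600K2062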