oktoberfest

Tests for a future article. Intel Core i5-12600K testing with an ASUS PRIME Z690-P WIFI D4 motherboard (0605 BIOS) and ASUS Intel ADL-S GT1 15GB graphics on Ubuntu 22.04, via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2310296-PTS-OKTOBERF32
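The comparison command above can also be scripted. A minimal sketch (assumes `phoronix-test-suite` is installed and on the PATH; the actual benchmark invocation is left commented out so the script only prints the command it would run):

```shell
# Build the Phoronix Test Suite command that reproduces this comparison.
# The result ID is the one published for this result file.
RESULT_ID="2310296-PTS-OKTOBERF32"
CMD="phoronix-test-suite benchmark $RESULT_ID"

# Print the command for review before running a multi-hour benchmark session.
echo "$CMD"

# Uncomment to actually download the result file and run the comparison:
# $CMD
```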
The tests in this result file fall within the following categories:

AV1: 4 Tests
C++ Boost Tests: 2 Tests
Timed Code Compilation: 5 Tests
C/C++ Compiler Tests: 6 Tests
CPU Massive: 12 Tests
Creator Workloads: 16 Tests
Database Test Suite: 3 Tests
Encoding: 6 Tests
Game Development: 4 Tests
HPC - High Performance Computing: 9 Tests
Common Kernel Benchmarks: 2 Tests
Machine Learning: 5 Tests
MPI Benchmarks: 2 Tests
Multi-Core: 19 Tests
NVIDIA GPU Compute: 2 Tests
Intel oneAPI: 6 Tests
OpenMPI Tests: 5 Tests
Programmer / Developer System Benchmarks: 5 Tests
Python Tests: 7 Tests
Raytracing: 2 Tests
Renderers: 3 Tests
Server: 6 Tests
Server CPU Tests: 7 Tests
Single-Threaded: 2 Tests
Video Encoding: 5 Tests
Common Workstation Benchmarks: 2 Tests

Runs

Run Identifier | Date            | Test Duration
a              | October 29 2023 | 7 Hours, 46 Minutes
b              | October 29 2023 | 7 Hours, 30 Minutes
Average:                           7 Hours, 38 Minutes


oktoberfest Suite 1.0.0 (System Test suite extracted from oktoberfest)

pts/aom-av1-3.7.0 --cpu-used=0 --limit=20 Bosphorus_3840x2160.y4m (Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K)
pts/aom-av1-3.7.0 --cpu-used=4 Bosphorus_3840x2160.y4m (Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K)
pts/aom-av1-3.7.0 --cpu-used=6 --rt Bosphorus_3840x2160.y4m (Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K)
pts/aom-av1-3.7.0 --cpu-used=6 Bosphorus_3840x2160.y4m (Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K)
pts/aom-av1-3.7.0 --cpu-used=8 --rt Bosphorus_3840x2160.y4m (Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K)
pts/aom-av1-3.7.0 --cpu-used=9 --rt Bosphorus_3840x2160.y4m (Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K)
pts/aom-av1-3.7.0 --cpu-used=10 --rt Bosphorus_3840x2160.y4m (Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K)
pts/aom-av1-3.7.0 --cpu-used=11 --rt Bosphorus_3840x2160.y4m (Encoder Mode: Speed 11 Realtime - Input: Bosphorus 4K)
pts/aom-av1-3.7.0 --cpu-used=0 --limit=20 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p)
pts/aom-av1-3.7.0 --cpu-used=4 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p)
pts/aom-av1-3.7.0 --cpu-used=6 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p)
pts/aom-av1-3.7.0 --cpu-used=6 Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p)
pts/aom-av1-3.7.0 --cpu-used=8 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p)
pts/aom-av1-3.7.0 --cpu-used=9 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p)
pts/aom-av1-3.7.0 --cpu-used=10 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p)
pts/aom-av1-3.7.0 --cpu-used=11 --rt Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m (Encoder Mode: Speed 11 Realtime - Input: Bosphorus 1080p)
pts/cassandra-1.2.0 WRITE (Test: Writes)
pts/apache-3.0.0 -c 100 (Concurrent Requests: 100)
pts/apache-3.0.0 -c 200 (Concurrent Requests: 200)
pts/apache-3.0.0 -c 500 (Concurrent Requests: 500)
pts/apache-3.0.0 -c 1000 (Concurrent Requests: 1000)
pts/blender-3.6.0 -b ../bmw27_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU (Blend File: BMW27 - Compute: CPU-Only)
pts/blender-3.6.0 -b ../classroom_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU (Blend File: Classroom - Compute: CPU-Only)
pts/blender-3.6.0 -b ../fishy_cat_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU (Blend File: Fishy Cat - Compute: CPU-Only)
pts/blender-3.6.0 -b ../barbershop_interior_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU (Blend File: Barbershop - Compute: CPU-Only)
pts/blender-3.6.0 -b ../pavillon_barcelone_gpu.blend -o output.test -x 1 -F JPEG -f 1 -- --cycles-device CPU (Blend File: Pabellon Barcelona - Compute: CPU-Only)
pts/brl-cad-1.5.0 (VGR Performance Metric)
pts/build2-1.2.0 (Time To Compile)
pts/dav1d-1.14.0 -i chimera_8b_1080p.ivf (Video Input: Chimera 1080p)
pts/dav1d-1.14.0 -i summer_nature_4k.ivf (Video Input: Summer Nature 4K)
pts/dav1d-1.14.0 -i summer_nature_1080p.ivf (Video Input: Summer Nature 1080p)
pts/dav1d-1.14.0 -i chimera_10b_1080p.ivf (Video Input: Chimera 1080p 10-bit)
pts/duckdb-1.0.0 benchmark/imdb (Benchmark: IMDB)
pts/duckdb-1.0.0 benchmark/tpch/parquet (Benchmark: TPC-H Parquet)
pts/easywave-1.0.0 -grid examples/e2Asean.grd -source examples/BengkuluSept2007.flt -time 240 (Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240)
pts/easywave-1.0.0 -grid examples/e2Asean.grd -source examples/BengkuluSept2007.flt -time 1200 (Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200)
pts/embree-1.6.0 pathtracer -c crown/crown.ecs (Binary: Pathtracer - Model: Crown)
pts/embree-1.6.0 pathtracer_ispc -c crown/crown.ecs (Binary: Pathtracer ISPC - Model: Crown)
pts/embree-1.6.0 pathtracer -c asian_dragon/asian_dragon.ecs (Binary: Pathtracer - Model: Asian Dragon)
pts/embree-1.6.0 pathtracer -c asian_dragon_obj/asian_dragon.ecs (Binary: Pathtracer - Model: Asian Dragon Obj)
pts/embree-1.6.0 pathtracer_ispc -c asian_dragon/asian_dragon.ecs (Binary: Pathtracer ISPC - Model: Asian Dragon)
pts/embree-1.6.0 pathtracer_ispc -c asian_dragon_obj/asian_dragon.ecs (Binary: Pathtracer ISPC - Model: Asian Dragon Obj)
pts/espeak-1.7.0 (Text-To-Speech Synthesis)
pts/hpcg-1.3.0 --nx=104 --ny=104 --nz=104 --rt=60 (X Y Z: 104 104 104 - RT: 60)
pts/hpcg-1.3.0 --nx=144 --ny=144 --nz=144 --rt=60 (X Y Z: 144 144 144 - RT: 60)
pts/oidn-2.1.0 -r RT.ldr_alb_nrm.3840x2160 -d cpu (Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only)
pts/avifenc-1.4.0 -s 0 (Encoder Speed: 0)
pts/avifenc-1.4.0 -s 2 (Encoder Speed: 2)
pts/avifenc-1.4.0 -s 6 (Encoder Speed: 6)
pts/avifenc-1.4.0 -s 6 -l (Encoder Speed: 6, Lossless)
pts/avifenc-1.4.0 -s 10 -l (Encoder Speed: 10, Lossless)
pts/libxsmm-1.0.1 128 128 128 (M N K: 128)
pts/libxsmm-1.0.1 32 32 32 (M N K: 32)
pts/libxsmm-1.0.1 64 64 64 (M N K: 64)
pts/liquid-dsp-1.6.0 -n 1 -b 256 -f 32 (Threads: 1 - Buffer Length: 256 - Filter Length: 32)
pts/liquid-dsp-1.6.0 -n 1 -b 256 -f 57 (Threads: 1 - Buffer Length: 256 - Filter Length: 57)
pts/liquid-dsp-1.6.0 -n 2 -b 256 -f 32 (Threads: 2 - Buffer Length: 256 - Filter Length: 32)
pts/liquid-dsp-1.6.0 -n 2 -b 256 -f 57 (Threads: 2 - Buffer Length: 256 - Filter Length: 57)
pts/liquid-dsp-1.6.0 -n 4 -b 256 -f 32 (Threads: 4 - Buffer Length: 256 - Filter Length: 32)
pts/liquid-dsp-1.6.0 -n 4 -b 256 -f 57 (Threads: 4 - Buffer Length: 256 - Filter Length: 57)
pts/liquid-dsp-1.6.0 -n 8 -b 256 -f 32 (Threads: 8 - Buffer Length: 256 - Filter Length: 32)
pts/liquid-dsp-1.6.0 -n 8 -b 256 -f 57 (Threads: 8 - Buffer Length: 256 - Filter Length: 57)
pts/liquid-dsp-1.6.0 -n 1 -b 256 -f 512 (Threads: 1 - Buffer Length: 256 - Filter Length: 512)
pts/liquid-dsp-1.6.0 -n 16 -b 256 -f 32 (Threads: 16 - Buffer Length: 256 - Filter Length: 32)
pts/liquid-dsp-1.6.0 -n 16 -b 256 -f 57 (Threads: 16 - Buffer Length: 256 - Filter Length: 57)
pts/liquid-dsp-1.6.0 -n 2 -b 256 -f 512 (Threads: 2 - Buffer Length: 256 - Filter Length: 512)
pts/liquid-dsp-1.6.0 -n 4 -b 256 -f 512 (Threads: 4 - Buffer Length: 256 - Filter Length: 512)
pts/liquid-dsp-1.6.0 -n 8 -b 256 -f 512 (Threads: 8 - Buffer Length: 256 - Filter Length: 512)
pts/liquid-dsp-1.6.0 -n 16 -b 256 -f 512 (Threads: 16 - Buffer Length: 256 - Filter Length: 512)
pts/memcached-1.2.0 --ratio=1:10 (Set To Get Ratio: 1:10)
pts/memcached-1.2.0 --ratio=1:100 (Set To Get Ratio: 1:100)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: mobilenet)
pts/ncnn-1.5.0 -1 (Target: CPU-v2-v2 - Model: mobilenet-v2)
pts/ncnn-1.5.0 -1 (Target: CPU-v3-v3 - Model: mobilenet-v3)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: shufflenet-v2)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: mnasnet)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: efficientnet-b0)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: blazeface)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: googlenet)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: vgg16)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: resnet18)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: alexnet)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: resnet50)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: yolov4-tiny)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: squeezenet_ssd)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: regnety_400m)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: vision_transformer)
pts/ncnn-1.5.0 -1 (Target: CPU - Model: FastestDet)
pts/nekrs-1.1.0 kershaw kershaw.par (Input: Kershaw)
pts/nekrs-1.1.0 turbPipePeriodic turbPipe.par (Input: TurboPipe Periodic)
pts/deepsparse-1.5.2 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario async (Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario sync (Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.2 zoo:nlp/sentiment_analysis/oberta-base/pytorch/huggingface/sst2/pruned90_quant-none --input_shapes='[1,128]' --scenario async (Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:nlp/sentiment_analysis/oberta-base/pytorch/huggingface/sst2/pruned90_quant-none --input_shapes='[1,128]' --scenario sync (Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.2 zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned90-none --scenario async (Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned90-none --scenario sync (Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.2 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario async (Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario sync (Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async (Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync (Model: ResNet-50, Baseline - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_uniform_quant-none --scenario async (Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_uniform_quant-none --scenario sync (Model: ResNet-50, Sparse INT8 - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario async (Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario sync (Model: CV Detection, YOLOv5s COCO - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.2 zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/base-none --input_shapes='[1,128]' --scenario async (Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/base-none --input_shapes='[1,128]' --scenario sync (Model: BERT-Large, NLP Question Answering - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async (Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario sync (Model: CV Classification, ResNet-50 ImageNet - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned85-none --scenario async (Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned85-none --scenario sync (Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.2 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario async (Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario sync (Model: NLP Text Classification, DistilBERT mnli - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.2 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario async (Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario sync (Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.2 zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/pruned97_quant-none --input_shapes='[1,128]' --scenario async (Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/pruned97_quant-none --input_shapes='[1,128]' --scenario sync (Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.2 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --input_shapes='[1,128]' --scenario async (Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --input_shapes='[1,128]' --scenario sync (Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Synchronous Single-Stream)
pts/deepsparse-1.5.2 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario async (Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream)
pts/deepsparse-1.5.2 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario sync (Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Synchronous Single-Stream)
pts/nginx-3.0.1 -c 100 (Connections: 100)
pts/nginx-3.0.1 -c 200 (Connections: 200)
pts/nginx-3.0.1 -c 500 (Connections: 500)
pts/nginx-3.0.1 -c 1000 (Connections: 1000)
pts/onednn-3.3.0 --ip --batch=inputs/ip/shapes_1d --cfg=f32 --engine=cpu (Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU)
pts/onednn-3.3.0 --ip --batch=inputs/ip/shapes_3d --cfg=f32 --engine=cpu (Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU)
pts/onednn-3.3.0 --conv --batch=inputs/conv/shapes_auto --cfg=f32 --engine=cpu (Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU)
pts/onednn-3.3.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=f32 --engine=cpu (Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU)
pts/onednn-3.3.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=f32 --engine=cpu (Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU)
pts/onednn-3.3.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=f32 --engine=cpu (Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU)
pts/onednn-3.3.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=f32 --engine=cpu (Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU)
pts/openradioss-1.1.1 Bumper_Beam_AP_meshed_0000.rad Bumper_Beam_AP_meshed_0001.rad (Model: Bumper Beam)
pts/openradioss-1.1.1 Cell_Phone_Drop_0000.rad Cell_Phone_Drop_0001.rad (Model: Cell Phone Drop Test)
pts/openradioss-1.1.1 BIRD_WINDSHIELD_v1_0000.rad BIRD_WINDSHIELD_v1_0001.rad (Model: Bird Strike on Windshield)
pts/openradioss-1.1.1 RUBBER_SEAL_IMPDISP_GEOM_0000.rad RUBBER_SEAL_IMPDISP_GEOM_0001.rad (Model: Rubber O-Ring Seal Installation)
pts/openradioss-1.1.1 fsi_drop_container_0000.rad fsi_drop_container_0001.rad (Model: INIVOL and Fluid Structure Interaction Drop Container)
pts/openvkl-2.0.0 vklBenchmarkCPU --benchmark_filter=ispc (Benchmark: vklBenchmarkCPU ISPC)
pts/openvkl-2.0.0 vklBenchmarkCPU --benchmark_filter=scalar (Benchmark: vklBenchmarkCPU Scalar)
pts/encode-opus-1.4.0 (WAV To Opus Encode)
pts/ospray-2.12.0 --benchmark_filter=particle_volume/ao/real_time (Benchmark: particle_volume/ao/real_time)
pts/ospray-2.12.0 --benchmark_filter=particle_volume/scivis/real_time (Benchmark: particle_volume/scivis/real_time)
pts/ospray-2.12.0 --benchmark_filter=particle_volume/pathtracer/real_time (Benchmark: particle_volume/pathtracer/real_time)
pts/ospray-2.12.0 --benchmark_filter=gravity_spheres_volume/dim_512/ao/real_time (Benchmark: gravity_spheres_volume/dim_512/ao/real_time)
pts/ospray-2.12.0 --benchmark_filter=gravity_spheres_volume/dim_512/scivis/real_time (Benchmark: gravity_spheres_volume/dim_512/scivis/real_time)
pts/ospray-2.12.0 --benchmark_filter=gravity_spheres_volume/dim_512/pathtracer/real_time (Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time)
pts/ospray-studio-1.2.0 --cameras 1 1 --resolution 3840 2160 --spp 1 --renderer pathtracer (Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 2 2 --resolution 3840 2160 --spp 1 --renderer pathtracer (Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 3 3 --resolution 3840 2160 --spp 1 --renderer pathtracer (Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 1 1 --resolution 3840 2160 --spp 16 --renderer pathtracer (Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 1 1 --resolution 3840 2160 --spp 32 --renderer pathtracer (Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 2 2 --resolution 3840 2160 --spp 16 --renderer pathtracer (Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 2 2 --resolution 3840 2160 --spp 32 --renderer pathtracer (Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 3 3 --resolution 3840 2160 --spp 16 --renderer pathtracer (Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 3 3 --resolution 3840 2160 --spp 32 --renderer pathtracer (Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 1 1 --resolution 1920 1080 --spp 1 --renderer pathtracer (Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 2 2 --resolution 1920 1080 --spp 1 --renderer pathtracer (Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 3 3 --resolution 1920 1080 --spp 1 --renderer pathtracer (Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 1 1 --resolution 1920 1080 --spp 16 --renderer pathtracer (Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 1 1 --resolution 1920 1080 --spp 32 --renderer pathtracer (Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 2 2 --resolution 1920 1080 --spp 16 --renderer pathtracer (Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 2 2 --resolution 1920 1080 --spp 32 --renderer pathtracer (Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 3 3 --resolution 1920 1080 --spp 16 --renderer pathtracer (Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU)
pts/ospray-studio-1.2.0 --cameras 3 3 --resolution 1920 1080 --spp 32 --renderer pathtracer (Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU)
pts/palabos-1.0.0 100 (Grid Size: 100)
pts/palabos-1.0.0 400 (Grid Size: 400)
pts/qmcpack-1.7.0 tests/molecules/H4_ae optm-linear-linemin.xml (Input: H4_ae)
pts/qmcpack-1.7.0 tests/molecules/Li2_STO_ae Li2.STO.long.in.xml (Input: Li2_STO_ae)
pts/qmcpack-1.7.0 tests/molecules/LiH_ae_MSD vmc_long_opt_CI.in.xml (Input: LiH_ae_MSD)
pts/qmcpack-1.7.0 build/examples/molecules/H2O/example_H2O-1-1 simple-H2O.xml (Input: simple-H2O)
pts/qmcpack-1.7.0 tests/molecules/O_ae_pyscf_UHF vmc_long_noj.in.xml (Input: O_ae_pyscf_UHF)
pts/qmcpack-1.7.0 tests/molecules/FeCO6_b3lyp_gms vmc_long_noj.in.xml (Input: FeCO6_b3lyp_gms)
pts/quantlib-1.2.0 --mp (Configuration: Multi-Threaded)
pts/quantlib-1.2.0 (Configuration: Single-Threaded)
pts/sqlite-2.2.0 1 (Threads / Copies: 1)
pts/sqlite-2.2.0 2 (Threads / Copies: 2)
pts/sqlite-2.2.0 4 (Threads / Copies: 4)
pts/stress-ng-1.11.0 --hash -1 --no-rand-seed (Test: Hash)
pts/stress-ng-1.11.0 --pipe -1 --no-rand-seed (Test: Pipe)
pts/stress-ng-1.11.0 --poll -1 --no-rand-seed (Test: Poll)
pts/stress-ng-1.11.0 --zlib -1 --no-rand-seed (Test: Zlib)
pts/stress-ng-1.11.0 --clone -1 --no-rand-seed (Test: Cloning)
pts/stress-ng-1.11.0 --pthread -1 --no-rand-seed (Test: Pthread)
pts/stress-ng-1.11.0 --tree -1 --tree-method avl --no-rand-seed (Test: AVL Tree)
pts/stress-ng-1.11.0 --vnni -1 (Test: AVX-512 VNNI)
pts/stress-ng-1.11.0 --fp -1 --no-rand-seed (Test: Floating Point)
pts/stress-ng-1.11.0 --matrix-3d -1 --no-rand-seed (Test: Matrix 3D Math)
pts/stress-ng-1.11.0 --vecshuf -1 --no-rand-seed (Test: Vector Shuffle)
pts/stress-ng-1.11.0 --schedmix -1 (Test: Mixed Scheduler)
pts/stress-ng-1.11.0 --vecwide -1 --no-rand-seed (Test: Wide Vector Math)
pts/stress-ng-1.11.0 --fma -1 --no-rand-seed (Test: Fused Multiply-Add)
pts/stress-ng-1.11.0 --vecfp -1 --no-rand-seed (Test: Vector Floating Point)
pts/svt-av1-2.10.0 --preset 4 -n 160 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 (Encoder Mode: Preset 4 - Input: Bosphorus 4K)
pts/svt-av1-2.10.0 --preset 8 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 (Encoder Mode: Preset 8 - Input: Bosphorus 4K)
pts/svt-av1-2.10.0 --preset 12 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 (Encoder Mode: Preset 12 - Input: Bosphorus 4K)
pts/svt-av1-2.10.0 --preset 13 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 (Encoder Mode: Preset 13 - Input: Bosphorus 4K)
pts/svt-av1-2.10.0 --preset 4 -n 160 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 (Encoder Mode: Preset 4 - Input: Bosphorus 1080p)
pts/svt-av1-2.10.0 --preset 8 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 (Encoder Mode: Preset 8 - Input: Bosphorus 1080p)
pts/svt-av1-2.10.0 --preset 12 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 (Encoder Mode: Preset 12 - Input: Bosphorus 1080p)
pts/svt-av1-2.10.0 --preset 13 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 (Encoder Mode: Preset 13 - Input: Bosphorus 1080p)
pts/tensorflow-2.1.0 --device cpu --batch_size=16 --model=resnet50 (Device: CPU - Batch Size: 16 - Model: ResNet-50)
pts/tensorflow-2.1.0 --device cpu --batch_size=32 --model=resnet50 (Device: CPU - Batch Size: 32 - Model: ResNet-50)
pts/tensorflow-2.1.0 --device cpu --batch_size=64 --model=resnet50 (Device: CPU - Batch Size: 64 - Model: ResNet-50)
pts/build-gcc-1.4.0 (Time To Compile)
pts/build-godot-4.0.0 (Time To Compile)
pts/build-llvm-1.5.0 Ninja (Build System: Ninja)
pts/build-llvm-1.5.0 (Build System: Unix Makefiles)
pts/build-nodejs-1.3.0 (Time To Compile)
pts/vvenc-1.9.1 -i Bosphorus_3840x2160.y4m --preset fast (Video Input: Bosphorus 4K - Video Preset: Fast)
pts/vvenc-1.9.1 -i Bosphorus_3840x2160.y4m --preset faster (Video Input: Bosphorus 4K - Video Preset: Faster)
pts/vvenc-1.9.1 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset fast (Video Input: Bosphorus 1080p - Video Preset: Fast)
pts/vvenc-1.9.1 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset faster (Video Input: Bosphorus 1080p - Video Preset: Faster)
pts/whisper-cpp-1.0.0 -m models/ggml-base.en.bin -f ../2016-state-of-the-union.wav (Model: ggml-base.en - Input: 2016 State of the Union)
pts/whisper-cpp-1.0.0 -m models/ggml-small.en.bin -f ../2016-state-of-the-union.wav (Model: ggml-small.en - Input: 2016 State of the Union)
pts/whisper-cpp-1.0.0 -m models/ggml-medium.en.bin -f ../2016-state-of-the-union.wav (Model: ggml-medium.en - Input: 2016 State of the Union)
pts/z3-1.0.0 1.smt2 (SMT File: 1.smt2)
pts/z3-1.0.0 2.smt2 (SMT File: 2.smt2)