tgls

Intel Core i7-1185G7 testing with a Dell 0DXP1F (3.7.0 BIOS) and Intel Xe TGL GT2 15GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2311135-PTS-TGLS547809
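For reference, a minimal sketch of installing the Phoronix Test Suite and launching that comparison on an Ubuntu system is shown below; the result ID is the one quoted above, while the package-manager step is only an assumption about how the client is obtained (it can equally be installed from the .deb or tarball published at phoronix-test-suite.com):

  # Install the Phoronix Test Suite client (Ubuntu universe package; assumed install method)
  sudo apt-get install phoronix-test-suite
  # Download this result file, install the same test profiles, and benchmark the local system against runs a/b/c
  phoronix-test-suite benchmark 2311135-PTS-TGLS547809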
Test categories represented in this result file:

AV1: 2 Tests
C++ Boost Tests: 3 Tests
Timed Code Compilation: 3 Tests
C/C++ Compiler Tests: 2 Tests
CPU Massive: 8 Tests
Creator Workloads: 11 Tests
Encoding: 4 Tests
Game Development: 2 Tests
HPC - High Performance Computing: 7 Tests
Java Tests: 3 Tests
Machine Learning: 4 Tests
Multi-Core: 15 Tests
Intel oneAPI: 6 Tests
OpenMPI Tests: 3 Tests
Programmer / Developer System Benchmarks: 4 Tests
Python Tests: 5 Tests
Scientific Computing: 2 Tests
Server CPU Tests: 6 Tests
Video Encoding: 4 Tests

Run Management

  Result Identifier    Date Run            Test Duration
  a                    November 12 2023    8 Hours, 30 Minutes
  b                    November 12 2023    21 Hours, 56 Minutes
  c                    November 13 2023    21 Hours, 46 Minutes
  Average                                  17 Hours, 24 Minutes

tgls Suite 1.0.0 - System Test suite extracted from tgls. Test profiles and options included:

pts/ncnn-1.5.0 Target: Vulkan GPU - Model: blazeface
pts/ncnn-1.5.0 Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3
pts/ncnn-1.5.0 -1 Target: CPU - Model: alexnet
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: alexnet
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: efficientnet-b0
pts/ncnn-1.5.0 -1 Target: CPU-v2-v2 - Model: mobilenet-v2
pts/ncnn-1.5.0 -1 Target: CPU - Model: shufflenet-v2
pts/ncnn-1.5.0 Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2
pts/ncnn-1.5.0 -1 Target: CPU - Model: mnasnet
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: mnasnet
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: shufflenet-v2
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: regnety_400m
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: googlenet
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: resnet18
pts/ncnn-1.5.0 -1 Target: CPU-v3-v3 - Model: mobilenet-v3
pts/ncnn-1.5.0 -1 Target: CPU - Model: resnet18
pts/ncnn-1.5.0 -1 Target: CPU - Model: resnet50
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: squeezenet_ssd
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: mobilenet
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: resnet50
pts/ncnn-1.5.0 -1 Target: CPU - Model: squeezenet_ssd
pts/ncnn-1.5.0 -1 Target: CPU - Model: mobilenet
pts/ncnn-1.5.0 -1 Target: CPU - Model: googlenet
pts/ncnn-1.5.0 -1 Target: CPU - Model: blazeface
pts/ncnn-1.5.0 -1 Target: CPU - Model: regnety_400m
pts/ncnn-1.5.0 -1 Target: CPU - Model: yolov4-tiny
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: FastestDet
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: yolov4-tiny
pts/ncnn-1.5.0 -1 Target: CPU - Model: FastestDet
pts/ncnn-1.5.0 -1 Target: CPU - Model: efficientnet-b0
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: vision_transformer
pts/ncnn-1.5.0 Target: Vulkan GPU - Model: vgg16
pts/ncnn-1.5.0 -1 Target: CPU - Model: vgg16
pts/ncnn-1.5.0 -1 Target: CPU - Model: vision_transformer
pts/stress-ng-1.11.0 --mmap -1 --no-rand-seed Test: MMAP
pts/stress-ng-1.11.0 --sock -1 --no-rand-seed --sock-zerocopy Test: Socket Activity
pts/dacapobench-1.1.0 tradebeans Java Test: Tradebeans
pts/stress-ng-1.11.0 --cache -1 --no-rand-seed Test: CPU Cache
pts/stress-ng-1.11.0 --vecfp -1 --no-rand-seed Test: Vector Floating Point
pts/stress-ng-1.11.0 --msg -1 --no-rand-seed Test: System V Message Passing
pts/stress-ng-1.11.0 --numa -1 --no-rand-seed Test: NUMA
pts/stress-ng-1.11.0 --str -1 --no-rand-seed Test: Glibc C String Functions
pts/cpuminer-opt-1.7.0 -a skein Algorithm: Skeincoin
pts/stress-ng-1.11.0 --fork -1 --no-rand-seed Test: Forking
pts/qmcpack-1.7.0 tests/molecules/FeCO6_b3lyp_gms vmc_long_noj.in.xml Input: FeCO6_b3lyp_gms
pts/ospray-studio-1.2.0 --cameras 1 1 --resolution 3840 2160 --spp 1 --renderer pathtracer Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
pts/stress-ng-1.11.0 --sem -1 --no-rand-seed Test: Semaphores
pts/stress-ng-1.11.0 --sendfile -1 --no-rand-seed Test: SENDFILE
pts/deepsparse-1.5.2 zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/pruned97_quant-none --input_shapes='[1,128]' --scenario async Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/stress-ng-1.11.0 --malloc -1 --no-rand-seed Test: Malloc
pts/cpuminer-opt-1.7.0 -a deep Algorithm: Deepcoin
pts/cpuminer-opt-1.7.0 -a blake2s Algorithm: Blake-2 S
pts/stress-ng-1.11.0 --mutex -1 --no-rand-seed Test: Mutex
pts/stress-ng-1.11.0 --memcpy -1 --no-rand-seed Test: Memory Copying
pts/stress-ng-1.11.0 --vecmath -1 --no-rand-seed Test: Vector Math
pts/cpuminer-opt-1.7.0 -a myr-gr Algorithm: Myriad-Groestl
pts/stress-ng-1.11.0 --vnni -1 Test: AVX-512 VNNI
pts/stress-ng-1.11.0 --qsort -1 --no-rand-seed Test: Glibc Qsort Data Sorting
pts/cpuminer-opt-1.7.0 -a scrypt Algorithm: scrypt
pts/stress-ng-1.11.0 --fp -1 --no-rand-seed Test: Floating Point
pts/stress-ng-1.11.0 --switch -1 --no-rand-seed Test: Context Switching
pts/cpuminer-opt-1.7.0 -a lbry Algorithm: LBC, LBRY Credits
pts/cpuminer-opt-1.7.0 -a minotaur Algorithm: Ringcoin
pts/stress-ng-1.11.0 --funccall -1 --no-rand-seed Test: Function Call
pts/stress-ng-1.11.0 --vecshuf -1 --no-rand-seed Test: Vector Shuffle
pts/stress-ng-1.11.0 --cpu -1 --cpu-method all --no-rand-seed Test: CPU Stress
pts/openvkl-2.0.0 vklBenchmarkCPU --benchmark_filter=ispc Benchmark: vklBenchmarkCPU ISPC
pts/ospray-studio-1.2.0 --cameras 3 3 --resolution 3840 2160 --spp 1 --renderer pathtracer Camera: 3 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
pts/cpuminer-opt-1.7.0 -a allium Algorithm: Garlicoin
pts/deepsparse-1.5.2 zoo:nlp/sentiment_analysis/oberta-base/pytorch/huggingface/sst2/pruned90_quant-none --input_shapes='[1,128]' --scenario async Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/svt-av1-2.10.0 --preset 12 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 12 - Input: Bosphorus 1080p
pts/dacapobench-1.1.0 h2o Java Test: H2O In-Memory Platform For Machine Learning
pts/cassandra-1.2.0 WRITE Test: Writes
pts/stress-ng-1.11.0 --matrix -1 --no-rand-seed Test: Matrix Math
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_uniform_quant-none --scenario async Model: ResNet-50, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/dacapobench-1.1.0 xalan Java Test: Apache Xalan XSLT
pts/dacapobench-1.1.0 pmd Java Test: PMD Source Code Analyzer
pts/onednn-3.3.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=f32 --engine=cpu Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
pts/stress-ng-1.11.0 --fma -1 --no-rand-seed Test: Fused Multiply-Add
pts/onednn-3.3.0 --ip --batch=inputs/ip/shapes_1d --cfg=bf16bf16bf16 --engine=cpu Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU
pts/svt-av1-2.10.0 --preset 12 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 12 - Input: Bosphorus 4K
pts/svt-av1-2.10.0 --preset 4 -n 160 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 4 - Input: Bosphorus 1080p
pts/openvino-1.4.0 -m models/intel/age-gender-recognition-retail-0013/FP16-INT8/age-gender-recognition-retail-0013.xml -d CPU Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
pts/stress-ng-1.11.0 --vecwide -1 --no-rand-seed Test: Wide Vector Math
pts/openvino-1.4.0 -m models/intel/handwritten-english-recognition-0001/FP16-INT8/handwritten-english-recognition-0001.xml -d CPU Model: Handwritten English Recognition FP16-INT8 - Device: CPU
pts/stress-ng-1.11.0 --pthread -1 --no-rand-seed Test: Pthread
pts/stress-ng-1.11.0 --pipe -1 --no-rand-seed Test: Pipe
pts/stress-ng-1.11.0 --futex -1 --no-rand-seed Test: Futex
pts/openvino-1.4.0 -m models/intel/handwritten-english-recognition-0001/FP16/handwritten-english-recognition-0001.xml -d CPU Model: Handwritten English Recognition FP16 - Device: CPU
pts/openvkl-2.0.0 vklBenchmarkCPU --benchmark_filter=scalar Benchmark: vklBenchmarkCPU Scalar
pts/easywave-1.0.0 -grid examples/e2Asean.grd -source examples/BengkuluSept2007.flt -time 240 Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240
pts/ospray-studio-1.2.0 --cameras 1 1 --resolution 1920 1080 --spp 32 --renderer pathtracer Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.2.0 --cameras 1 1 --resolution 3840 2160 --spp 32 --renderer pathtracer Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.2.0 --cameras 1 1 --resolution 1920 1080 --spp 1 --renderer pathtracer Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.2.0 --cameras 1 1 --resolution 1920 1080 --spp 16 --renderer pathtracer Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.2.0 --cameras 2 2 --resolution 1920 1080 --spp 16 --renderer pathtracer Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
pts/openvino-1.4.0 -m models/intel/face-detection-0206/FP16/face-detection-0206.xml -d CPU Model: Face Detection FP16 - Device: CPU
pts/ospray-studio-1.2.0 --cameras 2 2 --resolution 1920 1080 --spp 1 --renderer pathtracer Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.2.0 --cameras 2 2 --resolution 1920 1080 --spp 32 --renderer pathtracer Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
pts/blosc-1.3.0 blosclz shuffle 8388608 Test: blosclz shuffle - Buffer Size: 8MB
pts/ospray-studio-1.2.0 --cameras 1 1 --resolution 3840 2160 --spp 16 --renderer pathtracer Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.2.0 --cameras 3 3 --resolution 1920 1080 --spp 16 --renderer pathtracer Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.2.0 --cameras 3 3 --resolution 1920 1080 --spp 32 --renderer pathtracer Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.2.0 --cameras 2 2 --resolution 3840 2160 --spp 32 --renderer pathtracer Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.2.0 --cameras 3 3 --resolution 3840 2160 --spp 32 --renderer pathtracer Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU
pts/ospray-studio-1.2.0 --cameras 3 3 --resolution 1920 1080 --spp 1 --renderer pathtracer Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
pts/stress-ng-1.11.0 --atomic -1 --no-rand-seed Test: Atomic
pts/ospray-studio-1.2.0 --cameras 3 3 --resolution 3840 2160 --spp 16 --renderer pathtracer Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
pts/onednn-3.3.0 --ip --batch=inputs/ip/shapes_1d --cfg=f32 --engine=cpu Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU
pts/ospray-studio-1.2.0 --cameras 2 2 --resolution 3840 2160 --spp 16 --renderer pathtracer Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU
pts/qmcpack-1.7.0 tests/molecules/H4_ae optm-linear-linemin.xml Input: H4_ae
pts/openvino-1.4.0 -m models/intel/face-detection-retail-0005/FP16-INT8/face-detection-retail-0005.xml -d CPU Model: Face Detection Retail FP16-INT8 - Device: CPU
pts/deepsparse-1.5.2 zoo:nlp/text_classification/distilbert-none/pytorch/huggingface/mnli/base-none --scenario async Model: NLP Text Classification, DistilBERT mnli - Scenario: Asynchronous Multi-Stream
pts/deepsparse-1.5.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned85-none --scenario async Model: CV Detection, YOLOv5s COCO, Sparse INT8 - Scenario: Asynchronous Multi-Stream
pts/openvino-1.4.0 -m models/intel/machine-translation-nar-en-de-0002/FP16/machine-translation-nar-en-de-0002.xml -d CPU Model: Machine Translation EN To DE FP16 - Device: CPU
pts/openvino-1.4.0 -m models/intel/weld-porosity-detection-0001/FP16/weld-porosity-detection-0001.xml -d CPU Model: Weld Porosity Detection FP16 - Device: CPU
pts/svt-av1-2.10.0 --preset 8 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 8 - Input: Bosphorus 1080p
pts/cpuminer-opt-1.7.0 -a sha256q Algorithm: Quad SHA-256, Pyrite
pts/openvino-1.4.0 -m models/intel/person-detection-0303/FP32/person-detection-0303.xml -d CPU Model: Person Detection FP32 - Device: CPU
pts/openvino-1.4.0 -m models/intel/face-detection-0206/FP16-INT8/face-detection-0206.xml -d CPU Model: Face Detection FP16-INT8 - Device: CPU
pts/openvino-1.4.0 -m models/intel/vehicle-detection-0202/FP16/vehicle-detection-0202.xml -d CPU Model: Vehicle Detection FP16 - Device: CPU
pts/stress-ng-1.11.0 --memfd -1 --no-rand-seed Test: MEMFD
pts/qmcpack-1.7.0 tests/molecules/O_ae_pyscf_UHF vmc_long_noj.in.xml Input: O_ae_pyscf_UHF
pts/dacapobench-1.1.0 luindex Java Test: Apache Lucene Search Index
pts/stress-ng-1.11.0 --rdrand -1 --no-rand-seed Test: x86_64 RdRand
pts/stress-ng-1.11.0 --schedmix -1 Test: Mixed Scheduler
pts/openvino-1.4.0 -m models/intel/road-segmentation-adas-0001/FP16-INT8/road-segmentation-adas-0001.xml -d CPU Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
pts/ffmpeg-6.1.0 --encoder=libx264 live Encoder: libx264 - Scenario: Live
pts/openvino-1.4.0 -m models/intel/person-detection-0303/FP16/person-detection-0303.xml -d CPU Model: Person Detection FP16 - Device: CPU
pts/stress-ng-1.11.0 --hash -1 --no-rand-seed Test: Hash
pts/openvino-1.4.0 -m models/intel/person-vehicle-bike-detection-2004/FP16/person-vehicle-bike-detection-2004.xml -d CPU Model: Person Vehicle Bike Detection FP16 - Device: CPU
pts/deepsparse-1.5.2 zoo:nlp/text_classification/bert-base/pytorch/huggingface/sst2/base-none --input_shapes='[1,128]' --scenario async Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream
pts/openvino-1.4.0 -m models/intel/road-segmentation-adas-0001/FP16/road-segmentation-adas-0001.xml -d CPU Model: Road Segmentation ADAS FP16 - Device: CPU
pts/deepsparse-1.5.2 zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none --scenario async Model: CV Detection, YOLOv5s COCO - Scenario: Asynchronous Multi-Stream
pts/dacapobench-1.1.0 spring Java Test: Spring Boot
pts/embree-1.6.0 pathtracer_ispc -c crown/crown.ecs Binary: Pathtracer ISPC - Model: Crown
pts/dacapobench-1.1.0 batik Java Test: Batik SVG Toolkit
pts/openvino-1.4.0 -m models/intel/vehicle-detection-0202/FP16-INT8/vehicle-detection-0202.xml -d CPU Model: Vehicle Detection FP16-INT8 - Device: CPU
pts/deepsparse-1.5.2 zoo:nlp/token_classification/bert-base/pytorch/huggingface/conll2003/base-none --scenario async Model: NLP Token Classification, BERT base uncased conll2003 - Scenario: Asynchronous Multi-Stream
pts/qmcpack-1.7.0 tests/molecules/Li2_STO_ae Li2.STO.long.in.xml Input: Li2_STO_ae
pts/openvino-1.4.0 -m models/intel/weld-porosity-detection-0001/FP16-INT8/weld-porosity-detection-0001.xml -d CPU Model: Weld Porosity Detection FP16-INT8 - Device: CPU
pts/deepsparse-1.5.2 zoo:nlp/sentiment_analysis/bert-base/pytorch/huggingface/sst2/12layer_pruned90-none --scenario async Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased - Scenario: Asynchronous Multi-Stream
pts/stress-ng-1.11.0 --io-uring -1 --no-rand-seed Test: IO_uring
pts/qmcpack-1.7.0 tests/molecules/LiH_ae_MSD vmc_long_opt_CI.in.xml Input: LiH_ae_MSD
pts/embree-1.6.0 pathtracer_ispc -c asian_dragon_obj/asian_dragon.ecs Binary: Pathtracer ISPC - Model: Asian Dragon Obj
pts/embree-1.6.0 pathtracer_ispc -c asian_dragon/asian_dragon.ecs Binary: Pathtracer ISPC - Model: Asian Dragon
pts/openvino-1.4.0 -m models/intel/age-gender-recognition-retail-0013/FP16/age-gender-recognition-retail-0013.xml -d CPU Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU
pts/blosc-1.3.0 blosclz bitshuffle 33554432 Test: blosclz bitshuffle - Buffer Size: 32MB
pts/deepsparse-1.5.2 zoo:nlp/question_answering/obert-large/pytorch/huggingface/squad/base-none --input_shapes='[1,128]' --scenario async Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream
pts/blosc-1.3.0 blosclz bitshuffle 134217728 Test: blosclz bitshuffle - Buffer Size: 128MB
pts/vvenc-1.9.1 -i Bosphorus_3840x2160.y4m --preset faster Video Input: Bosphorus 4K - Video Preset: Faster
pts/stress-ng-1.11.0 --zlib -1 --no-rand-seed Test: Zlib
pts/openvino-1.4.0 -m models/intel/face-detection-retail-0005/FP16/face-detection-retail-0005.xml -d CPU Model: Face Detection Retail FP16 - Device: CPU
pts/blosc-1.3.0 blosclz shuffle 33554432 Test: blosclz shuffle - Buffer Size: 32MB
pts/dacapobench-1.1.0 jython Java Test: Jython
pts/stress-ng-1.11.0 --clone -1 --no-rand-seed Test: Cloning
pts/dacapobench-1.1.0 h2 Java Test: H2 Database Engine
pts/vvenc-1.9.1 -i Bosphorus_3840x2160.y4m --preset fast Video Input: Bosphorus 4K - Video Preset: Fast
pts/dacapobench-1.1.0 avrora Java Test: Avrora AVR Simulation Framework
pts/onednn-3.3.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=u8s8f32 --engine=cpu Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
pts/quantlib-1.2.0 Configuration: Single-Threaded
pts/ffmpeg-6.1.0 --encoder=libx264 platform Encoder: libx264 - Scenario: Platform
pts/dacapobench-1.1.0 lusearch Java Test: Apache Lucene Search Engine
pts/blosc-1.3.0 blosclz noshuffle 16777216 Test: blosclz noshuffle - Buffer Size: 16MB
pts/blosc-1.3.0 blosclz shuffle 16777216 Test: blosclz shuffle - Buffer Size: 16MB
pts/blosc-1.3.0 blosclz bitshuffle 8388608 Test: blosclz bitshuffle - Buffer Size: 8MB
pts/quantlib-1.2.0 --mp Configuration: Multi-Threaded
pts/svt-av1-2.10.0 --preset 4 -n 160 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 4 - Input: Bosphorus 4K
pts/blosc-1.3.0 blosclz bitshuffle 16777216 Test: blosclz bitshuffle - Buffer Size: 16MB
pts/avifenc-1.4.0 -s 0 Encoder Speed: 0
pts/deepsparse-1.5.2 zoo:cv/segmentation/yolact-darknet53/pytorch/dbolya/coco/pruned90-none --scenario async Model: CV Segmentation, 90% Pruned YOLACT Pruned - Scenario: Asynchronous Multi-Stream
pts/avifenc-1.4.0 -s 2 Encoder Speed: 2
pts/dacapobench-1.1.0 fop Java Test: FOP Print Formatter
pts/blosc-1.3.0 blosclz noshuffle 8388608 Test: blosclz noshuffle - Buffer Size: 8MB
pts/blosc-1.3.0 blosclz shuffle 67108864 Test: blosclz shuffle - Buffer Size: 64MB
pts/deepsparse-1.5.2 zoo:nlp/document_classification/obert-base/pytorch/huggingface/imdb/base-none --scenario async Model: NLP Document Classification, oBERT base uncased on IMDB - Scenario: Asynchronous Multi-Stream
pts/blosc-1.3.0 blosclz noshuffle 268435456 Test: blosclz noshuffle - Buffer Size: 256MB
pts/dacapobench-1.1.0 zxing Java Test: Zxing 1D/2D Barcode Image Processing
pts/brl-cad-1.5.0 VGR Performance Metric
pts/blosc-1.3.0 blosclz shuffle 134217728 Test: blosclz shuffle - Buffer Size: 128MB
pts/deepsparse-1.5.2 zoo:nlp/question_answering/bert-base/pytorch/huggingface/squad/12layer_pruned90-none --scenario async Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 - Scenario: Asynchronous Multi-Stream
pts/onednn-3.3.0 --conv --batch=inputs/conv/shapes_auto --cfg=u8s8f32 --engine=cpu Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU
pts/ffmpeg-6.1.0 --encoder=libx265 vod Encoder: libx265 - Scenario: Video On Demand
pts/blosc-1.3.0 blosclz shuffle 268435456 Test: blosclz shuffle - Buffer Size: 256MB
pts/svt-av1-2.10.0 --preset 13 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 13 - Input: Bosphorus 4K
pts/onednn-3.3.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=f32 --engine=cpu Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
pts/blosc-1.3.0 blosclz bitshuffle 67108864 Test: blosclz bitshuffle - Buffer Size: 64MB
pts/ffmpeg-6.1.0 --encoder=libx265 platform Encoder: libx265 - Scenario: Platform
pts/blosc-1.3.0 blosclz noshuffle 33554432 Test: blosclz noshuffle - Buffer Size: 32MB
pts/avifenc-1.4.0 -s 10 -l Encoder Speed: 10, Lossless
pts/qmcpack-1.7.0 build/examples/molecules/H2O/example_H2O-1-1 simple-H2O.xml Input: simple-H2O
pts/cloverleaf-1.2.0 clover_bm64_short Input: clover_bm64_short
pts/onednn-3.3.0 --ip --batch=inputs/ip/shapes_3d --cfg=u8s8f32 --engine=cpu Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU
pts/dacapobench-1.1.0 eclipse Java Test: Eclipse
pts/stress-ng-1.11.0 --matrix-3d -1 --no-rand-seed Test: Matrix 3D Math
pts/vvenc-1.9.1 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset fast Video Input: Bosphorus 1080p - Video Preset: Fast
pts/build-gcc-1.4.0 Time To Compile
pts/stress-ng-1.11.0 --tree -1 --tree-method avl --no-rand-seed Test: AVL Tree
pts/blosc-1.3.0 blosclz bitshuffle 268435456 Test: blosclz bitshuffle - Buffer Size: 256MB
pts/avifenc-1.4.0 -s 6 Encoder Speed: 6
pts/openradioss-1.1.1 Cell_Phone_Drop_0000.rad Cell_Phone_Drop_0001.rad Model: Cell Phone Drop Test
pts/dacapobench-1.1.0 biojava Java Test: BioJava Biological Data Framework
pts/build-ffmpeg-6.1.0 Time To Compile
pts/ffmpeg-6.1.0 --encoder=libx264 upload Encoder: libx264 - Scenario: Upload
pts/cloverleaf-1.2.0 clover_bm Input: clover_bm
pts/stress-ng-1.11.0 --poll -1 --no-rand-seed Test: Poll
pts/avifenc-1.4.0 -s 6 -l Encoder Speed: 6, Lossless
pts/openradioss-1.1.1 BIRD_WINDSHIELD_v1_0000.rad BIRD_WINDSHIELD_v1_0001.rad Model: Bird Strike on Windshield
pts/onednn-3.3.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=u8s8f32 --engine=cpu Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
pts/ffmpeg-6.1.0 --encoder=libx265 upload Encoder: libx265 - Scenario: Upload
pts/build-gem5-1.1.0 Time To Compile
pts/onednn-3.3.0 --deconv --batch=inputs/deconv/shapes_1d --cfg=bf16bf16bf16 --engine=cpu Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU
pts/dacapobench-1.1.0 tomcat Java Test: Apache Tomcat
pts/stress-ng-1.11.0 --crypt -1 --no-rand-seed Test: Crypto
pts/svt-av1-2.10.0 --preset 13 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.yuv -w 1920 -h 1080 Encoder Mode: Preset 13 - Input: Bosphorus 1080p
pts/onednn-3.3.0 --ip --batch=inputs/ip/shapes_3d --cfg=bf16bf16bf16 --engine=cpu Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU
pts/cpuminer-opt-1.7.0 -a m7m Algorithm: Magi
pts/ospray-studio-1.2.0 --cameras 2 2 --resolution 3840 2160 --spp 1 --renderer pathtracer Camera: 2 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU
pts/openradioss-1.1.1 RUBBER_SEAL_IMPDISP_GEOM_0000.rad RUBBER_SEAL_IMPDISP_GEOM_0001.rad Model: Rubber O-Ring Seal Installation
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async Model: ResNet-50, Baseline - Scenario: Asynchronous Multi-Stream
pts/svt-av1-2.10.0 --preset 8 -i Bosphorus_3840x2160.y4m -w 3840 -h 2160 Encoder Mode: Preset 8 - Input: Bosphorus 4K
pts/embree-1.6.0 pathtracer -c asian_dragon_obj/asian_dragon.ecs Binary: Pathtracer - Model: Asian Dragon Obj
pts/dacapobench-1.1.0 cassandra Java Test: Apache Cassandra
pts/onednn-3.3.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=f32 --engine=cpu Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
pts/dacapobench-1.1.0 graphchi Java Test: GraphChi
pts/ffmpeg-6.1.0 --encoder=libx264 vod Encoder: libx264 - Scenario: Video On Demand
pts/onednn-3.3.0 --conv --batch=inputs/conv/shapes_auto --cfg=f32 --engine=cpu Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU
pts/ffmpeg-6.1.0 --encoder=libx265 live Encoder: libx265 - Scenario: Live
pts/dacapobench-1.1.0 jme Java Test: jMonkeyEngine
pts/deepsparse-1.5.2 zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none --scenario async Model: CV Classification, ResNet-50 ImageNet - Scenario: Asynchronous Multi-Stream
pts/blosc-1.3.0 blosclz noshuffle 134217728 Test: blosclz noshuffle - Buffer Size: 128MB
pts/embree-1.6.0 pathtracer -c crown/crown.ecs Binary: Pathtracer - Model: Crown
pts/onednn-3.3.0 --ip --batch=inputs/ip/shapes_3d --cfg=f32 --engine=cpu Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU
pts/onednn-3.3.0 --ip --batch=inputs/ip/shapes_1d --cfg=u8s8f32 --engine=cpu Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU
pts/easywave-1.0.0 -grid examples/e2Asean.grd -source examples/BengkuluSept2007.flt -time 1200 Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200
pts/dacapobench-1.1.0 tradesoap Java Test: Tradesoap
pts/blosc-1.3.0 blosclz noshuffle 67108864 Test: blosclz noshuffle - Buffer Size: 64MB
pts/onednn-3.3.0 --conv --batch=inputs/conv/shapes_auto --cfg=bf16bf16bf16 --engine=cpu Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU
pts/vvenc-1.9.1 -i Bosphorus_1920x1080_120fps_420_8bit_YUV.y4m --preset faster Video Input: Bosphorus 1080p - Video Preset: Faster
pts/onednn-3.3.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=u8s8f32 --engine=cpu Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
pts/dacapobench-1.1.0 kafka Java Test: Apache Kafka
pts/embree-1.6.0 pathtracer -c asian_dragon/asian_dragon.ecs Binary: Pathtracer - Model: Asian Dragon
pts/onednn-3.3.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=bf16bf16bf16 --engine=cpu Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-3.3.0 --deconv --batch=inputs/deconv/shapes_3d --cfg=bf16bf16bf16 --engine=cpu Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-3.3.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=bf16bf16bf16 --engine=cpu Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU
pts/onednn-3.3.0 --rnn --batch=inputs/rnn/perf_rnn_inference_lb --cfg=f32 --engine=cpu Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU
pts/onednn-3.3.0 --rnn --batch=inputs/rnn/perf_rnn_training --cfg=u8s8f32 --engine=cpu Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU
pts/cpuminer-opt-1.7.0 -a sha256t Algorithm: Triple SHA-256, Onecoin
pts/oidn-2.1.0 -r RTLightmap.hdr.4096x4096 -d cpu Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only
pts/oidn-2.1.0 -r RT.ldr_alb_nrm.3840x2160 -d cpu Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only
pts/oidn-2.1.0 -r RT.hdr_alb_nrm.3840x2160 -d cpu Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only
pts/rabbitmq-1.1.1 --queue-pattern 'perf-test-%d' --queue-pattern-from 1 --queue-pattern-to 200 --producers 400 --consumers 400 -s 8000 Scenario: 200 Queues, 400 Producers, 400 Consumers
pts/rabbitmq-1.1.1 --queue-pattern 'perf-test-%d' --queue-pattern-from 1 --queue-pattern-to 120 --producers 400 --consumers 400 -s 8000 Scenario: 120 Queues, 400 Producers, 400 Consumers
pts/rabbitmq-1.1.1 --queue-pattern 'perf-test-%d' --queue-pattern-from 1 --queue-pattern-to 60 --producers 100 --consumers 100 -s 8000 Scenario: 60 Queues, 100 Producers, 100 Consumers
pts/rabbitmq-1.1.1 --queue-pattern 'perf-test-%d' --queue-pattern-from 1 --queue-pattern-to 10 --producers 100 --consumers 100 -s 8000 Scenario: 10 Queues, 100 Producers, 100 Consumers
pts/rabbitmq-1.1.1 -x 2 -y 4 -u "throughput-test-2" -a --id "test 2" -s 8000 Scenario: Simple 2 Publishers + 4 Consumers