okt

AMD Ryzen 9 3900XT 12-Core testing with a MSI MEG X570 GODLIKE (MS-7C34) v1.0 (1.B3 BIOS) and AMD Radeon RX 56/64 8GB on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2310319-NE-OKT13575789
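The comparison command above can be run end-to-end as sketched below. The `benchmark` subcommand and result ID are taken verbatim from this page; the apt package name is an assumption (the suite can also be installed from the upstream release tarball):

```shell
# Install the Phoronix Test Suite (Debian/Ubuntu package shown here;
# other installation methods exist).
sudo apt install phoronix-test-suite

# Re-run this result file's tests locally and merge your numbers into
# a comparison against the runs recorded here (ID from the page above):
phoronix-test-suite benchmark 2310319-NE-OKT13575789
```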
This result file spans the following test categories:

AV1 - 3 tests
BLAS (Basic Linear Algebra Subprograms) - 3 tests
C++ Boost - 4 tests
Timed Code Compilation - 6 tests
C/C++ Compiler - 7 tests
CPU Massive - 15 tests
Creator Workloads - 15 tests
Cryptography - 2 tests
Database Test Suite - 6 tests
Encoding - 5 tests
Fortran - 3 tests
Game Development - 4 tests
HPC - High Performance Computing - 13 tests
Java - 3 tests
Common Kernel Benchmarks - 3 tests
Machine Learning - 6 tests
Molecular Dynamics - 2 tests
MPI Benchmarks - 3 tests
Multi-Core - 22 tests
NVIDIA GPU Compute - 2 tests
Intel oneAPI - 7 tests
OpenMPI - 8 tests
Programmer / Developer System Benchmarks - 6 tests
Python - 11 tests
Raytracing - 2 tests
Renderers - 3 tests
Scientific Computing - 4 tests
Software Defined Radio - 2 tests
Server - 9 tests
Server CPU - 9 tests
Video Encoding - 4 tests
Common Workstation Benchmarks - 2 tests

Run Management

Result Identifier    Date               Test Duration
a                    October 30 2023    10 Hours, 59 Minutes
b                    October 30 2023    11 Hours, 3 Minutes

Average run duration: 11 Hours, 1 Minute


okt Suite 1.0.0 - System test suite extracted from okt. The suite comprises the following test profiles and configurations:

pts/sqlite-2.2.0 - Threads / Copies: 1; 2; 4; 8
pts/3dmark-1.0.0 - Resolution: 1920 x 1080
pts/quantlib-1.2.0 - Configuration: Multi-Threaded; Single-Threaded
pts/cryptopp-1.1.0 - Test: Unkeyed Algorithms
pts/hpcg-1.3.0 - X Y Z: 104 104 104 - RT: 60
pts/cloverleaf-1.2.0 - Input: clover_bm; clover_bm64_short
pts/cp2k-1.4.1 - Input: H2O-64; H2O-DFT-LS; Fayalite-FIST
pts/libxsmm-1.0.1 - M N K: 32; 64; 128
pts/palabos-1.0.0 - Grid Size: 100
pts/qmcpack-1.7.0 - Input: H4_ae; Li2_STO_ae; LiH_ae_MSD; simple-H2O; O_ae_pyscf_UHF; FeCO6_b3lyp_gms
pts/openradioss-1.1.1 - Model: Bumper Beam; Chrysler Neon 1M; Cell Phone Drop Test; Bird Strike on Windshield; Rubber O-Ring Seal; INIVOL and Fluid Structure Interaction Drop Container
pts/z3-1.0.0 - SMT File: 1.smt2; 2.smt2
pts/nekrs-1.1.0 - Input: Kershaw; TurboPipe Periodic
pts/srsran-2.1.0 - Test: Downlink Processor Benchmark; PUSCH Processor Benchmark, Throughput Total; PUSCH Processor Benchmark, Throughput Thread
pts/easywave-1.0.0 - Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240; 1200
pts/dav1d-1.14.0 - Video Input: Chimera 1080p; Summer Nature 4K; Summer Nature 1080p; Chimera 1080p 10-bit
pts/embree-1.6.0 - Binary: Pathtracer / Pathtracer ISPC - Model: Crown; Asian Dragon; Asian Dragon Obj
pts/svt-av1-2.10.0 - Encoder Mode: Preset 4; 8; 12; 13 - Input: Bosphorus 4K; Bosphorus 1080p
pts/vvenc-1.9.1 - Video Input: Bosphorus 4K; Bosphorus 1080p - Video Preset: Fast; Faster
pts/oidn-2.1.0 - Run: RT.hdr_alb_nrm.3840x2160; RT.ldr_alb_nrm.3840x2160; RTLightmap.hdr.4096x4096 - Device: CPU-Only
pts/openvkl-2.0.0 - Benchmark: vklBenchmarkCPU ISPC; vklBenchmarkCPU Scalar
pts/ospray-2.12.0 - Benchmark: particle_volume/ao/real_time; particle_volume/scivis/real_time; particle_volume/pathtracer/real_time; gravity_spheres_volume/dim_512/ao/real_time; gravity_spheres_volume/dim_512/scivis/real_time; gravity_spheres_volume/dim_512/pathtracer/real_time
pts/avifenc-1.4.0 - Encoder Speed: 0; 2; 6; 6, Lossless; 10, Lossless
pts/build-gcc-1.4.0 - Time To Compile
pts/build-gem5-1.1.0 - Time To Compile
pts/build-godot-4.0.0 - Time To Compile
pts/build-llvm-1.5.0 - Build System: Ninja; Unix Makefiles
pts/build-nodejs-1.3.0 - Time To Compile
pts/build2-1.2.0 - Time To Compile
pts/onednn-3.3.0 - Harness: IP Shapes 1D; IP Shapes 3D; Convolution Batch Shapes Auto; Deconvolution Batch shapes_1d; Deconvolution Batch shapes_3d; Recurrent Neural Network Training; Recurrent Neural Network Inference - Data Type: f32; u8s8f32; bf16bf16bf16 - Engine: CPU
pts/ospray-studio-1.2.0 - Camera: 1; 2; 3 - Resolution: 4K; 1080p - Samples Per Pixel: 1; 16; 32 - Renderer: Path Tracer - Acceleration: CPU
pts/encode-opus-1.4.0 - WAV To Opus Encode
pts/cpuminer-opt-1.7.0 - Algorithm: Magi; scrypt; Deepcoin; Ringcoin; Blake-2 S; Garlicoin; Skeincoin; Myriad-Groestl; LBC, LBRY Credits; Quad SHA-256, Pyrite; Triple SHA-256, Onecoin
pts/liquid-dsp-1.6.0 - Threads: 1; 2; 4; 8; 16; 24 - Buffer Length: 256 - Filter Length: 32; 57; 512
pts/apache-iotdb-1.2.0 - Device Count: 800 - Batch Size Per Write: 1; 100 - Sensor Count: 200; 500; 800 - Client Number: 100; 400
pts/memcached-1.2.0 - Set To Get Ratio: 1:10; 1:100
pts/duckdb-1.0.0 - Benchmark: IMDB; TPC-H Parquet
pts/pgbench-1.14.0 - Scaling Factor: 100 - Clients: 1000 - Mode: Read Only; Read Write (each also reported as Average Latency)
pts/tensorflow-2.1.0 - Device: CPU - Batch Size: 16; 32 - Model: ResNet-50
pts/deepsparse-1.5.2 - Scenario: Asynchronous Multi-Stream and Synchronous Single-Stream for each model: NLP Document Classification, oBERT base uncased on IMDB; NLP Text Classification, BERT base uncased SST2, Sparse INT8; NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased; NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90; ResNet-50, Baseline; ResNet-50, Sparse INT8; CV Detection, YOLOv5s COCO; CV Detection, YOLOv5s COCO, Sparse INT8; CV Classification, ResNet-50 ImageNet; CV Segmentation, 90% Pruned YOLACT Pruned; BERT-Large, NLP Question Answering; BERT-Large, NLP Question Answering, Sparse INT8; NLP Text Classification, BERT base uncased SST2; NLP Text Classification, DistilBERT mnli; NLP Token Classification, BERT base uncased conll2003
pts/stress-ng-1.11.0 - Test: Hash; MMAP; NUMA; Pipe; Poll; Zlib; Futex; MEMFD; Mutex; Atomic; Crypto; Malloc; Cloning; Forking; Pthread; AVL Tree; IO_uring; SENDFILE; CPU Cache; CPU Stress; Semaphores; Matrix Math; Vector Math; AVX-512 VNNI; Function Call; x86_64 RdRand; Floating Point; Matrix 3D Math; Memory Copying; Vector Shuffle; Mixed Scheduler; Socket Activity; Wide Vector Math; Context Switching; Fused Multiply-Add; Vector Floating Point; Glibc C String Functions; Glibc Qsort Data Sorting; System V Message Passing
pts/gpaw-1.2.0 - Input: Carbon Nanotube
pts/ncnn-1.5.0 - Target: CPU - Model: mobilenet; mobilenet-v2; mobilenet-v3; shufflenet-v2; mnasnet; efficientnet-b0; blazeface; googlenet; vgg16; resnet18; alexnet; resnet50; yolov4-tiny; squeezenet_ssd; regnety_400m; vision_transformer; FastestDet
pts/blender-3.6.0 - Blend File: BMW27; Classroom; Fishy Cat; Barbershop; Pabellon Barcelona - Compute: CPU-Only
pts/openvino-1.4.0 - Device: CPU - Model: Face Detection FP16; Face Detection FP16-INT8; Person Detection FP16; Person Detection FP32; Vehicle Detection FP16; Vehicle Detection FP16-INT8; Face Detection Retail FP16; Face Detection Retail FP16-INT8; Road Segmentation ADAS FP16; Road Segmentation ADAS FP16-INT8; Weld Porosity Detection FP16; Weld Porosity Detection FP16-INT8; Machine Translation EN To DE FP16; Person Vehicle Bike Detection FP16; Handwritten English Recognition FP16; Handwritten English Recognition FP16-INT8; Age Gender Recognition Retail 0013 FP16; Age Gender Recognition Retail 0013 FP16-INT8
pts/cassandra-1.2.0 - Test: Writes
pts/hadoop-1.0.0 - Operation: Create - Threads: 20 - Files: 100000
pts/nginx-3.0.1 - Connections: 100; 200; 500; 1000
pts/apache-3.0.0 - Concurrent Requests: 100; 200; 500; 1000
pts/whisper-cpp-1.0.0 - Model: ggml-base.en; ggml-small.en; ggml-medium.en - Input: 2016 State of the Union
pts/brl-cad-1.5.0 - VGR Performance Metric