dddd

AMD Ryzen 7 7840HS testing with a Framework Laptop 16 (AMD Ryzen 7040) FRANMZCP07 (03.01 BIOS) and AMD Radeon 780M 512MB on Ubuntu 24.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2411045-NE-DDDD1454570
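
For scripted or repeated comparison runs, the same command can be driven from a small wrapper. The snippet below is only an illustrative sketch in Python; it assumes the phoronix-test-suite CLI is installed and on PATH, and it simply invokes the command quoted above.

    # Illustrative sketch: launch the comparison run against this result file.
    # Assumes `phoronix-test-suite` is installed and on PATH.
    import subprocess

    RESULT_ID = "2411045-NE-DDDD1454570"  # result file referenced above

    # PTS runs interactively and will prompt for test selection and a run identifier.
    subprocess.run(["phoronix-test-suite", "benchmark", RESULT_ID], check=True)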
Test categories represented in this result file:

C/C++ Compiler Tests: 2 Tests
CPU Massive: 5 Tests
Creator Workloads: 4 Tests
Encoding: 2 Tests
HPC - High Performance Computing: 6 Tests
Machine Learning: 3 Tests
Molecular Dynamics: 2 Tests
Multi-Core: 6 Tests
Python Tests: 3 Tests
Scientific Computing: 2 Tests
Server CPU Tests: 5 Tests

Run Management

Result Identifier    Date Run       Test Duration
a                    November 03    5 Hours, 52 Minutes
aa                   November 04    4 Hours, 2 Minutes
b                    November 04    4 Hours, 14 Minutes
d                    November 04    4 Hours, 13 Minutes
Average                             4 Hours, 35 Minutes
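
The final duration row was unlabeled in the exported page; it matches the average of the four run durations: (5:52 + 4:02 + 4:14 + 4:13) / 4 = 18:21 / 4 ≈ 4:35.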



dddd Benchmarks

Processor: AMD Ryzen 7 7840HS @ 5.29GHz (8 Cores / 16 Threads)
Motherboard: Framework Laptop 16 (AMD Ryzen 7040) FRANMZCP07 (03.01 BIOS)
Chipset: AMD Device 14e8
Memory: 2 x 8GB DDR5-5600MT/s A-DATA AD5S56008G-B
Disk: 512GB Western Digital PC SN810 SDCPNRY-512G
Graphics: AMD Radeon 780M 512MB
Audio: AMD Navi 31 HDMI/DP
Network: MEDIATEK MT7922 802.11ax PCI
OS: Ubuntu 24.04
Kernels: 6.8.0-39-generic (x86_64), 6.8.0-40-generic (x86_64)
Desktop: GNOME Shell 46.0
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 24.2~git2406200600.0ac0fb~oibaf~n (git-0ac0fbc 2024-06-20 noble-oibaf-ppa) (LLVM 17.0.6 DRM 3.57)
Compiler: GCC 13.2.0
File-System: ext4
Screen Resolution: 2560x1600

System Logs
- Transparent Huge Pages: madvise
- --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance) - Platform Profile: balanced - CPU Microcode: 0xa704103 - ACPI Profile: balanced
- BAR1 / Visible vRAM Size: 512 MB
- OpenJDK Runtime Environment (build 21.0.4+7-Ubuntu-1ubuntu224.04)
- Python 3.12.3
- gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Vulnerable: Safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected


ParaView

ParaView 5.13, Test: Many Spheres - Frames: 600 - Resolution: 2560 x 1440 (MiPolys / Sec, More Is Better): a: 17.38
ParaView 5.13, Test: Many Spheres - Frames: 600 - Resolution: 2560 x 1440 (Frames / Sec, More Is Better): a: 0.17
ParaView 5.13, Test: Many Spheres - Frames: 600 - Resolution: 1920 x 1200 (MiPolys / Sec, More Is Better): a: 17.80
ParaView 5.13, Test: Many Spheres - Frames: 600 - Resolution: 1920 x 1200 (Frames / Sec, More Is Better): a: 0.18
ParaView 5.13, Test: Many Spheres - Frames: 600 - Resolution: 1920 x 1080 (MiPolys / Sec, More Is Better): a: 17.86
ParaView 5.13, Test: Many Spheres - Frames: 600 - Resolution: 1920 x 1080 (Frames / Sec, More Is Better): a: 0.18

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 3 - Input: Beauty 4K 10-bit (Frames Per Second, More Is Better): aa: 0.705, b: 0.674, d: 0.619
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3, Input: H20-256 (Seconds, Fewer Is Better): aa: 1002.99, b: 1010.62, d: 1052.13
1. (F9X) gfortran options: -fopenmp -march=native -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kgrid -lcp2kgriddgemm -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kdbx -lcp2kdbm -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -l:libhdf5_fortran.a -l:libhdf5.a -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -llibgrpp -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -l:libopenblas.a -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm

Whisperfile

Whisperfile 20Aug24, Model Size: Medium (Seconds, Fewer Is Better): aa: 655.14, b: 657.64, d: 690.37

Epoch

Epoch 4.19.4, Epoch3D Deck: Cone (Seconds, Fewer Is Better): aa: 546.93, b: 554.79, d: 549.16
1. (F9X) gfortran options: -O3 -std=f2003 -Jobj -lsdf -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 3 - Input: Bosphorus 4K (Frames Per Second, More Is Better): aa: 4.169, b: 4.022, d: 3.748
SVT-AV1 2.3, Encoder Mode: Preset 5 - Input: Beauty 4K 10-bit (Frames Per Second, More Is Better): aa: 3.119, b: 2.918, d: 2.659
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git, Computational Test: Whetstone Double (MWIPS, More Is Better): aa: 155612.4, b: 137723.9, d: 125618.1
1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 3 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): aa: 14.29, b: 13.80, d: 12.77
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git, Computational Test: Pipe (LPS, More Is Better): aa: 16671392.5, b: 18027399.0, d: 16434910.2
BYTE Unix Benchmark 5.1.3-git, Computational Test: System Call (LPS, More Is Better): aa: 16049531.4, b: 17367015.3, d: 15836969.9
BYTE Unix Benchmark 5.1.3-git, Computational Test: Dhrystone 2 (LPS, More Is Better): aa: 719195925.3, b: 774189636.1, d: 709502408.8
1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 8 - Input: Beauty 4K 10-bit (Frames Per Second, More Is Better): aa: 5.343, b: 4.909, d: 4.484
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

Whisperfile

Whisperfile 20Aug24, Model Size: Small (Seconds, Fewer Is Better): aa: 229.33, b: 231.06, d: 243.97

ASTC Encoder

ASTC Encoder 5.0, Preset: Very Thorough (MT/s, More Is Better): aa: 1.2075, b: 1.1154, d: 1.0170
ASTC Encoder 5.0, Preset: Exhaustive (MT/s, More Is Better): aa: 0.7361, b: 0.6837, d: 0.6236
1. (CXX) g++ options: -O3 -flto -pthread

XNNPACK

XNNPACK b7b048, Model: QS8MobileNetV2 (us, Fewer Is Better): aa: 533, b: 526, d: 581
XNNPACK b7b048, Model: FP16MobileNetV3Small (us, Fewer Is Better): aa: 581, b: 606, d: 654
XNNPACK b7b048, Model: FP16MobileNetV3Large (us, Fewer Is Better): aa: 1429, b: 11295, d: 1602
XNNPACK b7b048, Model: FP16MobileNetV2 (us, Fewer Is Better): aa: 1337, b: 10687, d: 1533
XNNPACK b7b048, Model: FP16MobileNetV1 (us, Fewer Is Better): aa: 1868, b: 6239, d: 2169
XNNPACK b7b048, Model: FP32MobileNetV3Small (us, Fewer Is Better): aa: 465, b: 1998, d: 515
XNNPACK b7b048, Model: FP32MobileNetV3Large (us, Fewer Is Better): aa: 1255, b: 9205, d: 1319
XNNPACK b7b048, Model: FP32MobileNetV2 (us, Fewer Is Better): aa: 1057, b: 6981, d: 1130
XNNPACK b7b048, Model: FP32MobileNetV1 (us, Fewer Is Better): aa: 1536, b: 13914, d: 1617
1. (CXX) g++ options: -O3 -lrt -lm

Primesieve

Primesieve 12.5, Length: 1e13 (Seconds, Fewer Is Better): aa: 188.66, b: 191.96, d: 206.12
1. (CXX) g++ options: -O3

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 13 - Input: Beauty 4K 10-bit (Frames Per Second, More Is Better): aa: 7.579, b: 6.830, d: 6.224
SVT-AV1 2.3, Encoder Mode: Preset 5 - Input: Bosphorus 4K (Frames Per Second, More Is Better): aa: 15.06, b: 14.52, d: 13.96
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3, Input: Fayalite-FIST (Seconds, Fewer Is Better): aa: 147.31, b: 154.83, d: 151.94
1. (F9X) gfortran options: same as listed for Input: H20-256 above

NAMD

NAMD 3.0, Input: STMV with 1,066,628 Atoms (ns/day, More Is Better): aa: 0.38378, b: 0.37007, d: 0.37644

Stockfish

Stockfish 17, Chess Benchmark (Nodes Per Second, More Is Better): aa: 21157157, b: 20057178, d: 20267956
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver

Apache Cassandra

Apache Cassandra 5.0, Test: Writes (Op/s, More Is Better): aa: 115862, b: 108743, d: 100972

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3, Input: H20-64 (Seconds, Fewer Is Better): aa: 99.80, b: 96.29, d: 100.10
1. (F9X) gfortran options: same as listed for Input: H20-256 above

oneDNN

oneDNN 3.6, Harness: Recurrent Neural Network Training - Engine: CPU (ms, Fewer Is Better): aa: 3155.62 (MIN: 3145.17), b: 3104.92 (MIN: 3084.26), d: 3379.99 (MIN: 3369.93)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 5 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): aa: 49.16, b: 47.19, d: 44.33
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

oneDNN 3.6, Harness: Recurrent Neural Network Inference - Engine: CPU (ms, Fewer Is Better): aa: 1770.50 (MIN: 1763.22), b: 1727.88 (MIN: 1720.93), d: 1844.16 (MIN: 1834.87)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

ONNX Runtime

ONNX Runtime 1.19, Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): aa: 2142.57, b: 2474.13, d: 2807.93
ONNX Runtime 1.19, Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): aa: 0.466728, b: 0.404181, d: 0.356134
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ASTC Encoder

ASTC Encoder 5.0, Preset: Thorough (MT/s, More Is Better): aa: 8.8155, b: 8.1192, d: 7.4152
1. (CXX) g++ options: -O3 -flto -pthread

ONNX Runtime

ONNX Runtime 1.19, Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): aa: 1370.00, b: 2623.99, d: 1635.39
ONNX Runtime 1.19, Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): aa: 0.729925, b: 0.381099, d: 0.611475
ONNX Runtime 1.19, Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): aa: 1129.70, b: 1309.84, d: 1235.30
ONNX Runtime 1.19, Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): aa: 0.885190, b: 0.763448, d: 0.809516
ONNX Runtime 1.19, Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): aa: 628.78, b: 743.50, d: 1256.20
ONNX Runtime 1.19, Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): aa: 1.59037, b: 1.34498, d: 0.79605
ONNX Runtime 1.19, Model: GPT-2 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): aa: 9.39903, b: 9.56576, d: 9.75407
ONNX Runtime 1.19, Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): aa: 106.36, b: 104.50, d: 102.47
ONNX Runtime 1.19, Model: ZFNet-512 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): aa: 15.96, b: 16.05, d: 17.62
ONNX Runtime 1.19, Model: ZFNet-512 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): aa: 62.66, b: 62.32, d: 56.74
ONNX Runtime 1.19, Model: GPT-2 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): aa: 8.14774, b: 8.01188, d: 8.22476
ONNX Runtime 1.19, Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): aa: 122.65, b: 124.72, d: 121.50
ONNX Runtime 1.19, Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): aa: 109.70, b: 117.91, d: 122.88
ONNX Runtime 1.19, Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): aa: 9.11603, b: 8.48061, d: 8.13821
ONNX Runtime 1.19, Model: yolov4 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): aa: 151.12, b: 150.69, d: 174.11
ONNX Runtime 1.19, Model: yolov4 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): aa: 6.61722, b: 6.63616, d: 5.74356
ONNX Runtime 1.19, Model: ZFNet-512 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): aa: 11.52, b: 16.54, d: 17.41
ONNX Runtime 1.19, Model: ZFNet-512 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): aa: 86.77, b: 60.47, d: 57.42
ONNX Runtime 1.19, Model: T5 Encoder - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): aa: 7.12656, b: 7.29084, d: 7.28642
ONNX Runtime 1.19, Model: T5 Encoder - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): aa: 140.29, b: 137.13, d: 137.21
ONNX Runtime 1.19, Model: yolov4 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): aa: 106.77, b: 169.45, d: 114.56
ONNX Runtime 1.19, Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): aa: 9.36551, b: 5.90138, d: 8.72880
ONNX Runtime 1.19, Model: bertsquad-12 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): aa: 63.37, b: 63.71, d: 79.18
ONNX Runtime 1.19, Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): aa: 15.78, b: 15.70, d: 12.63
ONNX Runtime 1.19, Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): aa: 47.72, b: 49.36, d: 55.13
ONNX Runtime 1.19, Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): aa: 20.96, b: 20.26, d: 18.14
ONNX Runtime 1.19, Model: T5 Encoder - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): aa: 7.02169, b: 6.72951, d: 6.78853
ONNX Runtime 1.19, Model: T5 Encoder - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): aa: 142.37, b: 148.56, d: 147.26
ONNX Runtime 1.19, Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): aa: 48.56, b: 30.86, d: 33.00
ONNX Runtime 1.19, Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): aa: 20.59, b: 32.40, d: 30.30
ONNX Runtime 1.19, Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): aa: 19.08, b: 20.90, d: 30.38
ONNX Runtime 1.19, Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): aa: 52.40, b: 47.84, d: 32.91
ONNX Runtime 1.19, Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): aa: 24.87, b: 28.20, d: 30.04
ONNX Runtime 1.19, Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): aa: 40.20, b: 35.46, d: 33.28
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

WarpX

WarpX 24.10, Input: Plasma Acceleration (Seconds, Fewer Is Better): aa: 55.37, b: 56.63, d: 60.22
1. (CXX) g++ options: -O3 -lm

ONNX Runtime

ONNX Runtime 1.19, Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): aa: 2.19001, b: 2.18961, d: 2.30027
ONNX Runtime 1.19, Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): aa: 456.27, b: 456.34, d: 434.39
ONNX Runtime 1.19, Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): aa: 2.00621, b: 1.98870, d: 2.08476
ONNX Runtime 1.19, Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): aa: 498.19, b: 502.55, d: 479.39
ONNX Runtime 1.19, Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): aa: 4.31466, b: 4.38853, d: 4.79323
ONNX Runtime 1.19, Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): aa: 231.74, b: 227.83, d: 208.59
ONNX Runtime 1.19, Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): aa: 4.10903, b: 4.43720, d: 4.78049
ONNX Runtime 1.19, Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): aa: 243.29, b: 225.30, d: 209.12
ONNX Runtime 1.19, Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inference Time Cost (ms), Fewer Is Better): aa: 14.02, b: 14.68, d: 16.32
ONNX Runtime 1.19, Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences Per Second, More Is Better): aa: 71.34, b: 68.13, d: 61.25
ONNX Runtime 1.19, Model: super-resolution-10 - Device: CPU - Executor: Standard (Inference Time Cost (ms), Fewer Is Better): aa: 8.77190, b: 9.00540, d: 9.80512
ONNX Runtime 1.19, Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Second, More Is Better): aa: 113.98, b: 111.03, d: 101.97
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

Whisperfile

Whisperfile 20Aug24, Model Size: Tiny (Seconds, Fewer Is Better): aa: 55.19, b: 58.16, d: 59.90

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better): aa: 49.91, b: 49.52, d: 46.58
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

LiteRT

LiteRT 2024-10-15, Model: Inception V4 (Microseconds, Fewer Is Better): aa: 33358.0, b: 33308.8, d: 36136.2
LiteRT 2024-10-15, Model: Inception ResNet V2 (Microseconds, Fewer Is Better): aa: 28571.2, b: 28254.4, d: 30683.2
LiteRT 2024-10-15, Model: NASNet Mobile (Microseconds, Fewer Is Better): aa: 7642.71, b: 7816.00, d: 8630.17
LiteRT 2024-10-15, Model: Quantized COCO SSD MobileNet v1 (Microseconds, Fewer Is Better): aa: 2016.71, b: 22695.20, d: 2338.08
LiteRT 2024-10-15, Model: Mobilenet Float (Microseconds, Fewer Is Better): aa: 1527.48, b: 1491.33, d: 1622.81
LiteRT 2024-10-15, Model: DeepLab V3 (Microseconds, Fewer Is Better): aa: 2125.78, b: 2128.84, d: 2378.54
LiteRT 2024-10-15, Model: SqueezeNet (Microseconds, Fewer Is Better): aa: 2288.03, b: 2286.41, d: 2504.03
LiteRT 2024-10-15, Model: Mobilenet Quant (Microseconds, Fewer Is Better): aa: 1249.46, b: 1276.41, d: 1458.15

NAMD

NAMD 3.0, Input: ATPase with 327,506 Atoms (ns/day, More Is Better): aa: 1.32956, b: 1.27220, d: 1.28715

Unvanquished

Unvanquished 0.55, Resolution: 2560 x 1600 - Effects Quality: Ultra (Frames Per Second, More Is Better): a: 233.9, aa: 234.6, b: 234.0, d: 235.0
Unvanquished 0.55, Resolution: 2560 x 1440 - Effects Quality: Ultra (Frames Per Second, More Is Better): a: 253.4, aa: 254.7, b: 253.9, d: 252.4
Unvanquished 0.55, Resolution: 2560 x 1600 - Effects Quality: High (Frames Per Second, More Is Better): a: 266.5, aa: 266.7, b: 266.4, d: 266.9

ASTC Encoder

ASTC Encoder 5.0, Preset: Fast (MT/s, More Is Better): aa: 170.77, b: 156.30, d: 142.70
1. (CXX) g++ options: -O3 -flto -pthread

Unvanquished

Unvanquished 0.55, Resolution: 2560 x 1440 - Effects Quality: High (Frames Per Second, More Is Better): a: 288.6, aa: 289.6, b: 289.8, d: 288.7
Unvanquished 0.55, Resolution: 1920 x 1200 - Effects Quality: Ultra (Frames Per Second, More Is Better): a: 385.3, aa: 384.1, b: 384.2, d: 380.8
Unvanquished 0.55, Resolution: 1920 x 1080 - Effects Quality: Ultra (Frames Per Second, More Is Better): a: 418.2, aa: 417.3, b: 416.3, d: 422.2

WarpX

WarpX 24.10, Input: Uniform Plasma (Seconds, Fewer Is Better): aa: 21.18, b: 21.70, d: 21.71
1. (CXX) g++ options: -O3 -lm

Opus Codec Encoding

Opus Codec Encoding 1.5.2, WAV To Opus Encode (Seconds, Fewer Is Better): aa: 22.17, b: 26.28, d: 28.91
1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

Unvanquished

Unvanquished 0.55, Resolution: 2560 x 1600 - Effects Quality: Medium (Frames Per Second, More Is Better): a: 336.0, aa: 336.2, b: 336.6, d: 337.0

oneDNN

oneDNN 3.6, Harness: Deconvolution Batch shapes_1d - Engine: CPU (ms, Fewer Is Better): aa: 6.10289 (MIN: 4.53), b: 6.13915 (MIN: 4.76), d: 6.58308 (MIN: 5.22)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

Unvanquished

Unvanquished 0.55, Resolution: 1920 x 1200 - Effects Quality: High (Frames Per Second, More Is Better): a: 437.5, aa: 439.3, b: 438.0, d: 436.4
Unvanquished 0.55, Resolution: 2560 x 1440 - Effects Quality: Medium (Frames Per Second, More Is Better): a: 371.2, aa: 373.2, b: 373.5, d: 373.4
Unvanquished 0.55, Resolution: 1920 x 1080 - Effects Quality: High (Frames Per Second, More Is Better): a: 460.1, aa: 473.3, b: 471.6, d: 472.3

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): aa: 163.33, b: 160.58, d: 149.78
SVT-AV1 2.3, Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better): aa: 121.14, b: 119.37, d: 114.05
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

ASTC Encoder

ASTC Encoder 5.0, Preset: Medium (MT/s, More Is Better): aa: 67.67, b: 62.17, d: 56.73
1. (CXX) g++ options: -O3 -flto -pthread

Unvanquished

Unvanquished 0.55, Resolution: 1920 x 1200 - Effects Quality: Medium (Frames Per Second, More Is Better): a: 511.8, aa: 512.5, b: 517.5, d: 504.8
Unvanquished 0.55, Resolution: 1920 x 1080 - Effects Quality: Medium (Frames Per Second, More Is Better): a: 542.5, aa: 539.8, b: 542.4, d: 540.7

Primesieve

Primesieve 12.5, Length: 1e12 (Seconds, Fewer Is Better): aa: 15.38, b: 15.34, d: 16.75
1. (CXX) g++ options: -O3

oneDNN

oneDNN 3.6, Harness: IP Shapes 1D - Engine: CPU (ms, Fewer Is Better): aa: 2.51714 (MIN: 2.4), b: 2.54144 (MIN: 2.43), d: 2.79418 (MIN: 2.66)
oneDNN 3.6, Harness: IP Shapes 3D - Engine: CPU (ms, Fewer Is Better): aa: 4.57722 (MIN: 4.47), b: 4.66200 (MIN: 4.52), d: 4.66831 (MIN: 4.52)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

SVT-AV1

SVT-AV1 2.3, Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): aa: 497.58, b: 489.20, d: 460.22
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN

oneDNN 3.6, Harness: Convolution Batch Shapes Auto - Engine: CPU (ms, Fewer Is Better): aa: 13.36 (MIN: 13.14), b: 13.33 (MIN: 13.14), d: 13.33 (MIN: 13.11)
oneDNN 3.6, Harness: Deconvolution Batch shapes_3d - Engine: CPU (ms, Fewer Is Better): aa: 4.86939 (MIN: 4.64), b: 4.92932 (MIN: 4.92), d: 5.72961 (MIN: 5.66)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

127 Results Shown

ParaView:
  Many Spheres - 600 - 2560 x 1440:
    MiPolys / Sec
    Frames / Sec
  Many Spheres - 600 - 1920 x 1200:
    MiPolys / Sec
    Frames / Sec
  Many Spheres - 600 - 1920 x 1080:
    MiPolys / Sec
    Frames / Sec
SVT-AV1
CP2K Molecular Dynamics
Whisperfile
Epoch
SVT-AV1:
  Preset 3 - Bosphorus 4K
  Preset 5 - Beauty 4K 10-bit
BYTE Unix Benchmark
SVT-AV1
BYTE Unix Benchmark:
  Pipe
  System Call
  Dhrystone 2
SVT-AV1
Whisperfile
ASTC Encoder:
  Very Thorough
  Exhaustive
XNNPACK:
  QS8MobileNetV2
  FP16MobileNetV3Small
  FP16MobileNetV3Large
  FP16MobileNetV2
  FP16MobileNetV1
  FP32MobileNetV3Small
  FP32MobileNetV3Large
  FP32MobileNetV2
  FP32MobileNetV1
Primesieve
SVT-AV1:
  Preset 13 - Beauty 4K 10-bit
  Preset 5 - Bosphorus 4K
CP2K Molecular Dynamics
NAMD
Stockfish
Apache Cassandra
CP2K Molecular Dynamics
oneDNN
SVT-AV1
oneDNN
ONNX Runtime:
  ResNet101_DUC_HDC-12 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
ASTC Encoder
ONNX Runtime:
  ResNet101_DUC_HDC-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  fcn-resnet101-11 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  fcn-resnet101-11 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  GPT-2 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  ZFNet-512 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  GPT-2 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  bertsquad-12 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  yolov4 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  ZFNet-512 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  T5 Encoder - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  yolov4 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  bertsquad-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ArcFace ResNet-100 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  T5 Encoder - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ArcFace ResNet-100 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  Faster R-CNN R-50-FPN-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  Faster R-CNN R-50-FPN-int8 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
WarpX
ONNX Runtime:
  CaffeNet 12-int8 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  CaffeNet 12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ResNet50 v1-12-int8 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  ResNet50 v1-12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  super-resolution-10 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  super-resolution-10 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
Whisperfile
SVT-AV1
LiteRT:
  Inception V4
  Inception ResNet V2
  NASNet Mobile
  Quantized COCO SSD MobileNet v1
  Mobilenet Float
  DeepLab V3
  SqueezeNet
  Mobilenet Quant
NAMD
Unvanquished:
  2560 x 1600 - Ultra
  2560 x 1440 - Ultra
  2560 x 1600 - High
ASTC Encoder
Unvanquished:
  2560 x 1440 - High
  1920 x 1200 - Ultra
  1920 x 1080 - Ultra
WarpX
Opus Codec Encoding
Unvanquished
oneDNN
Unvanquished:
  1920 x 1200 - High
  2560 x 1440 - Medium
  1920 x 1080 - High
SVT-AV1:
  Preset 8 - Bosphorus 1080p
  Preset 13 - Bosphorus 4K
ASTC Encoder
Unvanquished:
  1920 x 1200 - Medium
  1920 x 1080 - Medium
Primesieve
oneDNN:
  IP Shapes 1D - CPU
  IP Shapes 3D - CPU
SVT-AV1
oneDNN:
  Convolution Batch Shapes Auto - CPU
  Deconvolution Batch shapes_3d - CPU