dddd

AMD Ryzen 7 7840HS testing with a Framework Laptop 16 (AMD Ryzen 7040) FRANMZCP07 (03.01 BIOS) and AMD Radeon 780M 512MB on Ubuntu 24.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2411045-NE-DDDD1454570
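
For reference, a minimal terminal session for reproducing this comparison, assuming the Phoronix Test Suite is already installed (on Ubuntu it is packaged as phoronix-test-suite; installing from phoronix-test-suite.com also works):

    # install the Phoronix Test Suite from the Ubuntu archive (assumes the universe repository is enabled)
    sudo apt install phoronix-test-suite
    # fetch this result file (2411045-NE-DDDD1454570) and run the same tests locally for comparison
    phoronix-test-suite benchmark 2411045-NE-DDDD1454570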

Run Management

  Result Identifier    Date Run       Test Duration
  a                    November 03    5 Hours, 52 Minutes
  aa                   November 04    4 Hours, 2 Minutes
  b                    November 04    4 Hours, 14 Minutes
  d                    November 04    4 Hours, 13 Minutes


dddd Benchmarks - System Details (OpenBenchmarking.org / Phoronix Test Suite)

  Processor: AMD Ryzen 7 7840HS @ 5.29GHz (8 Cores / 16 Threads)
  Motherboard: Framework Laptop 16 (AMD Ryzen 7040) FRANMZCP07 (03.01 BIOS)
  Chipset: AMD Device 14e8
  Memory: 2 x 8GB DDR5-5600MT/s A-DATA AD5S56008G-B
  Disk: 512GB Western Digital PC SN810 SDCPNRY-512G
  Graphics: AMD Radeon 780M 512MB
  Audio: AMD Navi 31 HDMI/DP
  Network: MEDIATEK MT7922 802.11ax PCI
  OS: Ubuntu 24.04
  Kernels: 6.8.0-39-generic (x86_64), 6.8.0-40-generic (x86_64)
  Desktop: GNOME Shell 46.0
  Display Server: X Server + Wayland
  OpenGL: 4.6 Mesa 24.2~git2406200600.0ac0fb~oibaf~n (git-0ac0fbc 2024-06-20 noble-oibaf-ppa) (LLVM 17.0.6 DRM 3.57)
  Compiler: GCC 13.2.0
  File-System: ext4
  Screen Resolution: 2560x1600

System Logs

  - Transparent Huge Pages: madvise
  - Compiler configuration: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-backtrace --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-13-uJ7kn6/gcc-13-13.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
  - Scaling Governor: amd-pstate-epp powersave (EPP: balance_performance) - Platform Profile: balanced - CPU Microcode: 0xa704103 - ACPI Profile: balanced
  - BAR1 / Visible vRAM Size: 512 MB
  - OpenJDK Runtime Environment (build 21.0.4+7-Ubuntu-1ubuntu224.04)
  - Python 3.12.3
  - Security: gather_data_sampling: Not affected + itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + reg_file_data_sampling: Not affected + retbleed: Not affected + spec_rstack_overflow: Vulnerable: Safe RET no microcode + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Enhanced / Automatic IBRS; IBPB: conditional; STIBP: always-on; RSB filling; PBRSB-eIBRS: Not affected; BHI: Not affected + srbds: Not affected + tsx_async_abort: Not affected


ParaView

ParaView 5.13 - Test: Many Spheres - Frames: 600 - Resolution: 2560 x 1440 - MiPolys / Sec (More Is Better): a: 17.38
ParaView 5.13 - Test: Many Spheres - Frames: 600 - Resolution: 2560 x 1440 - Frames / Sec (More Is Better): a: 0.17
ParaView 5.13 - Test: Many Spheres - Frames: 600 - Resolution: 1920 x 1200 - MiPolys / Sec (More Is Better): a: 17.80
ParaView 5.13 - Test: Many Spheres - Frames: 600 - Resolution: 1920 x 1200 - Frames / Sec (More Is Better): a: 0.18
ParaView 5.13 - Test: Many Spheres - Frames: 600 - Resolution: 1920 x 1080 - MiPolys / Sec (More Is Better): a: 17.86
ParaView 5.13 - Test: Many Spheres - Frames: 600 - Resolution: 1920 x 1080 - Frames / Sec (More Is Better): a: 0.18

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 3 - Input: Beauty 4K 10-bit - Frames Per Second (More Is Better): d: 0.619, b: 0.674, aa: 0.705
1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3 - Input: H20-256 - Seconds (Fewer Is Better): d: 1052.13, b: 1010.62, aa: 1002.99
1. (F9X) gfortran options: -fopenmp -march=native -mtune=native -O3 -funroll-loops -fbacktrace -ffree-form -fimplicit-none -std=f2008 -lcp2kstart -lcp2kmc -lcp2kswarm -lcp2kmotion -lcp2kthermostat -lcp2kemd -lcp2ktmc -lcp2kmain -lcp2kdbt -lcp2ktas -lcp2kgrid -lcp2kgriddgemm -lcp2kgridcpu -lcp2kgridref -lcp2kgridcommon -ldbcsrarnoldi -ldbcsrx -lcp2kdbx -lcp2kdbm -lcp2kshg_int -lcp2keri_mme -lcp2kminimax -lcp2khfxbase -lcp2ksubsys -lcp2kxc -lcp2kao -lcp2kpw_env -lcp2kinput -lcp2kpw -lcp2kgpu -lcp2kfft -lcp2kfpga -lcp2kfm -lcp2kcommon -lcp2koffload -lcp2kmpiwrap -lcp2kbase -ldbcsr -lsirius -lspla -lspfft -lsymspg -lvdwxc -l:libhdf5_fortran.a -l:libhdf5.a -lz -lgsl -lelpa_openmp -lcosma -lcosta -lscalapack -lxsmmf -lxsmm -ldl -lpthread -llibgrpp -lxcf03 -lxc -lint2 -lfftw3_mpi -lfftw3 -lfftw3_omp -lmpi_cxx -lmpi -l:libopenblas.a -lvori -lstdc++ -lmpi_usempif08 -lmpi_mpifh -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm

Whisperfile

Whisperfile 20Aug24 - Model Size: Medium - Seconds (Fewer Is Better): d: 690.37, b: 657.64, aa: 655.14

Epoch

Epoch 4.19.4 - Epoch3D Deck: Cone - Seconds (Fewer Is Better): b: 554.79, d: 549.16, aa: 546.93
1. (F9X) gfortran options: -O3 -std=f2003 -Jobj -lsdf -lmpi_usempif08 -lmpi_mpifh -lmpi -lopen-rte -lopen-pal -lhwloc -levent_core -levent_pthreads -lm -lz

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 3 - Input: Bosphorus 4K - Frames Per Second (More Is Better): d: 3.748, b: 4.022, aa: 4.169
SVT-AV1 2.3 - Encoder Mode: Preset 5 - Input: Beauty 4K 10-bit - Frames Per Second (More Is Better): d: 2.659, b: 2.918, aa: 3.119

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git - Computational Test: Whetstone Double - MWIPS (More Is Better): d: 125618.1, b: 137723.9, aa: 155612.4
1. (CC) gcc options: -pedantic -O3 -ffast-math -march=native -mtune=native -lm

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 3 - Input: Bosphorus 1080p - Frames Per Second (More Is Better): d: 12.77, b: 13.80, aa: 14.29

BYTE Unix Benchmark

BYTE Unix Benchmark 5.1.3-git - Computational Test: Pipe - LPS (More Is Better): d: 16434910.2, aa: 16671392.5, b: 18027399.0
BYTE Unix Benchmark 5.1.3-git - Computational Test: System Call - LPS (More Is Better): d: 15836969.9, aa: 16049531.4, b: 17367015.3
BYTE Unix Benchmark 5.1.3-git - Computational Test: Dhrystone 2 - LPS (More Is Better): d: 709502408.8, aa: 719195925.3, b: 774189636.1

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 8 - Input: Beauty 4K 10-bit - Frames Per Second (More Is Better): d: 4.484, b: 4.909, aa: 5.343

Whisperfile

Whisperfile 20Aug24 - Model Size: Small - Seconds (Fewer Is Better): d: 243.97, b: 231.06, aa: 229.33

ASTC Encoder

ASTC Encoder 5.0 - Preset: Very Thorough - MT/s (More Is Better): d: 1.0170, b: 1.1154, aa: 1.2075
ASTC Encoder 5.0 - Preset: Exhaustive - MT/s (More Is Better): d: 0.6236, b: 0.6837, aa: 0.7361
1. (CXX) g++ options: -O3 -flto -pthread

XNNPACK

XNNPACK b7b048 - Model: QS8MobileNetV2 - us (Fewer Is Better): d: 581, aa: 533, b: 526
XNNPACK b7b048 - Model: FP16MobileNetV3Small - us (Fewer Is Better): d: 654, b: 606, aa: 581
XNNPACK b7b048 - Model: FP16MobileNetV3Large - us (Fewer Is Better): b: 11295, d: 1602, aa: 1429
XNNPACK b7b048 - Model: FP16MobileNetV2 - us (Fewer Is Better): b: 10687, d: 1533, aa: 1337
XNNPACK b7b048 - Model: FP16MobileNetV1 - us (Fewer Is Better): b: 6239, d: 2169, aa: 1868
XNNPACK b7b048 - Model: FP32MobileNetV3Small - us (Fewer Is Better): b: 1998, d: 515, aa: 465
XNNPACK b7b048 - Model: FP32MobileNetV3Large - us (Fewer Is Better): b: 9205, d: 1319, aa: 1255
XNNPACK b7b048 - Model: FP32MobileNetV2 - us (Fewer Is Better): b: 6981, d: 1130, aa: 1057
XNNPACK b7b048 - Model: FP32MobileNetV1 - us (Fewer Is Better): b: 13914, d: 1617, aa: 1536
1. (CXX) g++ options: -O3 -lrt -lm

Primesieve

Primesieve 12.5 - Length: 1e13 - Seconds (Fewer Is Better): d: 206.12, b: 191.96, aa: 188.66
1. (CXX) g++ options: -O3

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 13 - Input: Beauty 4K 10-bit - Frames Per Second (More Is Better): d: 6.224, b: 6.830, aa: 7.579
SVT-AV1 2.3 - Encoder Mode: Preset 5 - Input: Bosphorus 4K - Frames Per Second (More Is Better): d: 13.96, b: 14.52, aa: 15.06

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3 - Input: Fayalite-FIST - Seconds (Fewer Is Better): b: 154.83, d: 151.94, aa: 147.31

NAMD

NAMD 3.0 - Input: STMV with 1,066,628 Atoms - ns/day (More Is Better): b: 0.37007, d: 0.37644, aa: 0.38378

Stockfish

Stockfish 17 - Chess Benchmark - Nodes Per Second (More Is Better): b: 20057178, d: 20267956, aa: 21157157
1. (CXX) g++ options: -lgcov -m64 -lpthread -fno-exceptions -std=c++17 -fno-peel-loops -fno-tracer -pedantic -O3 -funroll-loops -msse -msse3 -mpopcnt -mavx2 -mbmi -mavx512f -mavx512bw -mavx512vnni -mavx512dq -mavx512vl -msse4.1 -mssse3 -msse2 -mbmi2 -flto -flto-partition=one -flto=jobserver

Apache Cassandra

Apache Cassandra 5.0 - Test: Writes - Op/s (More Is Better): d: 100972, b: 108743, aa: 115862

CP2K Molecular Dynamics

CP2K Molecular Dynamics 2024.3 - Input: H20-64 - Seconds (Fewer Is Better): d: 100.10, aa: 99.80, b: 96.29

oneDNN

oneDNN 3.6 - Harness: Recurrent Neural Network Training - Engine: CPU - ms (Fewer Is Better): d: 3379.99 (MIN: 3369.93), aa: 3155.62 (MIN: 3145.17), b: 3104.92 (MIN: 3084.26)
1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -fcf-protection=full -pie -ldl

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 5 - Input: Bosphorus 1080p - Frames Per Second (More Is Better): d: 44.33, b: 47.19, aa: 49.16

oneDNN

oneDNN 3.6 - Harness: Recurrent Neural Network Inference - Engine: CPU - ms (Fewer Is Better): d: 1844.16 (MIN: 1834.87), aa: 1770.50 (MIN: 1763.22), b: 1727.88 (MIN: 1720.93)

ONNX Runtime

ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel - Inference Time Cost, ms (Fewer Is Better): d: 2807.93, b: 2474.13, aa: 2142.57
ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Parallel - Inferences Per Second (More Is Better): d: 0.356134, b: 0.404181, aa: 0.466728
1. (CXX) g++ options: -O3 -march=native -ffunction-sections -fdata-sections -mtune=native -flto=auto -fno-fat-lto-objects -ldl -lrt

ASTC Encoder

ASTC Encoder 5.0 - Preset: Thorough - MT/s (More Is Better): d: 7.4152, b: 8.1192, aa: 8.8155

ONNX Runtime

ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard - Inference Time Cost, ms (Fewer Is Better): b: 2623.99, d: 1635.39, aa: 1370.00
ONNX Runtime 1.19 - Model: ResNet101_DUC_HDC-12 - Device: CPU - Executor: Standard - Inferences Per Second (More Is Better): b: 0.381099, d: 0.611475, aa: 0.729925
ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel - Inference Time Cost, ms (Fewer Is Better): b: 1309.84, d: 1235.30, aa: 1129.70
ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel - Inferences Per Second (More Is Better): b: 0.763448, d: 0.809516, aa: 0.885190
ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard - Inference Time Cost, ms (Fewer Is Better): d: 1256.20, b: 743.50, aa: 628.78
ONNX Runtime 1.19 - Model: fcn-resnet101-11 - Device: CPU - Executor: Standard - Inferences Per Second (More Is Better): d: 0.79605, b: 1.34498, aa: 1.59037
ONNX Runtime 1.19 - Model: GPT-2 - Device: CPU - Executor: Parallel - Inference Time Cost, ms (Fewer Is Better): d: 9.75407, b: 9.56576, aa: 9.39903
ONNX Runtime 1.19 - Model: GPT-2 - Device: CPU - Executor: Parallel - Inferences Per Second (More Is Better): d: 102.47, b: 104.50, aa: 106.36
ONNX Runtime 1.19 - Model: ZFNet-512 - Device: CPU - Executor: Parallel - Inference Time Cost, ms (Fewer Is Better): d: 17.62, b: 16.05, aa: 15.96
ONNX Runtime 1.19 - Model: ZFNet-512 - Device: CPU - Executor: Parallel - Inferences Per Second (More Is Better): d: 56.74, b: 62.32, aa: 62.66
ONNX Runtime 1.19 - Model: GPT-2 - Device: CPU - Executor: Standard - Inference Time Cost, ms (Fewer Is Better): d: 8.22476, aa: 8.14774, b: 8.01188
ONNX Runtime 1.19 - Model: GPT-2 - Device: CPU - Executor: Standard - Inferences Per Second (More Is Better): d: 121.50, aa: 122.65, b: 124.72
ONNX Runtime 1.19 - Model: bertsquad-12 - Device: CPU - Executor: Parallel - Inference Time Cost, ms (Fewer Is Better): d: 122.88, b: 117.91, aa: 109.70
ONNX Runtime 1.19 - Model: bertsquad-12 - Device: CPU - Executor: Parallel - Inferences Per Second (More Is Better): d: 8.13821, b: 8.48061, aa: 9.11603
ONNX Runtime 1.19 - Model: yolov4 - Device: CPU - Executor: Parallel - Inference Time Cost, ms (Fewer Is Better): d: 174.11, aa: 151.12, b: 150.69
ONNX Runtime 1.19 - Model: yolov4 - Device: CPU - Executor: Parallel - Inferences Per Second (More Is Better): d: 5.74356, aa: 6.61722, b: 6.63616
ONNX Runtime 1.19 - Model: ZFNet-512 - Device: CPU - Executor: Standard - Inference Time Cost, ms (Fewer Is Better): d: 17.41, b: 16.54, aa: 11.52
ONNX Runtime 1.19 - Model: ZFNet-512 - Device: CPU - Executor: Standard - Inferences Per Second (More Is Better): d: 57.42, b: 60.47, aa: 86.77
ONNX Runtime 1.19 - Model: T5 Encoder - Device: CPU - Executor: Parallel - Inference Time Cost, ms (Fewer Is Better): b: 7.29084, d: 7.28642, aa: 7.12656
ONNX Runtime 1.19 - Model: T5 Encoder - Device: CPU - Executor: Parallel - Inferences Per Second (More Is Better): b: 137.13, d: 137.21, aa: 140.29
ONNX Runtime 1.19 - Model: yolov4 - Device: CPU - Executor: Standard - Inference Time Cost, ms (Fewer Is Better): b: 169.45, d: 114.56, aa: 106.77
ONNX Runtime 1.19 - Model: yolov4 - Device: CPU - Executor: Standard - Inferences Per Second (More Is Better): b: 5.90138, d: 8.72880, aa: 9.36551
ONNX Runtime 1.19 - Model: bertsquad-12 - Device: CPU - Executor: Standard - Inference Time Cost, ms (Fewer Is Better): d: 79.18, b: 63.71, aa: 63.37
ONNX Runtime 1.19 - Model: bertsquad-12 - Device: CPU - Executor: Standard - Inferences Per Second (More Is Better): d: 12.63, b: 15.70, aa: 15.78
ONNX Runtime 1.19 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel - Inference Time Cost, ms (Fewer Is Better): d: 55.13, b: 49.36, aa: 47.72
ONNX Runtime 1.19 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel - Inferences Per Second (More Is Better): d: 18.14, b: 20.26, aa: 20.96
ONNX Runtime 1.19 - Model: T5 Encoder - Device: CPU - Executor: Standard - Inference Time Cost, ms (Fewer Is Better): aa: 7.02169, d: 6.78853, b: 6.72951
ONNX Runtime 1.19 - Model: T5 Encoder - Device: CPU - Executor: Standard - Inferences Per Second (More Is Better): aa: 142.37, d: 147.26, b: 148.56
ONNX Runtime 1.19 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard - Inference Time Cost, ms (Fewer Is Better): aa: 48.56, d: 33.00, b: 30.86
ONNX Runtime 1.19 - Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard - Inferences Per Second (More Is Better): aa: 20.59, d: 30.30, b: 32.40
ONNX Runtime 1.19 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard - Inference Time Cost, ms (Fewer Is Better): d: 30.38, b: 20.90, aa: 19.08
ONNX Runtime 1.19 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Standard - Inferences Per Second (More Is Better): d: 32.91, b: 47.84, aa: 52.40
ONNX Runtime 1.19 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel - Inference Time Cost, ms (Fewer Is Better): d: 30.04, b: 28.20, aa: 24.87
ONNX Runtime 1.19 - Model: Faster R-CNN R-50-FPN-int8 - Device: CPU - Executor: Parallel - Inferences Per Second (More Is Better): d: 33.28, b: 35.46, aa: 40.20

WarpX

WarpX 24.10 - Input: Plasma Acceleration - Seconds (Fewer Is Better): d: 60.22, b: 56.63, aa: 55.37
1. (CXX) g++ options: -O3 -lm

ONNX Runtime

ONNX Runtime 1.19 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel - Inference Time Cost, ms (Fewer Is Better): d: 2.30027, aa: 2.19001, b: 2.18961
ONNX Runtime 1.19 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Parallel - Inferences Per Second (More Is Better): d: 434.39, aa: 456.27, b: 456.34
ONNX Runtime 1.19 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard - Inference Time Cost, ms (Fewer Is Better): d: 2.08476, aa: 2.00621, b: 1.98870
ONNX Runtime 1.19 - Model: CaffeNet 12-int8 - Device: CPU - Executor: Standard - Inferences Per Second (More Is Better): d: 479.39, aa: 498.19, b: 502.55
ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel - Inference Time Cost, ms (Fewer Is Better): d: 4.79323, b: 4.38853, aa: 4.31466
ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Parallel - Inferences Per Second (More Is Better): d: 208.59, b: 227.83, aa: 231.74
ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard - Inference Time Cost, ms (Fewer Is Better): d: 4.78049, b: 4.43720, aa: 4.10903
ONNX Runtime 1.19 - Model: ResNet50 v1-12-int8 - Device: CPU - Executor: Standard - Inferences Per Second (More Is Better): d: 209.12, b: 225.30, aa: 243.29
ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Parallel - Inference Time Cost, ms (Fewer Is Better): d: 16.32, b: 14.68, aa: 14.02
ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Parallel - Inferences Per Second (More Is Better): d: 61.25, b: 68.13, aa: 71.34
ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Standard - Inference Time Cost, ms (Fewer Is Better): d: 9.80512, b: 9.00540, aa: 8.77190
ONNX Runtime 1.19 - Model: super-resolution-10 - Device: CPU - Executor: Standard - Inferences Per Second (More Is Better): d: 101.97, b: 111.03, aa: 113.98

Whisperfile

Whisperfile 20Aug24 - Model Size: Tiny - Seconds (Fewer Is Better): d: 59.90, b: 58.16, aa: 55.19

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 8 - Input: Bosphorus 4K - Frames Per Second (More Is Better): d: 46.58, b: 49.52, aa: 49.91

LiteRT

LiteRT 2024-10-15 - Model: Inception V4 - Microseconds (Fewer Is Better): d: 36136.2, aa: 33358.0, b: 33308.8
LiteRT 2024-10-15 - Model: Inception ResNet V2 - Microseconds (Fewer Is Better): d: 30683.2, aa: 28571.2, b: 28254.4
LiteRT 2024-10-15 - Model: NASNet Mobile - Microseconds (Fewer Is Better): d: 8630.17, b: 7816.00, aa: 7642.71
LiteRT 2024-10-15 - Model: Quantized COCO SSD MobileNet v1 - Microseconds (Fewer Is Better): b: 22695.20, d: 2338.08, aa: 2016.71
LiteRT 2024-10-15 - Model: Mobilenet Float - Microseconds (Fewer Is Better): d: 1622.81, aa: 1527.48, b: 1491.33
LiteRT 2024-10-15 - Model: DeepLab V3 - Microseconds (Fewer Is Better): d: 2378.54, b: 2128.84, aa: 2125.78
LiteRT 2024-10-15 - Model: SqueezeNet - Microseconds (Fewer Is Better): d: 2504.03, aa: 2288.03, b: 2286.41
LiteRT 2024-10-15 - Model: Mobilenet Quant - Microseconds (Fewer Is Better): d: 1458.15, b: 1276.41, aa: 1249.46

NAMD

NAMD 3.0 - Input: ATPase with 327,506 Atoms - ns/day (More Is Better): b: 1.27220, d: 1.28715, aa: 1.32956

Unvanquished

Unvanquished 0.55 - Resolution: 2560 x 1600 - Effects Quality: Ultra - Frames Per Second (More Is Better): a: 233.9, b: 234.0, aa: 234.6, d: 235.0
Unvanquished 0.55 - Resolution: 2560 x 1440 - Effects Quality: Ultra - Frames Per Second (More Is Better): d: 252.4, a: 253.4, b: 253.9, aa: 254.7
Unvanquished 0.55 - Resolution: 2560 x 1600 - Effects Quality: High - Frames Per Second (More Is Better): b: 266.4, a: 266.5, aa: 266.7, d: 266.9

ASTC Encoder

ASTC Encoder 5.0 - Preset: Fast - MT/s (More Is Better): d: 142.70, b: 156.30, aa: 170.77

Unvanquished

Unvanquished 0.55 - Resolution: 2560 x 1440 - Effects Quality: High - Frames Per Second (More Is Better): a: 288.6, d: 288.7, aa: 289.6, b: 289.8
Unvanquished 0.55 - Resolution: 1920 x 1200 - Effects Quality: Ultra - Frames Per Second (More Is Better): d: 380.8, aa: 384.1, b: 384.2, a: 385.3
Unvanquished 0.55 - Resolution: 1920 x 1080 - Effects Quality: Ultra - Frames Per Second (More Is Better): b: 416.3, aa: 417.3, a: 418.2, d: 422.2

WarpX

WarpX 24.10 - Input: Uniform Plasma - Seconds (Fewer Is Better): d: 21.71, b: 21.70, aa: 21.18

Opus Codec Encoding

Opus Codec Encoding 1.5.2 - WAV To Opus Encode - Seconds (Fewer Is Better): d: 28.91, b: 26.28, aa: 22.17
1. (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm

Unvanquished

Unvanquished 0.55 - Resolution: 2560 x 1600 - Effects Quality: Medium - Frames Per Second (More Is Better): a: 336.0, aa: 336.2, b: 336.6, d: 337.0

oneDNN

oneDNN 3.6 - Harness: Deconvolution Batch shapes_1d - Engine: CPU - ms (Fewer Is Better): d: 6.58308 (MIN: 5.22), b: 6.13915 (MIN: 4.76), aa: 6.10289 (MIN: 4.53)

Unvanquished

Unvanquished 0.55 - Resolution: 1920 x 1200 - Effects Quality: High - Frames Per Second (More Is Better): d: 436.4, a: 437.5, b: 438.0, aa: 439.3
Unvanquished 0.55 - Resolution: 2560 x 1440 - Effects Quality: Medium - Frames Per Second (More Is Better): a: 371.2, aa: 373.2, d: 373.4, b: 373.5
Unvanquished 0.55 - Resolution: 1920 x 1080 - Effects Quality: High - Frames Per Second (More Is Better): a: 460.1, b: 471.6, d: 472.3, aa: 473.3

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p - Frames Per Second (More Is Better): d: 149.78, b: 160.58, aa: 163.33
SVT-AV1 2.3 - Encoder Mode: Preset 13 - Input: Bosphorus 4K - Frames Per Second (More Is Better): d: 114.05, b: 119.37, aa: 121.14

ASTC Encoder

ASTC Encoder 5.0 - Preset: Medium - MT/s (More Is Better): d: 56.73, b: 62.17, aa: 67.67

Unvanquished

Unvanquished 0.55 - Resolution: 1920 x 1200 - Effects Quality: Medium - Frames Per Second (More Is Better): d: 504.8, a: 511.8, aa: 512.5, b: 517.5
Unvanquished 0.55 - Resolution: 1920 x 1080 - Effects Quality: Medium - Frames Per Second (More Is Better): aa: 539.8, d: 540.7, b: 542.4, a: 542.5

Primesieve

Primesieve 12.5 - Length: 1e12 - Seconds (Fewer Is Better): d: 16.75, aa: 15.38, b: 15.34

oneDNN

oneDNN 3.6 - Harness: IP Shapes 1D - Engine: CPU - ms (Fewer Is Better): d: 2.79418 (MIN: 2.66), b: 2.54144 (MIN: 2.43), aa: 2.51714 (MIN: 2.4)
oneDNN 3.6 - Harness: IP Shapes 3D - Engine: CPU - ms (Fewer Is Better): d: 4.66831 (MIN: 4.52), b: 4.66200 (MIN: 4.52), aa: 4.57722 (MIN: 4.47)

SVT-AV1

SVT-AV1 2.3 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p - Frames Per Second (More Is Better): d: 460.22, b: 489.20, aa: 497.58

oneDNN

oneDNN 3.6 - Harness: Convolution Batch Shapes Auto - Engine: CPU - ms (Fewer Is Better): aa: 13.36 (MIN: 13.14), d: 13.33 (MIN: 13.11), b: 13.33 (MIN: 13.14)
oneDNN 3.6 - Harness: Deconvolution Batch shapes_3d - Engine: CPU - ms (Fewer Is Better): d: 5.72961 (MIN: 5.66), b: 4.92932 (MIN: 4.92), aa: 4.86939 (MIN: 4.64)

127 Results Shown

ParaView:
  Many Spheres - 600 - 2560 x 1440:
    MiPolys / Sec
    Frames / Sec
  Many Spheres - 600 - 1920 x 1200:
    MiPolys / Sec
    Frames / Sec
  Many Spheres - 600 - 1920 x 1080:
    MiPolys / Sec
    Frames / Sec
SVT-AV1
CP2K Molecular Dynamics
Whisperfile
Epoch
SVT-AV1:
  Preset 3 - Bosphorus 4K
  Preset 5 - Beauty 4K 10-bit
BYTE Unix Benchmark
SVT-AV1
BYTE Unix Benchmark:
  Pipe
  System Call
  Dhrystone 2
SVT-AV1
Whisperfile
ASTC Encoder:
  Very Thorough
  Exhaustive
XNNPACK:
  QS8MobileNetV2
  FP16MobileNetV3Small
  FP16MobileNetV3Large
  FP16MobileNetV2
  FP16MobileNetV1
  FP32MobileNetV3Small
  FP32MobileNetV3Large
  FP32MobileNetV2
  FP32MobileNetV1
Primesieve
SVT-AV1:
  Preset 13 - Beauty 4K 10-bit
  Preset 5 - Bosphorus 4K
CP2K Molecular Dynamics
NAMD
Stockfish
Apache Cassandra
CP2K Molecular Dynamics
oneDNN
SVT-AV1
oneDNN
ONNX Runtime:
  ResNet101_DUC_HDC-12 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
ASTC Encoder
ONNX Runtime:
  ResNet101_DUC_HDC-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  fcn-resnet101-11 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  fcn-resnet101-11 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  GPT-2 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  ZFNet-512 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  GPT-2 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  bertsquad-12 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  yolov4 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  ZFNet-512 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  T5 Encoder - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  yolov4 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  bertsquad-12 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ArcFace ResNet-100 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  T5 Encoder - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ArcFace ResNet-100 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  Faster R-CNN R-50-FPN-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  Faster R-CNN R-50-FPN-int8 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
WarpX
ONNX Runtime:
  CaffeNet 12-int8 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  CaffeNet 12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  ResNet50 v1-12-int8 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  ResNet50 v1-12-int8 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
  super-resolution-10 - CPU - Parallel:
    Inference Time Cost (ms)
    Inferences Per Second
  super-resolution-10 - CPU - Standard:
    Inference Time Cost (ms)
    Inferences Per Second
Whisperfile
SVT-AV1
LiteRT:
  Inception V4
  Inception ResNet V2
  NASNet Mobile
  Quantized COCO SSD MobileNet v1
  Mobilenet Float
  DeepLab V3
  SqueezeNet
  Mobilenet Quant
NAMD
Unvanquished:
  2560 x 1600 - Ultra
  2560 x 1440 - Ultra
  2560 x 1600 - High
ASTC Encoder
Unvanquished:
  2560 x 1440 - High
  1920 x 1200 - Ultra
  1920 x 1080 - Ultra
WarpX
Opus Codec Encoding
Unvanquished
oneDNN
Unvanquished:
  1920 x 1200 - High
  2560 x 1440 - Medium
  1920 x 1080 - High
SVT-AV1:
  Preset 8 - Bosphorus 1080p
  Preset 13 - Bosphorus 4K
ASTC Encoder
Unvanquished:
  1920 x 1200 - Medium
  1920 x 1080 - Medium
Primesieve
oneDNN:
  IP Shapes 1D - CPU
  IP Shapes 3D - CPU
SVT-AV1
oneDNN:
  Convolution Batch Shapes Auto - CPU
  Deconvolution Batch shapes_3d - CPU