ss

AMD Ryzen 7 4700U testing with a LENOVO LNVNB161216 (DTCN18WWV1.04 BIOS) and AMD Renoir 512MB on Ubuntu 23.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2310196-NE-SS374327796
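The result overview in this file summarizes the three runs with geometric means across tests. As an illustrative sketch (not part of the Phoronix Test Suite itself), per-test ratios between two runs can be aggregated the same way; the values below are taken from this result file, and the selection of tests is mine:

```python
# Sketch: aggregate relative performance of run "a" vs. run "b" across a few
# tests from this file. All values are in higher-is-better units (FPS), so a
# ratio above 1.0 means run "a" was faster on that test.
from statistics import geometric_mean

# test name -> (a, b) results, copied from the charts below (illustrative subset)
results = {
    "SVT-AV1 Preset 8 1080p": (47.47, 45.57),
    "SVT-AV1 Preset 12 1080p": (172.62, 169.67),
    "VVenC 4K Faster": (4.947, 4.907),
}

ratios = [a / b for a, b in results.values()]
overall = geometric_mean(ratios)  # geometric mean avoids one test dominating
print(f"a vs b overall: {overall:.4f}x")
```

For lower-is-better tests (seconds, ms), the ratio would be inverted before aggregating.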
The tests in this result file fall within the following categories:

AV1: 2 tests
CPU Massive: 4 tests
Creator Workloads: 9 tests
Encoding: 3 tests
Game Development: 2 tests
HPC - High Performance Computing: 4 tests
Machine Learning: 3 tests
Multi-Core: 10 tests
Intel oneAPI: 5 tests
Server CPU Tests: 3 tests
Video Encoding: 3 tests


Run Management

Run  Date             Test Duration
a    October 18 2023  4 Hours, 35 Minutes
b    October 19 2023  4 Hours, 33 Minutes
c    October 19 2023  4 Hours, 33 Minutes
Average test duration: 4 Hours, 34 Minutes



System Details

Processor: AMD Ryzen 7 4700U @ 2.00GHz (8 Cores)
Motherboard: LENOVO LNVNB161216 (DTCN18WWV1.04 BIOS)
Chipset: AMD Renoir/Cezanne
Memory: 16GB
Disk: 512GB SAMSUNG MZALQ512HALU-000L2
Graphics: AMD Renoir 512MB (1600/400MHz)
Audio: AMD Renoir Radeon HD Audio
Network: Intel Wi-Fi 6 AX200
OS: Ubuntu 23.04
Kernel: 6.2.0-24-generic (x86_64)
Desktop: GNOME Shell 44.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 23.0.2 (LLVM 15.0.7 DRM 3.49)
Compilers: GCC 12.2.0 (runs a, b), GCC 12.3.0 (run c)
File-System: ext4
Screen Resolution: 1920x1080

Ss Performance System Logs

- Transparent Huge Pages: madvise
- a, b (GCC 12.2.0 configure flags): --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-Pa930Z/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-Pa930Z/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- c (GCC 12.3.0): identical configure flags except the offload-target paths, which point at the GCC 12.3.0 build tree (--enable-offload-targets=nvptx-none=/build/gcc-12-DAPbBt/gcc-12-12.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-DAPbBt/gcc-12-12.3.0/debian/tmp-gcn/usr)
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- Platform Profile: balanced
- CPU Microcode: 0x8600102
- ACPI Profile: balanced
- Java: a, b: OpenJDK Runtime Environment (build 17.0.7+7-Ubuntu-0ubuntu123.04); c: OpenJDK Runtime Environment (build 17.0.8.1+1-Ubuntu-0ubuntu123.04)
- Python: a, b: Python 3.11.2; c: Python 3.11.4
- Security: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT disabled + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines, IBPB: conditional, STIBP: disabled, RSB filling, PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
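The security line in the system log renders the kernel's per-vulnerability status entries joined with " + ". A small parsing sketch (the separator convention is my assumption from how this file renders the log; the subset of entries is illustrative):

```python
# Sketch: split the "+"-joined CPU vulnerability status string from the system
# log into a dict keyed by vulnerability name. Entries copied from this file.
log = ("itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected"
       " + meltdown: Not affected + srbds: Not affected"
       " + tsx_async_abort: Not affected")

status = {}
for entry in log.split(" + "):
    name, _, state = entry.partition(": ")  # "name: state"
    status[name] = state

print(status["meltdown"])  # -> Not affected
```

On a live system the same fields come from /sys/devices/system/cpu/vulnerabilities/, one file per vulnerability.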

[Result Overview chart: runs a/b/c normalized per test suite (spanning roughly 100% to 103%), covering BRL-CAD, SVT-AV1, libavif avifenc, OpenVKL, VVenC, OpenRadioss, OpenVINO, NCNN, oneDNN, easyWave, Embree, Timed GCC Compilation, and Intel Open Image Denoise.]

[Condensed results table: every test configuration in this file with its a/b/c values. The individual results are charted below.]

SVT-AV1

SVT-AV1 1.7 | Encoder Mode: Preset 8 - Input: Bosphorus 1080p
Frames Per Second, More Is Better
a: 47.47 | b: 45.57 | c: 45.45
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
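Run-to-run variation on a result like this can be expressed as a spread percentage between the fastest and slowest run; a minimal sketch using the three values above:

```python
# Sketch: relative spread across the three runs of SVT-AV1 Preset 8 1080p,
# values copied from the chart above (frames per second for runs a, b, c).
values = [47.47, 45.57, 45.45]

spread = (max(values) - min(values)) / min(values) * 100
print(f"spread: {spread:.1f}%")
```

A spread in the low single digits, as here, is the kind of variation the result page's "Do Not Show Results With Little Change/Spread" filter is meant to hide.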

libavif avifenc

This test uses the AOMedia libavif library to encode a JPEG image to the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 1.0 | Encoder Speed: 6, Lossless
Seconds, Fewer Is Better
a: 19.65 | b: 20.33 | c: 19.61
(CXX) g++ options: -O3 -fPIC -lm

SVT-AV1

SVT-AV1 1.7 | Encoder Mode: Preset 4 - Input: Bosphorus 1080p
Frames Per Second, More Is Better
a: 5.504 | b: 5.382 | c: 5.327
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenVINO

OpenVINO 2023.1 | Model: Person Detection FP32 - Device: CPU
FPS, More Is Better
a: 13.17 | b: 13.55 | c: 13.28
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.

oneDNN 3.3 | Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU
ms, Fewer Is Better
a: 10.43 (MIN: 9.58) | b: 10.73 (MIN: 9.4) | c: 10.45 (MIN: 9.17)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

OpenVINO 2023.1 | Model: Person Detection FP32 - Device: CPU
ms, Fewer Is Better
a: 303.30 (MIN: 194.13 / MAX: 337.4) | b: 295.24 (MIN: 260.56 / MAX: 327.23) | c: 300.85 (MIN: 246.14 / MAX: 323.82)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.

BRL-CAD 7.36 | VGR Performance Metric
VGR Performance Metric, More Is Better
a: 81531 | b: 79432 | c: 81055
(CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6

OpenVINO

OpenVINO 2023.1 | Model: Handwritten English Recognition FP16-INT8 - Device: CPU
FPS, More Is Better
a: 49.86 | b: 48.73 | c: 48.58
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Handwritten English Recognition FP16-INT8 - Device: CPU
ms, Fewer Is Better
a: 80.18 (MIN: 56.84 / MAX: 111.31) | b: 82.02 (MIN: 60.11 / MAX: 105.01) | c: 82.26 (MIN: 54.73 / MAX: 109.61)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

oneDNN


oneDNN 3.3 | Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better
a: 5.30540 (MIN: 4.89) | b: 5.44214 (MIN: 4.97) | c: 5.35481 (MIN: 4.83)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

SVT-AV1

SVT-AV1 1.7 | Encoder Mode: Preset 12 - Input: Bosphorus 1080p
Frames Per Second, More Is Better
a: 172.62 | b: 169.67 | c: 168.52
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NCNN

NCNN is a high-performance neural network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.

NCNN 20230517 | Target: CPU - Model: resnet50
ms, Fewer Is Better
a: 29.58 (MIN: 28.4 / MAX: 115.97) | b: 29.14 (MIN: 28.67 / MAX: 38) | c: 28.89 (MIN: 28.59 / MAX: 29.78)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 | Target: CPU - Model: squeezenet_ssd
ms, Fewer Is Better
a: 18.06 (MIN: 17.71 / MAX: 20.14) | b: 18.49 (MIN: 17.85 / MAX: 19.43) | c: 18.26 (MIN: 17.82 / MAX: 19.17)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 | Target: CPU-v3-v3 - Model: mobilenet-v3
ms, Fewer Is Better
a: 4.53 (MIN: 4.43 / MAX: 5.84) | b: 4.58 (MIN: 4.44 / MAX: 6.01) | c: 4.63 (MIN: 4.48 / MAX: 6.44)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

OpenVINO 2023.1 | Model: Person Detection FP16 - Device: CPU
FPS, More Is Better
a: 13.28 | b: 13.51 | c: 13.22
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 2.0.0 | Benchmark: vklBenchmarkCPU Scalar
Items / Sec, More Is Better
a: 47 (MIN: 3 / MAX: 846) | b: 47 (MIN: 3 / MAX: 847) | c: 46 (MIN: 3 / MAX: 846)

NCNN


NCNN 20230517 | Target: CPU - Model: regnety_400m
ms, Fewer Is Better
a: 8.32 (MIN: 8.16 / MAX: 9.77) | b: 8.48 (MIN: 8.31 / MAX: 9.76) | c: 8.50 (MIN: 8.34 / MAX: 9.82)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

OpenVINO 2023.1 | Model: Machine Translation EN To DE FP16 - Device: CPU
ms, Fewer Is Better
a: 230.97 (MIN: 160.47 / MAX: 259.14) | b: 235.81 (MIN: 185.64 / MAX: 265.48) | c: 231.16 (MIN: 180.68 / MAX: 265.27)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Person Detection FP16 - Device: CPU
ms, Fewer Is Better
a: 300.95 (MIN: 212.47 / MAX: 325.32) | b: 295.93 (MIN: 261.78 / MAX: 327.58) | c: 302.13 (MIN: 203.75 / MAX: 324.79)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Machine Translation EN To DE FP16 - Device: CPU
FPS, More Is Better
a: 17.29 | b: 16.95 | c: 17.28
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss, which was open-sourced in 2022, and is benchmarked here with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 | Model: Bumper Beam
Seconds, Fewer Is Better
a: 327.20 | b: 329.40 | c: 323.25

OpenVINO

OpenVINO 2023.1 | Model: Person Vehicle Bike Detection FP16 - Device: CPU
FPS, More Is Better
a: 187.25 | b: 190.52 | c: 190.58
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

libavif avifenc


libavif avifenc 1.0 | Encoder Speed: 2
Seconds, Fewer Is Better
a: 121.67 | b: 123.50 | c: 123.82
(CXX) g++ options: -O3 -fPIC -lm

OpenVINO

OpenVINO 2023.1 | Model: Person Vehicle Bike Detection FP16 - Device: CPU
ms, Fewer Is Better
a: 21.33 (MIN: 13.65 / MAX: 41.01) | b: 20.97 (MIN: 17.73 / MAX: 34.13) | c: 20.96 (MIN: 16.75 / MAX: 35.05)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Handwritten English Recognition FP16 - Device: CPU
ms, Fewer Is Better
a: 85.92 (MIN: 64.99 / MAX: 113.89) | b: 85.32 (MIN: 60.39 / MAX: 115.66) | c: 86.79 (MIN: 68.17 / MAX: 106.82)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

NCNN


NCNN 20230517 | Target: CPU - Model: blazeface
ms, Fewer Is Better
a: 1.18 (MIN: 1.15 / MAX: 1.31) | b: 1.19 (MIN: 1.14 / MAX: 2.05) | c: 1.20 (MIN: 1.16 / MAX: 1.89)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

OpenVINO 2023.1 | Model: Handwritten English Recognition FP16 - Device: CPU
FPS, More Is Better
a: 46.52 | b: 46.84 | c: 46.07
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Face Detection Retail FP16 - Device: CPU
ms, Fewer Is Better
a: 10.41 (MIN: 8.3 / MAX: 26.03) | b: 10.28 (MIN: 7.77 / MAX: 27.86) | c: 10.44 (MIN: 8.31 / MAX: 26.72)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Face Detection Retail FP16 - Device: CPU
FPS, More Is Better
a: 382.87 | b: 387.95 | c: 382.01
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

NCNN


NCNN 20230517 | Target: CPU-v2-v2 - Model: mobilenet-v2
ms, Fewer Is Better
a: 6.57 (MIN: 6.35 / MAX: 7.94) | b: 6.50 (MIN: 6.29 / MAX: 15.2) | c: 6.47 (MIN: 6.28 / MAX: 7.43)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

NCNN 20230517 | Target: CPU - Model: mobilenet
ms, Fewer Is Better
a: 23.33 (MIN: 22.94 / MAX: 31.66) | b: 23.67 (MIN: 23.24 / MAX: 31.55) | c: 23.56 (MIN: 23.13 / MAX: 25.79)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

SVT-AV1

SVT-AV1 1.7 | Encoder Mode: Preset 8 - Input: Bosphorus 4K
Frames Per Second, More Is Better
a: 17.27 | b: 17.25 | c: 17.03
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NCNN


NCNN 20230517 | Target: CPU - Model: FastestDet
ms, Fewer Is Better
a: 4.74 (MIN: 4.69 / MAX: 5.22) | b: 4.68 (MIN: 4.62 / MAX: 5.16) | c: 4.74 (MIN: 4.68 / MAX: 5.3)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

OpenVINO 2023.1 | Model: Weld Porosity Detection FP16 - Device: CPU
ms, Fewer Is Better
a: 26.75 (MIN: 20.55 / MAX: 37.44) | b: 26.53 (MIN: 20.36 / MAX: 38.12) | c: 26.87 (MIN: 20.33 / MAX: 43.94)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU
FPS, More Is Better
a: 5097.25 | b: 5149.13 | c: 5162.26
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Weld Porosity Detection FP16 - Device: CPU
FPS, More Is Better
a: 149.41 | b: 150.60 | c: 148.73
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

oneDNN


oneDNN 3.3 | Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU
ms, Fewer Is Better
a: 7.45384 (MIN: 6.49) | b: 7.45289 (MIN: 6.46) | c: 7.54331 (MIN: 6.35)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

SVT-AV1

SVT-AV1 1.7 | Encoder Mode: Preset 13 - Input: Bosphorus 4K
Frames Per Second, More Is Better
a: 42.79 | b: 42.29 | c: 42.40
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

NCNN


NCNN 20230517 | Target: CPU - Model: efficientnet-b0
ms, Fewer Is Better
a: 8.55 (MIN: 8.36 / MAX: 16.45) | b: 8.65 (MIN: 8.49 / MAX: 10.28) | c: 8.64 (MIN: 8.52 / MAX: 10.13)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN


oneDNN 3.3 | Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU
ms, Fewer Is Better
a: 6461.79 (MIN: 6406.51) | b: 6534.40 (MIN: 6472.29) | c: 6523.04 (MIN: 6457.15)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

SVT-AV1

SVT-AV1 1.7 | Encoder Mode: Preset 12 - Input: Bosphorus 4K
Frames Per Second, More Is Better
a: 38.80 | b: 38.68 | c: 38.37
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN


oneDNN 3.3 | Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better
a: 4.36329 (MIN: 3.95) | b: 4.35154 (MIN: 4.04) | c: 4.39631 (MIN: 4.07)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

SVT-AV1

SVT-AV1 1.7 | Encoder Mode: Preset 4 - Input: Bosphorus 4K
Frames Per Second, More Is Better
a: 1.574 | b: 1.560 | c: 1.558
(CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

OpenRadioss


OpenRadioss 2023.09.15 | Model: Bird Strike on Windshield
Seconds, Fewer Is Better
a: 499.42 | b: 497.06 | c: 494.53

libavif avifenc


libavif avifenc 1.0 | Encoder Speed: 0
Seconds, Fewer Is Better
a: 270.50 | b: 270.16 | c: 272.77
(CXX) g++ options: -O3 -fPIC -lm

OpenVINO

OpenVINO 2023.1 | Model: Vehicle Detection FP16 - Device: CPU
FPS, More Is Better
a: 94.29 | b: 93.82 | c: 93.45
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Vehicle Detection FP16 - Device: CPU
ms, Fewer Is Better
a: 42.37 (MIN: 24.69 / MAX: 62.44) | b: 42.59 (MIN: 31.36 / MAX: 60.83) | c: 42.75 (MIN: 26.04 / MAX: 60.31)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
FPS, More Is Better
a: 61.56 | b: 61.50 | c: 61.03
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

NCNN


NCNN 20230517 | Target: CPU - Model: shufflenet-v2
ms, Fewer Is Better
a: 3.52 (MIN: 3.45 / MAX: 4.33) | b: 3.55 (MIN: 3.49 / MAX: 4.42) | c: 3.52 (MIN: 3.46 / MAX: 4.76)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

libavif avifenc


libavif avifenc 1.0 | Encoder Speed: 6
Seconds, Fewer Is Better
a: 13.51 | b: 13.54 | c: 13.63
(CXX) g++ options: -O3 -fPIC -lm

oneDNN


oneDNN 3.3 | Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU
ms, Fewer Is Better
a: 4113.70 (MIN: 4070.68) | b: 4095.25 (MIN: 4059.62) | c: 4079.25 (MIN: 4040.34)
(CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

OpenVINO 2023.1 | Model: Road Segmentation ADAS FP16-INT8 - Device: CPU
ms, Fewer Is Better
a: 64.94 (MIN: 48.91 / MAX: 89.37) | b: 64.98 (MIN: 41.15 / MAX: 98.79) | c: 65.48 (MIN: 44.92 / MAX: 95.54)
(CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe) and is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.

VVenC 1.9 | Video Input: Bosphorus 4K - Video Preset: Faster
Frames Per Second, More Is Better
a: 4.947 | b: 4.907 | c: 4.926
(CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

NCNN


NCNN 20230517 | Target: CPU - Model: googlenet
ms, Fewer Is Better
a: 17.65 (MIN: 17.27 / MAX: 19.24) | b: 17.67 (MIN: 17.22 / MAX: 25.89) | c: 17.78 (MIN: 17.33 / MAX: 20.39)
(CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

OpenVINO 2023.1 | Model: Road Segmentation ADAS FP16 - Device: CPU | FPS, More Is Better | a: 18.79, b: 18.88, c: 18.92 | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

SVT-AV1

SVT-AV1 1.7 | Encoder Mode: Preset 13 - Input: Bosphorus 1080p | Frames Per Second, More Is Better | a: 211.72, b: 211.72, c: 210.31 | (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq

oneDNN


oneDNN 3.3 | Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU | ms, Fewer Is Better | a: 6724.77 (MIN: 6697.44), b: 6686.99 (MIN: 6654.65), c: 6731.81 (MIN: 6695.64) | (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

NCNN


NCNN 20230517 | Target: CPU - Model: mnasnet | ms, Fewer Is Better | a: 4.57 (MIN 4.46 / MAX 5.89), b: 4.54 (MIN 4.44 / MAX 5.81), c: 4.57 (MIN 4.49 / MAX 6.07) | (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

OpenVINO 2023.1 | Model: Road Segmentation ADAS FP16 - Device: CPU | ms, Fewer Is Better | a: 212.75 (MIN 96.01 / MAX 240.84), b: 211.66 (MIN 123.15 / MAX 247.61), c: 211.36 (MIN 176.61 / MAX 241.25) | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Face Detection FP16 - Device: CPU | ms, Fewer Is Better | a: 2569.23 (MIN 2507.71 / MAX 2629.27), b: 2563.03 (MIN 2500.02 / MAX 2625.07), c: 2552.54 (MIN 2468.86 / MAX 2667.96) | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Face Detection FP16 - Device: CPU | FPS, More Is Better | a: 1.55, b: 1.56, c: 1.56 | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
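The Face Detection FP16 result is reported both as throughput (FPS) and as per-request latency (ms), and the two are not simple reciprocals. A small sketch comparing them for run a follows; the implied concurrency figure is an inference from the numbers, since the number of parallel inference requests is not reported in this result file:

```python
# Relate the two Face Detection FP16 charts: per-request latency
# (about 2569 ms for run a) versus throughput (1.55 FPS). If requests
# ran strictly one at a time, throughput would be 1000 / latency_ms,
# roughly 0.39 FPS. The measured throughput is about 4x that, which is
# consistent with several inference requests in flight at once; the
# actual stream count is a guess, not stated in this result file.
latency_ms = 2569.23   # run a, ms per request
throughput_fps = 1.55  # run a, frames per second

serial_fps = 1000.0 / latency_ms
concurrency_estimate = throughput_fps / serial_fps
print(f"serial estimate: {serial_fps:.3f} FPS, "
      f"implied concurrency: {concurrency_estimate:.1f}x")
```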

oneDNN


oneDNN 3.3 | Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU | ms, Fewer Is Better | a: 6737.34 (MIN: 6702.31), b: 6696.23 (MIN: 6662.07), c: 6703.84 (MIN: 6668.77) | (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

VVenC


VVenC 1.9 | Video Input: Bosphorus 1080p - Video Preset: Fast | Frames Per Second, More Is Better | a: 7.605, b: 7.622, c: 7.577 | (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

libavif avifenc

This test of the AOMedia libavif library measures the time taken to encode a JPEG image to the AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 1.0 | Encoder Speed: 10, Lossless | Seconds, Fewer Is Better | a: 8.285, b: 8.333, c: 8.285 | (CXX) g++ options: -O3 -fPIC -lm

Embree

Embree 4.3 | Binary: Pathtracer ISPC - Model: Crown | Frames Per Second, More Is Better | a: 4.2922 (MIN 4.27 / MAX 4.33), b: 4.2996 (MIN 4.27 / MAX 4.35), c: 4.2759 (MIN 4.26 / MAX 4.32)

NCNN


NCNN 20230517 | Target: CPU - Model: vision_transformer | ms, Fewer Is Better | a: 193.64 (MIN 192.59 / MAX 202.18), b: 194.64 (MIN 193.96 / MAX 228.43), c: 194.52 (MIN 193.79 / MAX 203.39) | (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

VVenC


VVenC 1.9 | Video Input: Bosphorus 1080p - Video Preset: Faster | Frames Per Second, More Is Better | a: 17.38, b: 17.34, c: 17.29 | (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

OpenVINO

OpenVINO 2023.1 | Model: Face Detection FP16-INT8 - Device: CPU | FPS, More Is Better | a: 2.05, b: 2.06, c: 2.06 | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

VVenC


VVenC 1.9 | Video Input: Bosphorus 4K - Video Preset: Fast | Frames Per Second, More Is Better | a: 2.356, b: 2.348, c: 2.345 | (CXX) g++ options: -O3 -flto=auto -fno-fat-lto-objects

OpenVINO

OpenVINO 2023.1 | Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU | FPS, More Is Better | a: 2625.67, b: 2616.41, c: 2614.51 | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

NCNN


NCNN 20230517 | Target: CPU - Model: alexnet | ms, Fewer Is Better | a: 9.99 (MIN 9.84 / MAX 11.09), b: 9.98 (MIN 9.81 / MAX 11.32), c: 10.02 (MIN 9.88 / MAX 11.43) | (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN


oneDNN 3.3 | Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU | ms, Fewer Is Better | a: 4149.54 (MIN: 4116.1), b: 4133.13 (MIN: 4094.47), c: 4145.86 (MIN: 4107.56) | (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 | Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU | ms, Fewer Is Better | a: 3.42484 (MIN: 3.26), b: 3.41832 (MIN: 3.11), c: 3.43182 (MIN: 3.23) | (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis. OpenRadioss is based on Altair Radioss and was open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15 | Model: Cell Phone Drop Test | Seconds, Fewer Is Better | a: 193.95, b: 194.05, c: 193.29

OpenRadioss 2023.09.15 | Model: INIVOL and Fluid Structure Interaction Drop Container | Seconds, Fewer Is Better | a: 1253.59, b: 1249.12, c: 1253.90

oneDNN


oneDNN 3.3 | Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU | ms, Fewer Is Better | a: 10.54 (MIN: 10.06), b: 10.52 (MIN: 10.06), c: 10.50 (MIN: 9.99) | (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Embree 4.3 | Binary: Pathtracer - Model: Crown | Frames Per Second, More Is Better | a: 4.7585 (MIN 4.74 / MAX 4.81), b: 4.7405 (MIN 4.72 / MAX 4.8), c: 4.7490 (MIN 4.73 / MAX 4.79)

OpenVINO

OpenVINO 2023.1 | Model: Weld Porosity Detection FP16-INT8 - Device: CPU | FPS, More Is Better | a: 204.56, b: 204.54, c: 203.80 | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

NCNN


NCNN 20230517 | Target: CPU - Model: yolov4-tiny | ms, Fewer Is Better | a: 38.15 (MIN 37.7 / MAX 41.2), b: 38.26 (MIN 37.87 / MAX 46.83), c: 38.12 (MIN 37.67 / MAX 38.97) | (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenVINO

OpenVINO 2023.1 | Model: Weld Porosity Detection FP16-INT8 - Device: CPU | ms, Fewer Is Better | a: 19.53 (MIN 14.45 / MAX 37.23), b: 19.53 (MIN 14.42 / MAX 33.59), c: 19.60 (MIN 14.38 / MAX 34.49) | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Vehicle Detection FP16-INT8 - Device: CPU | ms, Fewer Is Better | a: 30.29 (MIN 19.37 / MAX 50.54), b: 30.39 (MIN 22.38 / MAX 63.1), c: 30.39 (MIN 19.84 / MAX 56.56) | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Embree

Embree 4.3 | Binary: Pathtracer - Model: Asian Dragon Obj | Frames Per Second, More Is Better | a: 5.1118 (MIN 5.09 / MAX 5.16), b: 5.0954 (MIN 5.08 / MAX 5.15), c: 5.1121 (MIN 5.09 / MAX 5.16)

OpenVINO

OpenVINO 2023.1 | Model: Vehicle Detection FP16-INT8 - Device: CPU | FPS, More Is Better | a: 131.93, b: 131.51, c: 131.51 | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

NCNN


NCNN 20230517 | Target: CPU - Model: resnet18 | ms, Fewer Is Better | a: 12.64 (MIN 12.43 / MAX 14.45), b: 12.68 (MIN 12.52 / MAX 14.32), c: 12.65 (MIN 12.48 / MAX 14.77) | (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

oneDNN


oneDNN 3.3 | Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU | ms, Fewer Is Better | a: 3.49160 (MIN: 3.41), b: 3.49591 (MIN: 3.43), c: 3.48493 (MIN: 3.41) | (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

Embree

Embree 4.3 | Binary: Pathtracer ISPC - Model: Asian Dragon | Frames Per Second, More Is Better | a: 5.3249 (MIN 5.31 / MAX 5.38), b: 5.3213 (MIN 5.3 / MAX 5.38), c: 5.3375 (MIN 5.32 / MAX 5.38)

NCNN


NCNN 20230517 | Target: CPU - Model: vgg16 | ms, Fewer Is Better | a: 90.47 (MIN 89.96 / MAX 98.67), b: 90.72 (MIN 90.13 / MAX 99.74), c: 90.54 (MIN 89.98 / MAX 100.81) | (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread

OpenRadioss


OpenRadioss 2023.09.15 | Model: Rubber O-Ring Seal Installation | Seconds, Fewer Is Better | a: 446.33, b: 446.63, c: 445.43

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC) open-source compiler. Learn more via the OpenBenchmarking.org test page.

Timed GCC Compilation 13.2 | Time To Compile | Seconds, Fewer Is Better | a: 1991.50, b: 1990.35, c: 1995.47

easyWave

The easyWave software simulates tsunami generation and propagation in the context of early warning systems. EasyWave supports OpenMP for CPU multi-threading; GPU ports also exist but are not currently incorporated into this test profile. The easyWave tsunami generation software is run with one of the example/reference input files, measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.

easyWave r34 | Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400 | Seconds, Fewer Is Better | a: 1073.64, b: 1072.58, c: 1071.16 | (CXX) g++ options: -O3 -fopenmp

oneDNN


oneDNN 3.3 | Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU | ms, Fewer Is Better | a: 4152.97 (MIN: 4116.58), b: 4145.96 (MIN: 4110), c: 4155.31 (MIN: 4118.92) | (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

OpenVINO 2023.1 | Model: Face Detection FP16-INT8 - Device: CPU | ms, Fewer Is Better | a: 1944.19 (MIN 1885.71 / MAX 1969.4), b: 1940.10 (MIN 1881.22 / MAX 1969.48), c: 1941.23 (MIN 1871.92 / MAX 1958.1) | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

oneDNN


oneDNN 3.3 | Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU | ms, Fewer Is Better | a: 33.05 (MIN: 32.33), b: 33.03 (MIN: 32.6), c: 33.10 (MIN: 32.64) | (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

oneDNN 3.3 | Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU | ms, Fewer Is Better | a: 16.36 (MIN: 15.38), b: 16.38 (MIN: 15.65), c: 16.35 (MIN: 15.62) | (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

easyWave


easyWave r34 | Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 | Seconds, Fewer Is Better | a: 27.08, b: 27.12, c: 27.07 | (CXX) g++ options: -O3 -fopenmp

easyWave r34 | Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 | Seconds, Fewer Is Better | a: 440.84, b: 441.14, c: 440.48 | (CXX) g++ options: -O3 -fopenmp
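The three easyWave inputs differ only in simulated time (240, 1200, and 2400), which makes the run-a results a convenient scaling check. A small sketch computing wall time per simulated second follows; the explanation offered in the comments is a guess, not something stated in this result file:

```python
# Wall time per simulated second for the three easyWave runs (run a).
# The cost per simulated second rises with the simulation window, so
# runtime grows faster than linearly in simulated time. A plausible
# reason is that the propagating wavefront activates more of the grid
# as the simulation advances, but that explanation is an assumption.
runs = {240: 27.08, 1200: 440.84, 2400: 1073.64}  # sim seconds -> wall seconds

cost = {t: wall / t for t, wall in runs.items()}
for t in sorted(cost):
    print(f"time={t:>4}: {cost[t]:.3f} wall-seconds per simulated second")
```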

Embree

Embree 4.3 | Binary: Pathtracer - Model: Asian Dragon | Frames Per Second, More Is Better | a: 5.6936 (MIN 5.67 / MAX 5.75), b: 5.6994 (MIN 5.68 / MAX 5.76), c: 5.6935 (MIN 5.67 / MAX 5.74)

oneDNN


oneDNN 3.3 | Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU | ms, Fewer Is Better | a: 30.65 (MIN: 30.44), b: 30.68 (MIN: 30.51), c: 30.68 (MIN: 30.43) | (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl

OpenVINO

OpenVINO 2023.1 | Model: Face Detection Retail FP16-INT8 - Device: CPU | FPS, More Is Better | a: 463.38, b: 462.94, c: 463.23 | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

Embree

Embree 4.3 | Binary: Pathtracer ISPC - Model: Asian Dragon Obj | Frames Per Second, More Is Better | a: 4.6132 (MIN 4.6 / MAX 4.65), b: 4.6139 (MIN 4.6 / MAX 4.66), c: 4.6097 (MIN 4.59 / MAX 4.66)

OpenVINO

OpenVINO 2023.1 | Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU | ms, Fewer Is Better | a: 0.75 (MIN 0.52 / MAX 12.69), b: 0.75 (MIN 0.52 / MAX 11.08), c: 0.75 (MIN 0.52 / MAX 11.78) | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU | ms, Fewer Is Better | a: 1.49 (MIN 1.03 / MAX 16.09), b: 1.49 (MIN 0.98 / MAX 18.34), c: 1.49 (MIN 1.13 / MAX 19.2) | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVINO 2023.1 | Model: Face Detection Retail FP16-INT8 - Device: CPU | ms, Fewer Is Better | a: 8.6 (MIN 6.16 / MAX 18.26), b: 8.6 (MIN 6.19 / MAX 21.79), c: 8.6 (MIN 6.18 / MAX 18.51) | (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.

OpenVKL 2.0.0 | Benchmark: vklBenchmarkCPU ISPC | Items / Sec, More Is Better | a: 89 (MIN 7 / MAX 1183), b: 89 (MIN 7 / MAX 1185), c: 89 (MIN 7 / MAX 1167)

Intel Open Image Denoise

Intel Open Image Denoise 2.1 | Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only | Images / Sec, More Is Better | a: 0.12, b: 0.12, c: 0.12

Intel Open Image Denoise 2.1 | Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only | Images / Sec, More Is Better | a: 0.24, b: 0.24, c: 0.24

Intel Open Image Denoise 2.1 | Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only | Images / Sec, More Is Better | a: 0.24, b: 0.24, c: 0.24
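The View options in this result file include "Show Overall Geometric Mean", which aggregates many heterogeneous tests into one figure. A minimal sketch of the idea: normalize each test to a baseline run so every test becomes a unitless ratio, then take the geometric mean of the ratios. The normalization below (run b relative to run a, with lower-is-better results inverted) is an assumption for illustration; the exact aggregation PTS performs may differ.

```python
import math

def geomean(xs):
    """Geometric mean of a list of positive ratios."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Run b's showing relative to run a on three tests from this result
# file (ratios above 1.0 would favor b).
b_vs_a = [
    211.72 / 211.72,  # SVT-AV1 Preset 13 1080p (FPS, higher is better)
    13.51 / 13.54,    # avifenc Speed 6 (seconds, lower is better: inverted)
    4.907 / 4.947,    # VVenC 4K Faster (FPS, higher is better)
]
print(f"geometric mean, b vs a: {geomean(b_vs_a):.4f}")
```

Unlike an arithmetic mean, the geometric mean of ratios does not change if any single test's baseline is rescaled, which is why it is the usual choice for cross-test summaries.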

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.

Test: Writes

a, b, c: The test run did not produce a result.

oneDNN


Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

a, b, c: The test run did not produce a result.

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

a, b, c: The test run did not produce a result.

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

a, b, c: The test run did not produce a result.

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

a, b, c: The test run did not produce a result.

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

a, b, c: The test run did not produce a result.

107 Results Shown

SVT-AV1
libavif avifenc
SVT-AV1
OpenVINO
oneDNN
OpenVINO
BRL-CAD
OpenVINO:
  Handwritten English Recognition FP16-INT8 - CPU:
    FPS
    ms
oneDNN
SVT-AV1
NCNN:
  CPU - resnet50
  CPU - squeezenet_ssd
  CPU-v3-v3 - mobilenet-v3
OpenVINO
OpenVKL
NCNN
OpenVINO:
  Machine Translation EN To DE FP16 - CPU
  Person Detection FP16 - CPU
  Machine Translation EN To DE FP16 - CPU
OpenRadioss
OpenVINO
libavif avifenc
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU
  Handwritten English Recognition FP16 - CPU
NCNN
OpenVINO:
  Handwritten English Recognition FP16 - CPU
  Face Detection Retail FP16 - CPU
  Face Detection Retail FP16 - CPU
NCNN:
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
SVT-AV1
NCNN
OpenVINO:
  Weld Porosity Detection FP16 - CPU
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Weld Porosity Detection FP16 - CPU
oneDNN
SVT-AV1
NCNN
oneDNN
SVT-AV1
oneDNN
SVT-AV1
OpenRadioss
libavif avifenc
OpenVINO:
  Vehicle Detection FP16 - CPU:
    FPS
    ms
  Road Segmentation ADAS FP16-INT8 - CPU:
    FPS
NCNN
libavif avifenc
oneDNN
OpenVINO
VVenC
NCNN
OpenVINO
SVT-AV1
oneDNN
NCNN
OpenVINO:
  Road Segmentation ADAS FP16 - CPU
  Face Detection FP16 - CPU
  Face Detection FP16 - CPU
oneDNN
VVenC
libavif avifenc
Embree
NCNN
VVenC
OpenVINO
VVenC
OpenVINO
NCNN
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
OpenRadioss:
  Cell Phone Drop Test
  INIVOL and Fluid Structure Interaction Drop Container
oneDNN
Embree
OpenVINO
NCNN
OpenVINO:
  Weld Porosity Detection FP16-INT8 - CPU
  Vehicle Detection FP16-INT8 - CPU
Embree
OpenVINO
NCNN
oneDNN
Embree
NCNN
OpenRadioss
Timed GCC Compilation
easyWave
oneDNN
OpenVINO
oneDNN:
  Convolution Batch Shapes Auto - f32 - CPU
  IP Shapes 3D - f32 - CPU
easyWave:
  e2Asean Grid + BengkuluSept2007 Source - 240
  e2Asean Grid + BengkuluSept2007 Source - 1200
Embree
oneDNN
OpenVINO
Embree
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
  Face Detection Retail FP16-INT8 - CPU
OpenVKL
Intel Open Image Denoise:
  RTLightmap.hdr.4096x4096 - CPU-Only
  RT.ldr_alb_nrm.3840x2160 - CPU-Only
  RT.hdr_alb_nrm.3840x2160 - CPU-Only