ss

AMD Ryzen 7 4700U testing with a LENOVO LNVNB161216 (DTCN18WWV1.04 BIOS) and AMD Renoir 512MB on Ubuntu 23.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2310196-NE-SS374327796

This result file spans the following test suites: AV1 (2 tests), CPU Massive (4 tests), Creator Workloads (9 tests), Encoding (3 tests), Game Development (2 tests), HPC - High Performance Computing (4 tests), Machine Learning (3 tests), Multi-Core (10 tests), Intel oneAPI (5 tests), Server CPU Tests (3 tests), Video Encoding (3 tests).


Run Management

Highlight
Result
Hide
Result
Result
Identifier
View Logs
Performance Per
Dollar
Date
Run
  Test
  Duration
a
October 18 2023
  4 Hours, 35 Minutes
b
October 19 2023
  4 Hours, 33 Minutes
c
October 19 2023
  4 Hours, 33 Minutes
Invert Hiding All Results Option
  4 Hours, 34 Minutes



Processor: AMD Ryzen 7 4700U @ 2.00GHz (8 Cores)
Motherboard: LENOVO LNVNB161216 (DTCN18WWV1.04 BIOS)
Chipset: AMD Renoir/Cezanne
Memory: 16GB
Disk: 512GB SAMSUNG MZALQ512HALU-000L2
Graphics: AMD Renoir 512MB (1600/400MHz)
Audio: AMD Renoir Radeon HD Audio
Network: Intel Wi-Fi 6 AX200
OS: Ubuntu 23.04
Kernel: 6.2.0-24-generic (x86_64)
Desktop: GNOME Shell 44.2
Display Server: X Server + Wayland
OpenGL: 4.6 Mesa 23.0.2 (LLVM 15.0.7 DRM 3.49)
Compilers: GCC 12.2.0 (runs a and b), GCC 12.3.0 (run c)
File-System: ext4
Screen Resolution: 1920x1080

System Logs
- Transparent Huge Pages: madvise
- Compiler configure flags, runs a and b (GCC 12.2.0): --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-defaulted --enable-offload-targets=nvptx-none=/build/gcc-12-Pa930Z/gcc-12-12.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-12-Pa930Z/gcc-12-12.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Compiler configure flags, run c (GCC 12.3.0): identical to the above except the offload-target build paths reference /build/gcc-12-DAPbBt/gcc-12-12.3.0 instead of /build/gcc-12-Pa930Z/gcc-12-12.2.0
- Scaling Governor: acpi-cpufreq schedutil (Boost: Enabled)
- Platform Profile: balanced
- CPU Microcode: 0x8600102
- ACPI Profile: balanced
- Java: runs a and b: OpenJDK Runtime Environment (build 17.0.7+7-Ubuntu-0ubuntu123.04); run c: OpenJDK Runtime Environment (build 17.0.8.1+1-Ubuntu-0ubuntu123.04)
- Python: runs a and b: Python 3.11.2; run c: Python 3.11.4
- Security: itlb_multihit: Not affected; l1tf: Not affected; mds: Not affected; meltdown: Not affected; mmio_stale_data: Not affected; retbleed: Mitigation of untrained return thunk, SMT disabled; spec_store_bypass: Mitigation of SSB disabled via prctl; spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization; spectre_v2: Mitigation of Retpolines, IBPB: conditional, STIBP: disabled, RSB filling, PBRSB-eIBRS: Not affected; srbds: Not affected; tsx_async_abort: Not affected

Result Overview (Phoronix Test Suite; runs a, b, c normalized, overall spread roughly 100% to 103%): BRL-CAD, SVT-AV1, libavif avifenc, OpenVKL, VVenC, OpenRadioss, OpenVINO, NCNN, oneDNN, easyWave, Embree, Timed GCC Compilation, Intel Open Image Denoise

[Detailed side-by-side results table omitted; the per-test values for runs a, b, and c are reproduced in the individual results below.]

SVT-AV1

SVT-AV1 1.7, Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 47.47, b: 45.57, c: 45.45

libavif avifenc

This test uses the AOMedia libavif library to encode a JPEG image into the AV1 Image File Format (AVIF). Learn more via the OpenBenchmarking.org test page.
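
A rough manual equivalent of what this profile runs, assuming a JPEG source image named input.jpg (the -s speed and -l lossless flags mirror the "Encoder Speed" options in the results below; treat the exact invocation as an approximation, not the test profile's own command line):

    avifenc -s 6 -l input.jpg output.avif   # Encoder Speed: 6, Lossless
    avifenc -s 2 input.jpg output.avif      # Encoder Speed: 2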

libavif avifenc 1.0, Encoder Speed: 6, Lossless (Seconds, Fewer Is Better): a: 19.65, b: 20.33, c: 19.61

SVT-AV1

SVT-AV1 1.7, Encoder Mode: Preset 4 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 5.504, b: 5.382, c: 5.327

OpenVINO

OpenVINO 2023.1, Model: Person Detection FP32 - Device: CPU (FPS, More Is Better): a: 13.17, b: 13.55, c: 13.28

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for deep neural networks, using its built-in benchdnn benchmarking functionality. The result is the total performance time reported by benchdnn. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
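
For reference, benchdnn is driven from the command line; a hedged sketch of a performance-mode run resembling the deconvolution harnesses below (the batch-file path is an assumption based on the oneDNN source layout, not taken from this result file):

    ./benchdnn --deconv --mode=P --engine=cpu --dt=f32 --batch=inputs/deconv/shapes_3d   # performance mode, f32 deconvolution shapes on the CPU engine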

oneDNN 3.3, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 10.43, b: 10.73, c: 10.45

OpenVINO

OpenVINO 2023.1, Model: Person Detection FP32 - Device: CPU (ms, Fewer Is Better): a: 303.30, b: 295.24, c: 300.85

BRL-CAD

BRL-CAD is a cross-platform, open-source solid modeling system with a built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
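
The VGR figure comes from BRL-CAD's bundled benchmark script, which ray-traces a set of reference scenes and derives an aggregate performance number; a minimal sketch, assuming a BRL-CAD installation with its bin directory on PATH (older releases run the script without a subcommand):

    benchmark run   # ray-traces the reference scenes and reports a VGR Performance Metric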

BRL-CAD 7.36, VGR Performance Metric (More Is Better): a: 81531, b: 79432, c: 81055

OpenVINO

OpenVINO 2023.1, Model: Handwritten English Recognition FP16-INT8 - Device: CPU (FPS, More Is Better): a: 49.86, b: 48.73, c: 48.58

OpenVINO 2023.1, Model: Handwritten English Recognition FP16-INT8 - Device: CPU (ms, Fewer Is Better): a: 80.18, b: 82.02, c: 82.26

oneDNN

oneDNN 3.3, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): a: 5.30540, b: 5.44214, c: 5.35481

SVT-AV1

SVT-AV1 1.7, Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 172.62, b: 169.67, c: 168.52

NCNN

NCNN is a high-performance neural network inference framework, developed by Tencent, that is optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
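
NCNN ships a benchncnn utility that loops over the bundled reference models, much like the per-model CPU results below; a minimal sketch, where the positional arguments are loop count, thread count, powersave mode, GPU device (-1 selects CPU), and cooling-down interval:

    ./benchncnn 10 8 0 -1 0   # 10 loops, 8 threads, CPU-only; thread count chosen to match this 8-core Ryzen 7 4700U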

NCNN 20230517, Target: CPU - Model: resnet50 (ms, Fewer Is Better): a: 29.58, b: 29.14, c: 28.89

NCNN 20230517, Target: CPU - Model: squeezenet_ssd (ms, Fewer Is Better): a: 18.06, b: 18.49, c: 18.26

NCNN 20230517, Target: CPU-v3-v3 - Model: mobilenet-v3 (ms, Fewer Is Better): a: 4.53, b: 4.58, c: 4.63

OpenVINO

OpenVINO 2023.1, Model: Person Detection FP16 - Device: CPU (FPS, More Is Better): a: 13.28, b: 13.51, c: 13.22

OpenVKL

OpenVKL is the Intel Open Volume Kernel Library, which offers high-performance volume computation kernels and is part of the Intel oneAPI Rendering Toolkit. Learn more via the OpenBenchmarking.org test page.
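
The vklBenchmarkCPU numbers below come from OpenVKL's bundled benchmark binary, which is built on Google Benchmark; a hedged sketch of a manual run (the binary name is inferred from the result labels and the filter expression is illustrative):

    ./vklBenchmarkCPU --benchmark_filter=scalar   # restrict the run to the scalar variants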

OpenVKL 2.0.0, Benchmark: vklBenchmarkCPU Scalar (Items / Sec, More Is Better): a: 47, b: 47, c: 46

NCNN

NCNN 20230517, Target: CPU - Model: regnety_400m (ms, Fewer Is Better): a: 8.32, b: 8.48, c: 8.50

OpenVINO

OpenVINO 2023.1, Model: Machine Translation EN To DE FP16 - Device: CPU (ms, Fewer Is Better): a: 230.97, b: 235.81, c: 231.16

OpenVINO 2023.1, Model: Person Detection FP16 - Device: CPU (ms, Fewer Is Better): a: 300.95, b: 295.93, c: 302.13

OpenVINO 2023.1, Model: Machine Translation EN To DE FP16 - Device: CPU (FPS, More Is Better): a: 17.29, b: 16.95, c: 17.28

OpenRadioss

OpenRadioss is an open-source, AGPL-licensed finite element solver for dynamic event analysis. It is based on Altair Radioss, which was open-sourced in 2022. This solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test currently uses a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
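
OpenRadioss jobs are a two-step starter/engine sequence; a hedged sketch for a model such as Bumper Beam, with input file names assumed from the OpenRadioss example models rather than taken from this result file:

    export OMP_NUM_THREADS=8                          # one OpenMP thread per core on this CPU
    ./starter_linux64_gf -i BUMPER_0000.rad -np 1     # reads the input deck and writes restart files
    ./engine_linux64_gf -i BUMPER_0001.rad            # runs the transient solve that gets timed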

OpenRadioss 2023.09.15, Model: Bumper Beam (Seconds, Fewer Is Better): a: 327.20, b: 329.40, c: 323.25

OpenVINO

OpenVINO 2023.1, Model: Person Vehicle Bike Detection FP16 - Device: CPU (FPS, More Is Better): a: 187.25, b: 190.52, c: 190.58

libavif avifenc

libavif avifenc 1.0, Encoder Speed: 2 (Seconds, Fewer Is Better): a: 121.67, b: 123.50, c: 123.82

OpenVINO

OpenVINO 2023.1, Model: Person Vehicle Bike Detection FP16 - Device: CPU (ms, Fewer Is Better): a: 21.33, b: 20.97, c: 20.96

OpenVINO 2023.1, Model: Handwritten English Recognition FP16 - Device: CPU (ms, Fewer Is Better): a: 85.92, b: 85.32, c: 86.79

NCNN

NCNN 20230517, Target: CPU - Model: blazeface (ms, Fewer Is Better): a: 1.18, b: 1.19, c: 1.20

OpenVINO

OpenVINO 2023.1, Model: Handwritten English Recognition FP16 - Device: CPU (FPS, More Is Better): a: 46.52, b: 46.84, c: 46.07

OpenVINO 2023.1, Model: Face Detection Retail FP16 - Device: CPU (ms, Fewer Is Better): a: 10.41, b: 10.28, c: 10.44

OpenVINO 2023.1, Model: Face Detection Retail FP16 - Device: CPU (FPS, More Is Better): a: 382.87, b: 387.95, c: 382.01

NCNN

NCNN 20230517, Target: CPU-v2-v2 - Model: mobilenet-v2 (ms, Fewer Is Better): a: 6.57, b: 6.50, c: 6.47

NCNN 20230517, Target: CPU - Model: mobilenet (ms, Fewer Is Better): a: 23.33, b: 23.67, c: 23.56

SVT-AV1

SVT-AV1 1.7, Encoder Mode: Preset 8 - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 17.27, b: 17.25, c: 17.03

NCNN

NCNN 20230517, Target: CPU - Model: FastestDet (ms, Fewer Is Better): a: 4.74, b: 4.68, c: 4.74

OpenVINO

OpenVINO 2023.1, Model: Weld Porosity Detection FP16 - Device: CPU (ms, Fewer Is Better): a: 26.75, b: 26.53, c: 26.87

OpenVINO 2023.1, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (FPS, More Is Better): a: 5097.25, b: 5149.13, c: 5162.26

OpenVINO 2023.1, Model: Weld Porosity Detection FP16 - Device: CPU (FPS, More Is Better): a: 149.41, b: 150.60, c: 148.73

oneDNN

oneDNN 3.3, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 7.45384, b: 7.45289, c: 7.54331

SVT-AV1

SVT-AV1 1.7, Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 42.79, b: 42.29, c: 42.40

NCNN

NCNN 20230517, Target: CPU - Model: efficientnet-b0 (ms, Fewer Is Better): a: 8.55, b: 8.65, c: 8.64

oneDNN

oneDNN 3.3, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 6461.79, b: 6534.40, c: 6523.04

SVT-AV1

SVT-AV1 1.7, Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 38.80, b: 38.68, c: 38.37

oneDNN

oneDNN 3.3, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): a: 4.36329, b: 4.35154, c: 4.39631

SVT-AV1

SVT-AV1 1.7, Encoder Mode: Preset 4 - Input: Bosphorus 4K (Frames Per Second, More Is Better): a: 1.574, b: 1.560, c: 1.558

OpenRadioss

OpenRadioss 2023.09.15, Model: Bird Strike on Windshield (Seconds, Fewer Is Better): a: 499.42, b: 497.06, c: 494.53

libavif avifenc

libavif avifenc 1.0, Encoder Speed: 0 (Seconds, Fewer Is Better): a: 270.50, b: 270.16, c: 272.77

OpenVINO

OpenVINO 2023.1, Model: Vehicle Detection FP16 - Device: CPU (FPS, More Is Better): a: 94.29, b: 93.82, c: 93.45

OpenVINO 2023.1, Model: Vehicle Detection FP16 - Device: CPU (ms, Fewer Is Better): a: 42.37, b: 42.59, c: 42.75

OpenVINO 2023.1, Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (FPS, More Is Better): a: 61.56, b: 61.50, c: 61.03

NCNN

NCNN 20230517, Target: CPU - Model: shufflenet-v2 (ms, Fewer Is Better): a: 3.52, b: 3.55, c: 3.52

libavif avifenc

libavif avifenc 1.0, Encoder Speed: 6 (Seconds, Fewer Is Better): a: 13.51, b: 13.54, c: 13.63

oneDNN

oneDNN 3.3, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): a: 4113.70, b: 4095.25, c: 4079.25

OpenVINO

OpenVINO 2023.1, Model: Road Segmentation ADAS FP16-INT8 - Device: CPU (ms, Fewer Is Better): a: 64.94, b: 64.98, c: 65.48

VVenC

VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
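
A hedged example of driving the vvencapp front end at the settings used below, assuming a raw YUV source file (the file name and frame rate are illustrative, not taken from this result file):

    vvencapp -i Bosphorus_1920x1080.yuv -s 1920x1080 -r 60 --preset fast -o out.266   # 1080p Fast preset encode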

VVenC 1.9, Video Input: Bosphorus 4K - Video Preset: Faster (Frames Per Second, More Is Better): a: 4.947, b: 4.907, c: 4.926

NCNN

NCNN 20230517, Target: CPU - Model: googlenet (ms, Fewer Is Better): a: 17.65, b: 17.67, c: 17.78

OpenVINO

OpenVINO 2023.1, Model: Road Segmentation ADAS FP16 - Device: CPU (FPS, More Is Better): a: 18.79, b: 18.88, c: 18.92

SVT-AV1

SVT-AV1 1.7, Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, More Is Better): a: 211.72, b: 211.72, c: 210.31

oneDNN

oneDNN 3.3, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): a: 6724.77, b: 6686.99, c: 6731.81

NCNN

NCNN 20230517, Target: CPU - Model: mnasnet (ms, Fewer Is Better): a: 4.57, b: 4.54, c: 4.57

OpenVINO

OpenVINO 2023.1, Model: Road Segmentation ADAS FP16 - Device: CPU (ms, Fewer Is Better): a: 212.75, b: 211.66, c: 211.36

OpenVINO 2023.1, Model: Face Detection FP16 - Device: CPU (ms, Fewer Is Better): a: 2569.23, b: 2563.03, c: 2552.54

OpenVINO 2023.1, Model: Face Detection FP16 - Device: CPU (FPS, More Is Better): a: 1.55, b: 1.56, c: 1.56

oneDNN

oneDNN 3.3, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better): a: 6737.34, b: 6696.23, c: 6703.84

VVenC

VVenC 1.9, Video Input: Bosphorus 1080p - Video Preset: Fast (Frames Per Second, More Is Better): a: 7.605, b: 7.622, c: 7.577

libavif avifenc

libavif avifenc 1.0, Encoder Speed: 10, Lossless (Seconds, Fewer Is Better): a: 8.285, b: 8.333, c: 8.285

Embree

Embree 4.3, Binary: Pathtracer ISPC - Model: Crown (Frames Per Second, More Is Better): a: 4.2922, b: 4.2996, c: 4.2759

NCNN

NCNN 20230517, Target: CPU - Model: vision_transformer (ms, Fewer Is Better): a: 193.64, b: 194.64, c: 194.52

VVenC

VVenC 1.9, Video Input: Bosphorus 1080p - Video Preset: Faster (Frames Per Second, More Is Better): a: 17.38, b: 17.34, c: 17.29

OpenVINO

OpenVINO 2023.1, Model: Face Detection FP16-INT8 - Device: CPU (FPS, More Is Better): a: 2.05, b: 2.06, c: 2.06

VVenC

VVenC 1.9, Video Input: Bosphorus 4K - Video Preset: Fast (Frames Per Second, More Is Better): a: 2.356, b: 2.348, c: 2.345

OpenVINO

OpenVINO 2023.1, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (FPS, More Is Better): a: 2625.67, b: 2616.41, c: 2614.51

NCNN

NCNN 20230517, Target: CPU - Model: alexnet (ms, Fewer Is Better): a: 9.99, b: 9.98, c: 10.02

oneDNN

oneDNN 3.3, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 4149.54, b: 4133.13, c: 4145.86

oneDNN 3.3, Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): a: 3.42484, b: 3.41832, c: 3.43182

OpenRadioss

OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis OpenRadioss is based on Altair Radioss and open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.

OpenRadioss 2023.09.15, Model: Cell Phone Drop Test (Seconds, Fewer Is Better): a: 193.95, b: 194.05, c: 193.29

OpenRadioss 2023.09.15, Model: INIVOL and Fluid Structure Interaction Drop Container (Seconds, Fewer Is Better): a: 1253.59, b: 1249.12, c: 1253.90

oneDNN

oneDNN 3.3, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 10.54, b: 10.52, c: 10.50

Embree

Embree 4.3, Binary: Pathtracer - Model: Crown (Frames Per Second, More Is Better): a: 4.7585, b: 4.7405, c: 4.7490

OpenVINO

OpenVINO 2023.1, Model: Weld Porosity Detection FP16-INT8 - Device: CPU (FPS, More Is Better): a: 204.56, b: 204.54, c: 203.80

NCNN

NCNN 20230517, Target: CPU - Model: yolov4-tiny (ms, Fewer Is Better): a: 38.15, b: 38.26, c: 38.12

OpenVINO

OpenVINO 2023.1, Model: Weld Porosity Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): a: 19.53, b: 19.53, c: 19.60

OpenVINO 2023.1, Model: Vehicle Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): a: 30.29, b: 30.39, c: 30.39

Embree

Embree 4.3, Binary: Pathtracer - Model: Asian Dragon Obj (Frames Per Second, More Is Better): a: 5.1118, b: 5.0954, c: 5.1121

OpenVINO

OpenVINO 2023.1, Model: Vehicle Detection FP16-INT8 - Device: CPU (FPS, More Is Better): a: 131.93, b: 131.51, c: 131.51

NCNN

NCNN 20230517, Target: CPU - Model: resnet18 (ms, Fewer Is Better): a: 12.64, b: 12.68, c: 12.65

oneDNN

oneDNN 3.3, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): a: 3.49160, b: 3.49591, c: 3.48493

Embree

Embree 4.3, Binary: Pathtracer ISPC - Model: Asian Dragon (Frames Per Second, More Is Better): a: 5.3249, b: 5.3213, c: 5.3375

NCNN

NCNN 20230517, Target: CPU - Model: vgg16 (ms, Fewer Is Better): a: 90.47, b: 90.72, c: 90.54

OpenRadioss

OpenRadioss 2023.09.15, Model: Rubber O-Ring Seal Installation (Seconds, Fewer Is Better): a: 446.33, b: 446.63, c: 445.43

Timed GCC Compilation

This test times how long it takes to build the GNU Compiler Collection (GCC) open-source compiler. Learn more via the OpenBenchmarking.org test page.
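
Conceptually this is just a timed out-of-tree build of GCC; a rough sketch, with the job count and configure options as assumptions rather than the exact test-profile invocation:

    tar xf gcc-13.2.0.tar.xz
    mkdir build && cd build
    ../gcc-13.2.0/configure --disable-multilib --enable-languages=c,c++
    time make -j8   # time the full build; -j8 matches this 8-core CPU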

Timed GCC Compilation 13.2, Time To Compile (Seconds, Fewer Is Better): a: 1991.50, b: 1990.35, c: 1995.47

easyWave

The easyWave software simulates tsunami generation and propagation in the context of early-warning systems. easyWave supports OpenMP for CPU multi-threading; GPU ports also exist but are not currently incorporated into this test profile. The tsunami-generation software is run with one of the example/reference input files while measuring CPU execution time. Learn more via the OpenBenchmarking.org test page.
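
easyWave takes a bathymetry grid, a fault/source file, and a simulated-time limit in minutes; a hedged sketch matching the inputs named below, with binary and file names following the easyWave example-data layout (treat them as assumptions):

    easyWave -grid e2Asean.grd -source BengkuluSept2007.flt -time 2400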

easyWave r34, Input: e2Asean Grid + BengkuluSept2007 Source - Time: 2400 (Seconds, Fewer Is Better): a: 1073.64, b: 1072.58, c: 1071.16

oneDNN

oneDNN 3.3, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, Fewer Is Better): a: 4152.97, b: 4145.96, c: 4155.31

OpenVINO

OpenVINO 2023.1, Model: Face Detection FP16-INT8 - Device: CPU (ms, Fewer Is Better): a: 1944.19, b: 1940.10, c: 1941.23

oneDNN

oneDNN 3.3, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 33.05, b: 33.03, c: 33.10

oneDNN 3.3, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better): a: 16.36, b: 16.38, c: 16.35

easyWave

easyWave r34, Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 (Seconds, Fewer Is Better): a: 27.08, b: 27.12, c: 27.07

easyWave r34, Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 (Seconds, Fewer Is Better): a: 440.84, b: 441.14, c: 440.48

Embree

Embree 4.3, Binary: Pathtracer - Model: Asian Dragon (Frames Per Second, More Is Better): a: 5.6936, b: 5.6994, c: 5.6935

oneDNN

oneDNN 3.3, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, Fewer Is Better): a: 30.65, b: 30.68, c: 30.68

OpenVINO

OpenVINO 2023.1, Model: Face Detection Retail FP16-INT8 - Device: CPU (FPS, More Is Better): a: 463.38, b: 462.94, c: 463.23

Embree

Embree 4.3, Binary: Pathtracer ISPC - Model: Asian Dragon Obj (Frames Per Second, More Is Better): a: 4.6132, b: 4.6139, c: 4.6097

OpenVINO

OpenVINO 2023.1, Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU (ms, Fewer Is Better): a: 0.75, b: 0.75, c: 0.75

OpenVINO 2023.1, Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU (ms, Fewer Is Better): a: 1.49, b: 1.49, c: 1.49

OpenVINO 2023.1, Model: Face Detection Retail FP16-INT8 - Device: CPU (ms, Fewer Is Better): a: 8.6, b: 8.6, c: 8.6

OpenVKL

OpenVKL 2.0.0, Benchmark: vklBenchmarkCPU ISPC (Items / Sec, More Is Better): a: 89, b: 89, c: 89

Intel Open Image Denoise

Intel Open Image Denoise 2.1, Run: RTLightmap.hdr.4096x4096 - Device: CPU-Only (Images / Sec, More Is Better): a: 0.12, b: 0.12, c: 0.12

Intel Open Image Denoise 2.1, Run: RT.ldr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better): a: 0.24, b: 0.24, c: 0.24

Intel Open Image Denoise 2.1, Run: RT.hdr_alb_nrm.3840x2160 - Device: CPU-Only (Images / Sec, More Is Better): a: 0.24, b: 0.24, c: 0.24

Apache Cassandra

This is a benchmark of the Apache Cassandra NoSQL database management system making use of cassandra-stress. Learn more via the OpenBenchmarking.org test page.
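
For reference, cassandra-stress write workloads of the kind this profile attempts are launched along these lines (the operation count and thread count are illustrative):

    cassandra-stress write n=1000000 -rate threads=64   # one million write operations across 64 client threads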

Test: Writes

a: The test run did not produce a result.

b: The test run did not produce a result.

c: The test run did not produce a result.

oneDNN

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

a: The test run did not produce a result.

b: The test run did not produce a result.

c: The test run did not produce a result.

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

a: The test run did not produce a result.

b: The test run did not produce a result.

c: The test run did not produce a result.

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

a: The test run did not produce a result.

b: The test run did not produce a result.

c: The test run did not produce a result.

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

a: The test run did not produce a result.

b: The test run did not produce a result.

c: The test run did not produce a result.

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

a: The test run did not produce a result.

b: The test run did not produce a result.

c: The test run did not produce a result.

107 Results Shown

SVT-AV1
libavif avifenc
SVT-AV1
OpenVINO
oneDNN
OpenVINO
BRL-CAD
OpenVINO:
  Handwritten English Recognition FP16-INT8 - CPU:
    FPS
    ms
oneDNN
SVT-AV1
NCNN:
  CPU - resnet50
  CPU - squeezenet_ssd
  CPU-v3-v3 - mobilenet-v3
OpenVINO
OpenVKL
NCNN
OpenVINO:
  Machine Translation EN To DE FP16 - CPU
  Person Detection FP16 - CPU
  Machine Translation EN To DE FP16 - CPU
OpenRadioss
OpenVINO
libavif avifenc
OpenVINO:
  Person Vehicle Bike Detection FP16 - CPU
  Handwritten English Recognition FP16 - CPU
NCNN
OpenVINO:
  Handwritten English Recognition FP16 - CPU
  Face Detection Retail FP16 - CPU
  Face Detection Retail FP16 - CPU
NCNN:
  CPU-v2-v2 - mobilenet-v2
  CPU - mobilenet
SVT-AV1
NCNN
OpenVINO:
  Weld Porosity Detection FP16 - CPU
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Weld Porosity Detection FP16 - CPU
oneDNN
SVT-AV1
NCNN
oneDNN
SVT-AV1
oneDNN
SVT-AV1
OpenRadioss
libavif avifenc
OpenVINO:
  Vehicle Detection FP16 - CPU:
    FPS
    ms
  Road Segmentation ADAS FP16-INT8 - CPU:
    FPS
NCNN
libavif avifenc
oneDNN
OpenVINO
VVenC
NCNN
OpenVINO
SVT-AV1
oneDNN
NCNN
OpenVINO:
  Road Segmentation ADAS FP16 - CPU
  Face Detection FP16 - CPU
  Face Detection FP16 - CPU
oneDNN
VVenC
libavif avifenc
Embree
NCNN
VVenC
OpenVINO
VVenC
OpenVINO
NCNN
oneDNN:
  Recurrent Neural Network Inference - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
OpenRadioss:
  Cell Phone Drop Test
  INIVOL and Fluid Structure Interaction Drop Container
oneDNN
Embree
OpenVINO
NCNN
OpenVINO:
  Weld Porosity Detection FP16-INT8 - CPU
  Vehicle Detection FP16-INT8 - CPU
Embree
OpenVINO
NCNN
oneDNN
Embree
NCNN
OpenRadioss
Timed GCC Compilation
easyWave
oneDNN
OpenVINO
oneDNN:
  Convolution Batch Shapes Auto - f32 - CPU
  IP Shapes 3D - f32 - CPU
easyWave:
  e2Asean Grid + BengkuluSept2007 Source - 240
  e2Asean Grid + BengkuluSept2007 Source - 1200
Embree
oneDNN
OpenVINO
Embree
OpenVINO:
  Age Gender Recognition Retail 0013 FP16-INT8 - CPU
  Age Gender Recognition Retail 0013 FP16 - CPU
  Face Detection Retail FP16-INT8 - CPU
OpenVKL
Intel Open Image Denoise:
  RTLightmap.hdr.4096x4096 - CPU-Only
  RT.ldr_alb_nrm.3840x2160 - CPU-Only
  RT.hdr_alb_nrm.3840x2160 - CPU-Only