AMD Ryzen 5 5500U testing with a NB01 NL5xNU (1.07.11RTR1 BIOS) and AMD Lucienne 512MB on Tuxedo 22.04 via the Phoronix Test Suite.
a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: amd-pstate ondemand (Boost: Enabled) - CPU Microcode: 0x8608103Java Notes: OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04)Python Notes: Python 3.10.6Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
b c Processor: AMD Ryzen 5 5500U @ 4.06GHz (6 Cores / 12 Threads), Motherboard: NB01 NL5xNU (1.07.11RTR1 BIOS), Chipset: AMD Renoir/Cezanne, Memory: 16GB, Disk: Samsung SSD 970 EVO Plus 500GB, Graphics: AMD Lucienne 512MB (1800/400MHz), Audio: AMD Renoir Radeon HD Audio, Network: Realtek RTL8111/8168/8411 + Intel Wi-Fi 6 AX200
OS: Tuxedo 22.04, Kernel: 6.0.0-1010-oem (x86_64), Desktop: KDE Plasma 5.26.5, Display Server: X Server 1.21.1.3, OpenGL: 4.6 Mesa 22.3.7 (LLVM 14.0.0 DRM 3.48), Vulkan: 1.3.230, Compiler: GCC 11.3.0, File-System: ext4, Screen Resolution: 1920x1080
fhe OpenBenchmarking.org Phoronix Test Suite AMD Ryzen 5 5500U @ 4.06GHz (6 Cores / 12 Threads) NB01 NL5xNU (1.07.11RTR1 BIOS) AMD Renoir/Cezanne 16GB Samsung SSD 970 EVO Plus 500GB AMD Lucienne 512MB (1800/400MHz) AMD Renoir Radeon HD Audio Realtek RTL8111/8168/8411 + Intel Wi-Fi 6 AX200 Tuxedo 22.04 6.0.0-1010-oem (x86_64) KDE Plasma 5.26.5 X Server 1.21.1.3 4.6 Mesa 22.3.7 (LLVM 14.0.0 DRM 3.48) 1.3.230 GCC 11.3.0 ext4 1920x1080 Processor Motherboard Chipset Memory Disk Graphics Audio Network OS Kernel Desktop Display Server OpenGL Vulkan Compiler File-System Screen Resolution Fhe Benchmarks System Logs - Transparent Huge Pages: madvise - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v - Scaling Governor: amd-pstate ondemand (Boost: Enabled) - CPU Microcode: 0x8608103 - OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04) - Python 3.10.6 - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
a b c Result Overview Phoronix Test Suite 100% 101% 103% 104% 106% DaCapo Benchmark Intel Open Image Denoise Apache Spark TPC-H oneDNN Embree C-Blosc Java SciMark CloverLeaf Redis 7.0.12 + memtier_benchmark Timed Gem5 Compilation Cpuminer-Opt OpenVKL easyWave Xmrig Blender VVenC PyTorch SVT-AV1 Timed FFmpeg Compilation OpenRadioss Apache Cassandra ScyllaDB libavif avifenc FFmpeg Stress-NG NCNN BRL-CAD OpenVINO Neural Magic DeepSparse QuantLib WebP2 Image Encode Timed GCC Compilation OSPRay Studio
fhe stress-ng: Hash stress-ng: MMAP stress-ng: NUMA stress-ng: Pipe stress-ng: Poll stress-ng: Zlib stress-ng: Futex stress-ng: MEMFD stress-ng: Mutex stress-ng: Atomic stress-ng: Crypto stress-ng: Malloc stress-ng: Cloning stress-ng: Forking stress-ng: Pthread stress-ng: AVL Tree stress-ng: IO_uring stress-ng: SENDFILE stress-ng: CPU Cache stress-ng: CPU Stress stress-ng: Semaphores stress-ng: Matrix Math stress-ng: Vector Math stress-ng: AVX-512 VNNI stress-ng: Function Call stress-ng: x86_64 RdRand stress-ng: Floating Point stress-ng: Matrix 3D Math stress-ng: Memory Copying stress-ng: Vector Shuffle stress-ng: Mixed Scheduler stress-ng: Socket Activity stress-ng: Wide Vector Math stress-ng: Context Switching stress-ng: Fused Multiply-Add stress-ng: Vector Floating Point stress-ng: Glibc C String Functions stress-ng: Glibc Qsort Data Sorting stress-ng: System V Message Passing dacapobench: Jython dacapobench: Eclipse dacapobench: GraphChi dacapobench: Tradesoap dacapobench: Tradebeans dacapobench: Spring Boot dacapobench: Apache Kafka dacapobench: Apache Tomcat dacapobench: jMonkeyEngine dacapobench: Apache Cassandra dacapobench: Apache Xalan XSLT dacapobench: Batik SVG Toolkit dacapobench: H2 Database Engine dacapobench: FOP Print Formatter easywave: e2Asean Grid + BengkuluSept2007 Source - 1200 dacapobench: PMD Source Code Analyzer dacapobench: Apache Lucene Search Index easywave: e2Asean Grid + BengkuluSept2007 Source - 240 dacapobench: Apache Lucene Search Engine dacapobench: Avrora AVR Simulation Framework dacapobench: BioJava Biological Data Framework dacapobench: Zxing 1D/2D Barcode Image Processing dacapobench: H2O In-Memory Platform For Machine Learning java-scimark2: Composite java-scimark2: Monte Carlo java-scimark2: Fast Fourier Transform java-scimark2: Sparse Matrix Multiply java-scimark2: Dense LU Matrix Factorization java-scimark2: Jacobi Successive Over-Relaxation brl-cad: VGR Performance Metric webp2: Default webp2: Quality 75, Compression Effort 7 webp2: Quality 95, Compression Effort 7 webp2: Quality 100, Compression Effort 5 webp2: Quality 100, Lossless Compression xmrig: KawPow - 1M xmrig: Monero - 1M xmrig: Wownero - 1M xmrig: GhostRider - 1M xmrig: CryptoNight-Heavy - 1M xmrig: CryptoNight-Femto UPX2 - 1M quantlib: Multi-Threaded quantlib: Single-Threaded openradioss: Bumper Beam openradioss: Chrysler Neon 1M openradioss: Cell Phone Drop Test openradioss: Bird Strike on Windshield openradioss: Rubber O-Ring Seal Installation openradioss: INIVOL and Fluid Structure Interaction Drop Container cloverleaf: clover_bm cloverleaf: clover_bm64_short deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Synchronous Single-Stream deepsparse: ResNet-50, Baseline - Synchronous Single-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Synchronous Single-Stream deepsparse: ResNet-50, Sparse INT8 - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream pytorch: CPU - 1 - ResNet-50 pytorch: CPU - 1 - ResNet-152 pytorch: CPU - 16 - ResNet-50 pytorch: CPU - 32 - ResNet-50 pytorch: CPU - 64 - ResNet-50 pytorch: CPU - 16 - ResNet-152 pytorch: CPU - 32 - ResNet-152 pytorch: CPU - 64 - ResNet-152 pytorch: CPU - 1 - Efficientnet_v2_l pytorch: CPU - 16 - Efficientnet_v2_l pytorch: CPU - 32 - Efficientnet_v2_l pytorch: CPU - 64 - Efficientnet_v2_l ncnn: CPU - mobilenet ncnn: CPU-v2-v2 - mobilenet-v2 ncnn: CPU-v3-v3 - mobilenet-v3 ncnn: CPU - shufflenet-v2 ncnn: CPU - mnasnet ncnn: CPU - efficientnet-b0 ncnn: CPU - blazeface ncnn: CPU - googlenet ncnn: CPU - vgg16 ncnn: CPU - resnet18 ncnn: CPU - alexnet ncnn: CPU - resnet50 ncnn: CPU - yolov4-tiny ncnn: CPU - squeezenet_ssd ncnn: CPU - regnety_400m ncnn: CPU - vision_transformer ncnn: CPU - FastestDet ncnn: Vulkan GPU - mobilenet ncnn: Vulkan GPU-v2-v2 - mobilenet-v2 ncnn: Vulkan GPU-v3-v3 - mobilenet-v3 ncnn: Vulkan GPU - shufflenet-v2 ncnn: Vulkan GPU - mnasnet ncnn: Vulkan GPU - efficientnet-b0 ncnn: Vulkan GPU - blazeface ncnn: Vulkan GPU - googlenet ncnn: Vulkan GPU - vgg16 ncnn: Vulkan GPU - resnet18 ncnn: Vulkan GPU - alexnet ncnn: Vulkan GPU - resnet50 ncnn: Vulkan GPU - yolov4-tiny ncnn: Vulkan GPU - squeezenet_ssd ncnn: Vulkan GPU - regnety_400m ncnn: Vulkan GPU - vision_transformer ncnn: Vulkan GPU - FastestDet onednn: IP Shapes 1D - f32 - CPU onednn: IP Shapes 3D - f32 - CPU onednn: Convolution Batch Shapes Auto - f32 - CPU onednn: Deconvolution Batch shapes_1d - f32 - CPU onednn: Deconvolution Batch shapes_3d - f32 - CPU onednn: Recurrent Neural Network Training - f32 - CPU onednn: Recurrent Neural Network Inference - f32 - CPU openvino: Face Detection FP16 - CPU openvino: Face Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP32 - CPU openvino: Person Detection FP32 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Face Detection Retail FP16 - CPU openvino: Face Detection Retail FP16 - CPU openvino: Road Segmentation ADAS FP16 - CPU openvino: Road Segmentation ADAS FP16 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Face Detection Retail FP16-INT8 - CPU openvino: Face Detection Retail FP16-INT8 - CPU openvino: Road Segmentation ADAS FP16-INT8 - CPU openvino: Road Segmentation ADAS FP16-INT8 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Handwritten English Recognition FP16 - CPU openvino: Handwritten English Recognition FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Handwritten English Recognition FP16-INT8 - CPU openvino: Handwritten English Recognition FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU build-ffmpeg: Time To Compile cpuminer-opt: Magi cpuminer-opt: scrypt cpuminer-opt: Deepcoin cpuminer-opt: Ringcoin cpuminer-opt: Blake-2 S cpuminer-opt: Garlicoin cpuminer-opt: Skeincoin cpuminer-opt: Myriad-Groestl cpuminer-opt: LBC, LBRY Credits cpuminer-opt: Quad SHA-256, Pyrite cpuminer-opt: Triple SHA-256, Onecoin build-gcc: Time To Compile svt-av1: Preset 4 - Bosphorus 4K svt-av1: Preset 8 - Bosphorus 4K svt-av1: Preset 12 - Bosphorus 4K svt-av1: Preset 13 - Bosphorus 4K svt-av1: Preset 4 - Bosphorus 1080p svt-av1: Preset 8 - Bosphorus 1080p svt-av1: Preset 12 - Bosphorus 1080p svt-av1: Preset 13 - Bosphorus 1080p blender: BMW27 - CPU-Only blender: Classroom - CPU-Only blender: Fishy Cat - CPU-Only blender: Barbershop - CPU-Only blender: Pabellon Barcelona - CPU-Only ffmpeg: libx264 - Live ffmpeg: libx265 - Live ffmpeg: libx264 - Upload ffmpeg: libx265 - Upload ffmpeg: libx264 - Platform ffmpeg: libx265 - Platform ffmpeg: libx264 - Video On Demand ffmpeg: libx265 - Video On Demand vvenc: Bosphorus 4K - Fast vvenc: Bosphorus 4K - Faster vvenc: Bosphorus 1080p - Fast vvenc: Bosphorus 1080p - Faster avifenc: 0 avifenc: 2 avifenc: 6 avifenc: 6, Lossless avifenc: 10, Lossless embree: Pathtracer - Crown embree: Pathtracer ISPC - Crown embree: Pathtracer - Asian Dragon embree: Pathtracer - Asian Dragon Obj embree: Pathtracer ISPC - Asian Dragon embree: Pathtracer ISPC - Asian Dragon Obj oidn: RT.hdr_alb_nrm.3840x2160 - CPU-Only oidn: RT.ldr_alb_nrm.3840x2160 - CPU-Only oidn: RTLightmap.hdr.4096x4096 - CPU-Only openvkl: vklBenchmarkCPU ISPC openvkl: vklBenchmarkCPU Scalar ospray-studio: 1 - 4K - 1 - Path Tracer - CPU ospray-studio: 2 - 4K - 1 - Path Tracer - CPU ospray-studio: 3 - 4K - 1 - Path Tracer - CPU ospray-studio: 1 - 4K - 16 - Path Tracer - CPU ospray-studio: 1 - 4K - 32 - Path Tracer - CPU ospray-studio: 2 - 4K - 16 - Path Tracer - CPU ospray-studio: 2 - 4K - 32 - Path Tracer - CPU ospray-studio: 3 - 4K - 16 - Path Tracer - CPU ospray-studio: 3 - 4K - 32 - Path Tracer - CPU ospray-studio: 1 - 1080p - 1 - Path Tracer - CPU ospray-studio: 2 - 1080p - 1 - Path Tracer - CPU ospray-studio: 3 - 1080p - 1 - Path Tracer - CPU ospray-studio: 1 - 1080p - 16 - Path Tracer - CPU ospray-studio: 1 - 1080p - 32 - Path Tracer - CPU ospray-studio: 2 - 1080p - 16 - Path Tracer - CPU ospray-studio: 2 - 1080p - 32 - Path Tracer - CPU ospray-studio: 3 - 1080p - 16 - Path Tracer - CPU ospray-studio: 3 - 1080p - 32 - Path Tracer - CPU build-gem5: Time To Compile blosc: blosclz shuffle - 8MB blosc: blosclz noshuffle - 8MB blosc: blosclz bitshuffle - 8MB spark-tpch: 1 - Geometric Mean Of All Queries scylladb: Writes memtier-benchmark: Redis - 50 - 1:5 memtier-benchmark: Redis - 100 - 1:5 memtier-benchmark: Redis - 50 - 1:10 memtier-benchmark: Redis - 100 - 1:10 cassandra: Writes spark-tpch: 1 - Q01 spark-tpch: 1 - Q02 spark-tpch: 1 - Q03 spark-tpch: 1 - Q04 spark-tpch: 1 - Q05 spark-tpch: 1 - Q06 spark-tpch: 1 - Q07 spark-tpch: 1 - Q08 spark-tpch: 1 - Q09 spark-tpch: 1 - Q10 spark-tpch: 1 - Q11 spark-tpch: 1 - Q12 spark-tpch: 1 - Q13 spark-tpch: 1 - Q14 spark-tpch: 1 - Q15 spark-tpch: 1 - Q16 spark-tpch: 1 - Q17 spark-tpch: 1 - Q18 spark-tpch: 1 - Q19 spark-tpch: 1 - Q20 spark-tpch: 1 - Q21 spark-tpch: 1 - Q22 a b c 1369137.05 108.29 107.83 3369398.6 699249.96 714.26 2056701.46 260.26 2874216.79 459 13443.74 2857973.82 805.43 24389.05 87982.48 36.2 154334.52 97345.17 1567966.45 13802.28 12140043.03 31801.4 38176.43 239957.63 4081.42 6663.75 1929.23 655.73 1903.46 4032.3 5998.56 4105.68 246414.84 2044079.46 5581744.4 16476.3 5307606.72 150.19 6906619.23 6988 14465 4536 11662 14871 4080 6090 11039 6958 9689 1471 1656 5448 1056 337.427 3180 5266 19.019 4925 14953 7853 2241 5898 1725.88 1535.01 236.1 2151.26 3008.93 1698.07 82277 4.21 0.07 0.03 2.06 0.01 1705.2 1706.3 2532.3 248.8 1711.3 1708.3 17332.3 2767.1 343.31 2140.54 203.69 563.89 457.85 1282.01 189.24 357.59 4.5966 652.2958 4.4318 225.628 105.8397 28.3037 89.2555 11.1954 52.7471 56.8275 47.8925 20.8692 322.1667 9.2883 226.5659 4.4022 25.7166 116.539 23.5221 42.4995 5.756 518.3032 5.1875 192.7602 52.1174 57.5055 47.764 20.9258 25.7292 116.5721 23.8339 41.9473 41.7532 71.7605 38.4017 26.0312 4.8211 616.5973 4.9083 203.7219 53.1914 56.3421 40.0862 24.9352 4.5918 649.7001 4.4527 224.5697 20.28 8.78 12.06 11.87 11.10 5.19 5.24 5.20 5.51 3.49 3.52 3.47 21.69 5.91 4.59 4.34 4.56 8.56 1.43 16.22 71.21 11.89 9.18 28.6 32.39 15.39 9.56 152.52 5.01 22.07 5.84 4.6 4.31 4.51 8.54 1.44 16.33 70.93 11.74 9.19 28.38 32.34 15.4 9.52 152.59 5.1 8.88666 10.6078 22.0491 12.0652 11.2806 5852.6 3525.96 1.46 2709.93 15.26 261.73 15.03 265.83 109.51 36.48 1.72 2292.52 383.62 10.39 24.46 163.39 142.7 28 139.15 28.72 451.52 8.84 70.91 56.36 17.64 226.5 192.2 31.2 195.63 20.42 49.86 120.25 3262.61 1.81 52.13 115.05 4919.46 1.19 89.934 160.75 54.02 1836.39 809.38 34010 699.76 8334.91 2685.02 3717.99 12550 17960 1914.169 1.992 15.795 55.755 57.997 7.407 48.995 210.845 262.016 274.06 741.28 352.65 2872.4 926.75 166.56 59.71 10.99 12.45 40.87 25.65 40.93 25.62 2.326 4.944 7.895 17.196 253.543 116.484 12.422 18.163 7.764 6.2661 5.5717 7.1256 6.8798 7.0973 6.1524 0.23 0.23 0.11 103 56 34796 35203 39763 455472 907066 465057 922564 537531 1070452 7141 7187 8355 119816 232873 121353 236620 140298 274754 1036.06 7200.8 5092.9 7005.6 2.40013873 48462 1762546.44 1749374.05 1746685.71 1767574.95 43060 4.986022 2.69581914 3.14664507 2.95742679 3.34390712 1.32001436 2.90585136 2.58961225 4.39579391 2.77244282 1.40807271 2.25927687 1.70204818 1.66763389 1.74446428 1.4960829 3.62711215 4.20644236 1.56050658 2.41372919 9.0730505 1.30299354 1364579.5 107.12 110.72 3503830.27 698234.26 717.65 2147335.67 258.68 2878214.99 459.14 13458.67 2856309.44 800.44 24288.06 88920.64 36.13 154554.5 99090.8 1533029.06 13473.64 11087929.7 31958.65 37936.85 239843.62 4065.44 6665.26 1936.01 652.4 1882.01 4031.9 6043.57 4096.52 240673.83 2025241.43 5561152.6 16206.61 5411002.92 150.52 6910876.85 6838 14540 4492 11474 14598 4069 6108 11070 6962 9610 1296 1577 5525 1040 337.548 3456 5113 18.968 4937 15033 7820 2343 6000 1727.95 1593.68 251.73 2117.34 2980.35 1696.66 82118 4.20 0.07 0.03 2.07 0.01 1726.1 1726.7 2513.9 249 1724 1725.2 17273.4 2769.2 345.17 2140.08 203.23 563.74 457.97 1293.89 187.02 354.08 4.5404 655.2988 4.4256 225.9486 105.2121 28.4701 89.1832 11.2042 52.2794 57.3565 47.5982 20.9979 318.7207 9.3884 224.9789 4.4328 25.5056 117.5916 23.3994 42.722 5.7259 521.4212 5.1603 193.7736 52.2735 57.3613 47.8651 20.8808 26.13 114.7826 23.8804 41.866 41.8304 71.684 38.5047 25.9614 4.8971 611.9244 4.9127 203.5386 53.1658 56.3774 40.0255 24.9725 4.5746 653.1471 4.4513 224.6436 20.72 8.82 11.91 11.80 11.67 5.24 5.20 5.24 5.51 3.47 3.48 3.52 21.66 5.9 4.62 4.5 4.49 8.44 1.42 16.34 71.5 11.8 9.11 28.5 32 15.28 9.59 151.15 5.02 21.52 5.98 4.68 4.34 4.53 8.56 1.44 16.29 71.16 11.94 9.18 28.65 32.25 15.27 9.63 150.73 5.11 8.88002 11.8178 22.1566 11.9343 11.4455 5868.63 3516.72 1.46 2709.89 15.23 262.39 15.2 262.86 109.56 36.46 1.73 2273.39 384.37 10.37 24.47 163.32 142.37 28.07 138.63 28.82 452.11 8.83 70.38 56.78 17.57 227.42 193.34 31.01 195.6 20.42 49.72 120.56 3271.38 1.8 52.91 113.32 4992.2 1.18 90.466 160.98 54.3 1838.98 857.49 34100 698.16 8349.03 2827.68 3697.25 12550 17950 1919.534 1.999 15.854 55.865 58.334 7.466 49.296 212.371 265.763 273.88 740.03 353.84 2875.21 926.69 165.23 59.30 10.95 12.44 40.82 25.64 40.83 25.64 2.307 4.981 7.793 17.163 252.633 117.326 12.378 18.107 7.761 6.1701 5.6423 7.5922 6.8673 7.1178 6.1496 0.23 0.23 0.12 104 55 34945 35162 39772 457540 910737 464319 923928 539773 1069418 7122 7200 8351 119397 233029 121476 236145 139878 274126 1044.611 7309.5 5107.4 6985 2.43833726 48572 1729895.56 1719244.18 1734539.88 1762578.3 42968 5.20342541 2.76885891 3.49644971 2.87153316 3.21631765 1.39413798 2.90909719 2.55267692 4.36701632 2.79987454 1.28839672 2.20338345 1.56997681 1.66164672 1.77919817 1.51768637 3.619349 4.26704741 1.77942574 2.54700851 9.54779816 1.53510463 1366996.53 107.46 108.96 3501532.41 700345.61 718.75 2102668.91 257.48 2889633.43 458.91 13364.44 2858628.37 799.56 24216.78 89239.52 36.6 155512.74 98736.21 1539578.22 13732.88 11469661.56 32000.11 38241.24 240648.24 4082.69 6664.61 1949.12 650.2 1897.68 4036.74 6149.71 4108.67 241097.03 2036534.78 5947777.71 16263.83 5317094 150.56 6891049.07 6895 14716 4472 12316 14898 4034 6097 11060 6959 9550 1305 1603 5563 985 337.546 3327 5225 19.314 5314 14948 7857 2255 6103 1708.61 1530.09 247.29 2122.28 2945.78 1697.6 82304 4.24 0.07 0.03 2.07 0.01 1725.8 1724.1 2564.5 248.4 1723.8 1724.4 17326.6 2767.4 345.57 2137.08 199.58 556.31 455.42 1274.1 188.40 354.29 4.5764 653.4607 4.4369 225.3706 104.8813 28.5625 89.4012 11.1773 52.6821 56.9204 47.9004 20.8656 320.0604 9.3491 225.1522 4.4295 25.4802 117.6689 23.5907 42.3759 5.7625 516.6462 5.1904 192.6513 52.0818 57.5181 47.8078 20.9066 25.6922 116.6805 23.8414 41.934 41.8782 71.5417 38.467 25.9868 4.9405 606.3819 4.9075 203.7566 53.2844 56.2353 40.0783 24.9399 4.595 649.5683 4.4543 224.4901 20.82 8.75 11.89 11.93 11.84 5.20 5.25 5.23 5.47 3.49 3.50 3.48 21.73 5.85 4.65 4.46 4.51 8.48 1.47 16.37 71.2 11.81 9.17 28.54 32.21 15.49 9.63 150.53 5.12 21.87 5.98 4.69 4.34 4.43 8.92 1.44 16.51 71.27 11.82 9.21 28.73 32.31 15.09 9.54 150.67 5.05 8.97203 11.9401 22.0731 11.7012 11.4127 5872.67 3510.03 1.47 2705.55 15.23 262.13 15.14 264.07 109.18 36.59 1.72 2301.99 384.06 10.38 24.41 163.78 143.25 27.9 136.89 29.2 454.14 8.79 71.78 55.67 17.56 227.68 193.23 31.03 195.76 20.41 50.23 119.36 3265.89 1.81 53.07 113 4942.11 1.19 90.046 161.21 54.67 1841.83 815.83 34180 702.2 8361.85 2704.56 3704.54 12530 17950 1917.264 2.007 15.862 55.863 58.087 7.47 49.198 213.505 258.675 273.54 737.06 350.61 2871.44 925.22 165.15 60.13 10.96 12.52 40.87 25.75 40.98 25.79 2.328 5.007 7.909 17.131 251.242 117.281 12.397 18.015 7.686 6.2381 5.6536 7.6511 6.8562 7.1822 6.1881 0.23 0.23 0.12 104 56 34885 35157 39774 456087 905206 463290 921412 537704 1070341 7091 7178 8335 119077 233165 120927 235545 140057 273323 1035.725 7320.8 5119.1 7147 2.38360943 48293 1743565.71 1735403.81 1763427.67 1775067.44 42804 5.06900406 3.07445645 3.21106935 2.76678443 3.14652157 1.39161527 2.90456581 2.65772152 4.18974257 2.88941693 1.30776501 2.18155408 1.50164175 1.67524362 1.769907 1.44492042 3.5092802 4.27269268 1.62940145 2.36286831 8.8638401 1.50756407 OpenBenchmarking.org
easyWave The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. EasyWave supports making use of OpenMP for CPU multi-threading and there are also GPU ports available but not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better easyWave r34 Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 a b c 70 140 210 280 350 337.43 337.55 337.55 1. (CXX) g++ options: -O3 -fopenmp
easyWave The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. EasyWave supports making use of OpenMP for CPU multi-threading and there are also GPU ports available but not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better easyWave r34 Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 a b c 5 10 15 20 25 19.02 18.97 19.31 1. (CXX) g++ options: -O3 -fopenmp
Java SciMark This test runs the Java version of SciMark 2, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This benchmark is made up of Fast Foruier Transform, Jacobi Successive Over-relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Mflops, More Is Better Java SciMark 2.2 Computational Test: Composite a b c 400 800 1200 1600 2000 1725.88 1727.95 1708.61
BRL-CAD BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.36 VGR Performance Metric a b c 20K 40K 60K 80K 100K 82277 82118 82304 1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
WebP2 Image Encode This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficienct lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MP/s, More Is Better WebP2 Image Encode 20220823 Encode Settings: Default a b c 0.954 1.908 2.862 3.816 4.77 4.21 4.20 4.24 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
Xmrig Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is setup to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: KawPow - Hash Count: 1M a b c 400 800 1200 1600 2000 1705.2 1726.1 1725.8 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: Monero - Hash Count: 1M a b c 400 800 1200 1600 2000 1706.3 1726.7 1724.1 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: Wownero - Hash Count: 1M a b c 600 1200 1800 2400 3000 2532.3 2513.9 2564.5 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: GhostRider - Hash Count: 1M a b c 50 100 150 200 250 248.8 249.0 248.4 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: CryptoNight-Heavy - Hash Count: 1M a b c 400 800 1200 1600 2000 1711.3 1724.0 1723.8 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: CryptoNight-Femto UPX2 - Hash Count: 1M a b c 400 800 1200 1600 2000 1708.3 1725.2 1724.4 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
QuantLib QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MFLOPS, More Is Better QuantLib 1.32 Configuration: Multi-Threaded a b c 4K 8K 12K 16K 20K 17332.3 17273.4 17326.6 1. (CXX) g++ options: -O3 -march=native -fPIE -pie
OpenBenchmarking.org MFLOPS, More Is Better QuantLib 1.32 Configuration: Single-Threaded a b c 600 1200 1800 2400 3000 2767.1 2769.2 2767.4 1. (CXX) g++ options: -O3 -march=native -fPIE -pie
OpenRadioss OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis OpenRadioss is based on Altair Radioss and open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2023.09.15 Model: Bumper Beam a b c 80 160 240 320 400 343.31 345.17 345.57
OpenBenchmarking.org Seconds, Fewer Is Better CloverLeaf 1.3 Input: clover_bm64_short a b c 80 160 240 320 400 357.59 354.08 354.29 1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp
PyTorch OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 1 - Model: ResNet-50 a b c 5 10 15 20 25 20.28 20.72 20.82 MIN: 17 / MAX: 22.8 MIN: 16.36 / MAX: 23.54 MIN: 17.2 / MAX: 23.53
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 1 - Model: ResNet-152 a b c 2 4 6 8 10 8.78 8.82 8.75 MIN: 6.78 / MAX: 9.95 MIN: 7.02 / MAX: 10.05 MIN: 7.08 / MAX: 9.99
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 16 - Model: ResNet-50 a b c 3 6 9 12 15 12.06 11.91 11.89 MIN: 9.84 / MAX: 13.85 MIN: 10.01 / MAX: 13.53 MIN: 10.04 / MAX: 13.47
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 32 - Model: ResNet-50 a b c 3 6 9 12 15 11.87 11.80 11.93 MIN: 9.95 / MAX: 13.33 MIN: 9.72 / MAX: 13.59 MIN: 9.9 / MAX: 13.75
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 64 - Model: ResNet-50 a b c 3 6 9 12 15 11.10 11.67 11.84 MIN: 9.27 / MAX: 12.44 MIN: 9.24 / MAX: 13.18 MIN: 9.81 / MAX: 13.3
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 16 - Model: ResNet-152 a b c 1.179 2.358 3.537 4.716 5.895 5.19 5.24 5.20 MIN: 4.28 / MAX: 5.82 MIN: 4.23 / MAX: 5.83 MIN: 4.2 / MAX: 5.88
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 32 - Model: ResNet-152 a b c 1.1813 2.3626 3.5439 4.7252 5.9065 5.24 5.20 5.25 MIN: 4.14 / MAX: 5.89 MIN: 4.3 / MAX: 5.83 MIN: 4.31 / MAX: 5.89
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 64 - Model: ResNet-152 a b c 1.179 2.358 3.537 4.716 5.895 5.20 5.24 5.23 MIN: 4.15 / MAX: 5.77 MIN: 4.28 / MAX: 5.84 MIN: 4.22 / MAX: 5.85
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l a b c 1.2398 2.4796 3.7194 4.9592 6.199 5.51 5.51 5.47 MIN: 4.63 / MAX: 5.95 MIN: 4.61 / MAX: 6.06 MIN: 4.63 / MAX: 6.05
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l a b c 0.7853 1.5706 2.3559 3.1412 3.9265 3.49 3.47 3.49 MIN: 2.86 / MAX: 3.79 MIN: 2.91 / MAX: 3.79 MIN: 2.92 / MAX: 3.88
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l a b c 0.792 1.584 2.376 3.168 3.96 3.52 3.48 3.50 MIN: 2.9 / MAX: 3.86 MIN: 2.9 / MAX: 3.78 MIN: 2.92 / MAX: 3.8
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l a b c 0.792 1.584 2.376 3.168 3.96 3.47 3.52 3.48 MIN: 2.92 / MAX: 3.91 MIN: 2.89 / MAX: 3.88 MIN: 2.92 / MAX: 3.82
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU-v2-v2 - Model: mobilenet-v2 a b c 1.3298 2.6596 3.9894 5.3192 6.649 5.91 5.90 5.85 MIN: 5.61 / MAX: 13.79 MIN: 5.59 / MAX: 12.01 MIN: 5.62 / MAX: 8.37 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU-v3-v3 - Model: mobilenet-v3 a b c 1.0463 2.0926 3.1389 4.1852 5.2315 4.59 4.62 4.65 MIN: 4.44 / MAX: 6.6 MIN: 4.46 / MAX: 6.86 MIN: 4.41 / MAX: 8.93 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: shufflenet-v2 a b c 1.0125 2.025 3.0375 4.05 5.0625 4.34 4.50 4.46 MIN: 4.18 / MAX: 7.26 MIN: 4.23 / MAX: 32.01 MIN: 4.17 / MAX: 7.15 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: mnasnet a b c 1.026 2.052 3.078 4.104 5.13 4.56 4.49 4.51 MIN: 4.31 / MAX: 10.03 MIN: 4.35 / MAX: 6.6 MIN: 4.31 / MAX: 6.83 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: efficientnet-b0 a b c 2 4 6 8 10 8.56 8.44 8.48 MIN: 8.24 / MAX: 15.56 MIN: 8.16 / MAX: 11.8 MIN: 8.19 / MAX: 11.92 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: blazeface a b c 0.3308 0.6616 0.9924 1.3232 1.654 1.43 1.42 1.47 MIN: 1.38 / MAX: 1.65 MIN: 1.38 / MAX: 2.05 MIN: 1.4 / MAX: 2.58 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: googlenet a b c 4 8 12 16 20 16.22 16.34 16.37 MIN: 15.74 / MAX: 23.62 MIN: 15.74 / MAX: 25.53 MIN: 15.93 / MAX: 21.8 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: vgg16 a b c 16 32 48 64 80 71.21 71.50 71.20 MIN: 69.71 / MAX: 119.51 MIN: 69.77 / MAX: 94.55 MIN: 69.67 / MAX: 81.34 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: resnet18 a b c 3 6 9 12 15 11.89 11.80 11.81 MIN: 11.52 / MAX: 18.09 MIN: 11.48 / MAX: 20.39 MIN: 11.46 / MAX: 17.5 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: alexnet a b c 3 6 9 12 15 9.18 9.11 9.17 MIN: 8.91 / MAX: 16.93 MIN: 8.86 / MAX: 18.25 MIN: 8.9 / MAX: 16.02 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: resnet50 a b c 7 14 21 28 35 28.60 28.50 28.54 MIN: 27.3 / MAX: 36.9 MIN: 26.91 / MAX: 36.74 MIN: 26.96 / MAX: 39.01 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: yolov4-tiny a b c 8 16 24 32 40 32.39 32.00 32.21 MIN: 31.6 / MAX: 38.22 MIN: 31.34 / MAX: 41.92 MIN: 31.5 / MAX: 42.17 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: squeezenet_ssd a b c 4 8 12 16 20 15.39 15.28 15.49 MIN: 14.89 / MAX: 31.68 MIN: 14.79 / MAX: 22.91 MIN: 14.8 / MAX: 34.89 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: regnety_400m a b c 3 6 9 12 15 9.56 9.59 9.63 MIN: 9.28 / MAX: 16.82 MIN: 9.28 / MAX: 14.12 MIN: 9.32 / MAX: 12.95 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: vision_transformer a b c 30 60 90 120 150 152.52 151.15 150.53 MIN: 149.07 / MAX: 186.85 MIN: 147.31 / MAX: 195.77 MIN: 147.14 / MAX: 193.7 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: FastestDet a b c 1.152 2.304 3.456 4.608 5.76 5.01 5.02 5.12 MIN: 4.86 / MAX: 7.02 MIN: 4.86 / MAX: 6.77 MIN: 5.01 / MAX: 7.19 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: mobilenet a b c 5 10 15 20 25 22.07 21.52 21.87 MIN: 21.39 / MAX: 54.54 MIN: 20.87 / MAX: 32.66 MIN: 20.95 / MAX: 68.72 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 a b c 1.3455 2.691 4.0365 5.382 6.7275 5.84 5.98 5.98 MIN: 5.6 / MAX: 8.12 MIN: 5.65 / MAX: 12.76 MIN: 5.6 / MAX: 12.26 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 a b c 1.0553 2.1106 3.1659 4.2212 5.2765 4.60 4.68 4.69 MIN: 4.41 / MAX: 10.59 MIN: 4.47 / MAX: 17.72 MIN: 4.44 / MAX: 18.17 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: shufflenet-v2 a b c 0.9765 1.953 2.9295 3.906 4.8825 4.31 4.34 4.34 MIN: 4.18 / MAX: 6.35 MIN: 4.19 / MAX: 11.27 MIN: 4.17 / MAX: 9.88 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: mnasnet a b c 1.0193 2.0386 3.0579 4.0772 5.0965 4.51 4.53 4.43 MIN: 4.3 / MAX: 12.69 MIN: 4.34 / MAX: 6.59 MIN: 4.27 / MAX: 6.68 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: efficientnet-b0 a b c 2 4 6 8 10 8.54 8.56 8.92 MIN: 8.18 / MAX: 14.64 MIN: 8.27 / MAX: 16.23 MIN: 8.23 / MAX: 96.07 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: blazeface a b c 0.324 0.648 0.972 1.296 1.62 1.44 1.44 1.44 MIN: 1.4 / MAX: 1.6 MIN: 1.39 / MAX: 1.55 MIN: 1.38 / MAX: 1.55 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: googlenet a b c 4 8 12 16 20 16.33 16.29 16.51 MIN: 15.7 / MAX: 24.12 MIN: 15.71 / MAX: 22.39 MIN: 15.79 / MAX: 22.62 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: vgg16 a b c 16 32 48 64 80 70.93 71.16 71.27 MIN: 69.58 / MAX: 79.76 MIN: 69.68 / MAX: 111.48 MIN: 69.92 / MAX: 80.28 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: resnet18 a b c 3 6 9 12 15 11.74 11.94 11.82 MIN: 11.45 / MAX: 18.54 MIN: 11.32 / MAX: 60.89 MIN: 11.46 / MAX: 18.3 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: alexnet a b c 3 6 9 12 15 9.19 9.18 9.21 MIN: 8.87 / MAX: 12.08 MIN: 8.82 / MAX: 16.24 MIN: 8.84 / MAX: 17.28 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: resnet50 a b c 7 14 21 28 35 28.38 28.65 28.73 MIN: 27.04 / MAX: 35.23 MIN: 26.91 / MAX: 36.74 MIN: 26.98 / MAX: 77.04 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: yolov4-tiny a b c 8 16 24 32 40 32.34 32.25 32.31 MIN: 31.72 / MAX: 37.86 MIN: 31.59 / MAX: 40.48 MIN: 31.47 / MAX: 40.92 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: squeezenet_ssd a b c 4 8 12 16 20 15.40 15.27 15.09 MIN: 14.87 / MAX: 28.18 MIN: 14.83 / MAX: 21.07 MIN: 14.67 / MAX: 21.1 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: regnety_400m a b c 3 6 9 12 15 9.52 9.63 9.54 MIN: 9.26 / MAX: 15.11 MIN: 9.28 / MAX: 14.8 MIN: 9.27 / MAX: 13.76 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: vision_transformer a b c 30 60 90 120 150 152.59 150.73 150.67 MIN: 148.63 / MAX: 239.62 MIN: 147.59 / MAX: 166.06 MIN: 147.18 / MAX: 169.34 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: FastestDet a b c 1.1498 2.2996 3.4494 4.5992 5.749 5.10 5.11 5.05 MIN: 4.99 / MAX: 6.93 MIN: 4.94 / MAX: 6.99 MIN: 4.93 / MAX: 7.03 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
oneDNN This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU a b c 3 6 9 12 15 8.88666 8.88002 8.97203 MIN: 8.48 MIN: 8.5 MIN: 8.58 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU a b c 3 6 9 12 15 10.61 11.82 11.94 MIN: 10.34 MIN: 11.43 MIN: 11.64 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU a b c 5 10 15 20 25 22.05 22.16 22.07 MIN: 21.6 MIN: 21.68 MIN: 21.59 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU a b c 3 6 9 12 15 12.07 11.93 11.70 MIN: 7.42 MIN: 7.67 MIN: 7.48 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU a b c 3 6 9 12 15 11.28 11.45 11.41 MIN: 10.65 MIN: 10.89 MIN: 10.79 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU a b c 1300 2600 3900 5200 6500 5852.60 5868.63 5872.67 MIN: 5725.89 MIN: 5739.43 MIN: 5729.99 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU a b c 800 1600 2400 3200 4000 3525.96 3516.72 3510.03 MIN: 3431.46 MIN: 3421.71 MIN: 3418.07 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenVINO This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection FP16 - Device: CPU a b c 0.3308 0.6616 0.9924 1.3232 1.654 1.46 1.46 1.47 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Face Detection FP16 - Device: CPU a b c 600 1200 1800 2400 3000 2709.93 2709.89 2705.55 MIN: 2079.4 / MAX: 2803.64 MIN: 2071.72 / MAX: 2802.51 MIN: 2050.1 / MAX: 2797.44 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Person Detection FP16 - Device: CPU a b c 4 8 12 16 20 15.26 15.23 15.23 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Person Detection FP16 - Device: CPU a b c 60 120 180 240 300 261.73 262.39 262.13 MIN: 211.82 / MAX: 297.88 MIN: 143.56 / MAX: 301.51 MIN: 196.09 / MAX: 303.22 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Person Detection FP32 - Device: CPU a b c 4 8 12 16 20 15.03 15.20 15.14 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Person Detection FP32 - Device: CPU a b c 60 120 180 240 300 265.83 262.86 264.07 MIN: 154.12 / MAX: 298.88 MIN: 232.22 / MAX: 299.62 MIN: 228.94 / MAX: 302.08 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Vehicle Detection FP16 - Device: CPU a b c 20 40 60 80 100 109.51 109.56 109.18 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Vehicle Detection FP16 - Device: CPU a b c 8 16 24 32 40 36.48 36.46 36.59 MIN: 22.64 / MAX: 65.82 MIN: 25.47 / MAX: 56.25 MIN: 18.14 / MAX: 60.02 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection FP16-INT8 - Device: CPU a b c 0.3893 0.7786 1.1679 1.5572 1.9465 1.72 1.73 1.72 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Face Detection FP16-INT8 - Device: CPU a b c 500 1000 1500 2000 2500 2292.52 2273.39 2301.99 MIN: 1831.17 / MAX: 2445.01 MIN: 1955.16 / MAX: 2430.88 MIN: 1851.05 / MAX: 2439.19 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection Retail FP16 - Device: CPU a b c 80 160 240 320 400 383.62 384.37 384.06 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Face Detection Retail FP16 - Device: CPU a b c 3 6 9 12 15 10.39 10.37 10.38 MIN: 5.19 / MAX: 22.54 MIN: 6.89 / MAX: 21.86 MIN: 5.14 / MAX: 21.27 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16 - Device: CPU a b c 6 12 18 24 30 24.46 24.47 24.41 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16 - Device: CPU a b c 40 80 120 160 200 163.39 163.32 163.78 MIN: 98.97 / MAX: 221.05 MIN: 76.65 / MAX: 208.06 MIN: 128.38 / MAX: 215.47 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU a b c 30 60 90 120 150 142.70 142.37 143.25 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU a b c 7 14 21 28 35 28.00 28.07 27.90 MIN: 17.95 / MAX: 41.21 MIN: 15.75 / MAX: 46.54 MIN: 20.57 / MAX: 43.21 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16 - Device: CPU a b c 30 60 90 120 150 139.15 138.63 136.89 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16 - Device: CPU a b c 7 14 21 28 35 28.72 28.82 29.20 MIN: 24.22 / MAX: 46.15 MIN: 24.57 / MAX: 45.98 MIN: 20.8 / MAX: 91.44 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection Retail FP16-INT8 - Device: CPU a b c 100 200 300 400 500 451.52 452.11 454.14 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Face Detection Retail FP16-INT8 - Device: CPU a b c 2 4 6 8 10 8.84 8.83 8.79 MIN: 6.3 / MAX: 17.28 MIN: 6 / MAX: 16.62 MIN: 5.75 / MAX: 20.16 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16-INT8 - Device: CPU a b c 16 32 48 64 80 70.91 70.38 71.78 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16-INT8 - Device: CPU a b c 13 26 39 52 65 56.36 56.78 55.67 MIN: 44.65 / MAX: 78.53 MIN: 33.46 / MAX: 74.61 MIN: 47.49 / MAX: 82.79 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU a b c 4 8 12 16 20 17.64 17.57 17.56 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU a b c 50 100 150 200 250 226.50 227.42 227.68 MIN: 142.3 / MAX: 257.79 MIN: 134.56 / MAX: 364.33 MIN: 164.95 / MAX: 261.54 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU a b c 40 80 120 160 200 192.20 193.34 193.23 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU a b c 7 14 21 28 35 31.20 31.01 31.03 MIN: 23.75 / MAX: 44.87 MIN: 24.54 / MAX: 43.5 MIN: 21.89 / MAX: 48.55 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU a b c 40 80 120 160 200 195.63 195.60 195.76 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU a b c 5 10 15 20 25 20.42 20.42 20.41 MIN: 12.43 / MAX: 70.22 MIN: 12.75 / MAX: 44.83 MIN: 15.23 / MAX: 35.17 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16 - Device: CPU a b c 11 22 33 44 55 49.86 49.72 50.23 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16 - Device: CPU a b c 30 60 90 120 150 120.25 120.56 119.36 MIN: 79.3 / MAX: 149.21 MIN: 91.75 / MAX: 152.9 MIN: 89.26 / MAX: 148.18 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b c 700 1400 2100 2800 3500 3262.61 3271.38 3265.89 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU a b c 0.4073 0.8146 1.2219 1.6292 2.0365 1.81 1.80 1.81 MIN: 1 / MAX: 10.22 MIN: 1.01 / MAX: 19.26 MIN: 0.98 / MAX: 11.34 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16-INT8 - Device: CPU a b c 12 24 36 48 60 52.13 52.91 53.07 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16-INT8 - Device: CPU a b c 30 60 90 120 150 115.05 113.32 113.00 MIN: 66.21 / MAX: 156.66 MIN: 87.68 / MAX: 144.49 MIN: 81.78 / MAX: 141.49 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b c 1100 2200 3300 4400 5500 4919.46 4992.20 4942.11 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU a b c 0.2678 0.5356 0.8034 1.0712 1.339 1.19 1.18 1.19 MIN: 0.55 / MAX: 10.47 MIN: 0.57 / MAX: 15.06 MIN: 0.57 / MAX: 13.65 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
Cpuminer-Opt Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 23.5 Algorithm: Magi a b c 40 80 120 160 200 160.75 160.98 161.21 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 23.5 Algorithm: Deepcoin a b c 400 800 1200 1600 2000 1836.39 1838.98 1841.83 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 23.5 Algorithm: Myriad-Groestl a b c 600 1200 1800 2400 3000 2685.02 2827.68 2704.56 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 23.5 Algorithm: LBC, LBRY Credits a b c 800 1600 2400 3200 4000 3717.99 3697.25 3704.54 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 23.5 Algorithm: Quad SHA-256, Pyrite a b c 3K 6K 9K 12K 15K 12550 12550 12530 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 23.5 Algorithm: Triple SHA-256, Onecoin a b c 4K 8K 12K 16K 20K 17960 17950 17950 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 4 - Input: Bosphorus 4K a b c 0.4516 0.9032 1.3548 1.8064 2.258 1.992 1.999 2.007 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 8 - Input: Bosphorus 4K a b c 4 8 12 16 20 15.80 15.85 15.86 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 12 - Input: Bosphorus 4K a b c 13 26 39 52 65 55.76 55.87 55.86 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 13 - Input: Bosphorus 4K a b c 13 26 39 52 65 58.00 58.33 58.09 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 4 - Input: Bosphorus 1080p a b c 2 4 6 8 10 7.407 7.466 7.470 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 8 - Input: Bosphorus 1080p a b c 11 22 33 44 55 49.00 49.30 49.20 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 12 - Input: Bosphorus 1080p a b c 50 100 150 200 250 210.85 212.37 213.51 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 13 - Input: Bosphorus 1080p a b c 60 120 180 240 300 262.02 265.76 258.68 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Blender Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Blender 4.0 Blend File: BMW27 - Compute: CPU-Only a b c 60 120 180 240 300 274.06 273.88 273.54
FFmpeg This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx264 - Scenario: Live a b c 40 80 120 160 200 166.56 165.23 165.15 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx265 - Scenario: Live a b c 13 26 39 52 65 59.71 59.30 60.13 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx264 - Scenario: Upload a b c 3 6 9 12 15 10.99 10.95 10.96 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx265 - Scenario: Upload a b c 3 6 9 12 15 12.45 12.44 12.52 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx264 - Scenario: Platform a b c 9 18 27 36 45 40.87 40.82 40.87 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx265 - Scenario: Platform a b c 6 12 18 24 30 25.65 25.64 25.75 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx264 - Scenario: Video On Demand a b c 9 18 27 36 45 40.93 40.83 40.98 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx265 - Scenario: Video On Demand a b c 6 12 18 24 30 25.62 25.64 25.79 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
VVenC VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 4K - Video Preset: Fast a b c 0.5238 1.0476 1.5714 2.0952 2.619 2.326 2.307 2.328 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 4K - Video Preset: Faster a b c 1.1266 2.2532 3.3798 4.5064 5.633 4.944 4.981 5.007 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 1080p - Video Preset: Fast a b c 2 4 6 8 10 7.895 7.793 7.909 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 1080p - Video Preset: Faster a b c 4 8 12 16 20 17.20 17.16 17.13 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
Embree Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Crown a b c 2 4 6 8 10 6.2661 6.1701 6.2381 MIN: 6.22 / MAX: 6.37 MIN: 6.13 / MAX: 6.27 MIN: 6.19 / MAX: 6.33
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Crown a b c 1.2721 2.5442 3.8163 5.0884 6.3605 5.5717 5.6423 5.6536 MIN: 5.54 / MAX: 5.63 MIN: 5.6 / MAX: 5.73 MIN: 5.61 / MAX: 5.73
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Asian Dragon a b c 2 4 6 8 10 7.1256 7.5922 7.6511 MIN: 7.08 / MAX: 7.28 MIN: 7.54 / MAX: 7.76 MIN: 7.6 / MAX: 7.84
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Asian Dragon Obj a b c 2 4 6 8 10 6.8798 6.8673 6.8562 MIN: 6.83 / MAX: 7.02 MIN: 6.82 / MAX: 7.03 MIN: 6.81 / MAX: 7
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Asian Dragon a b c 2 4 6 8 10 7.0973 7.1178 7.1822 MIN: 7.05 / MAX: 7.25 MIN: 7.07 / MAX: 7.27 MIN: 7.13 / MAX: 7.34
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Asian Dragon Obj a b c 2 4 6 8 10 6.1524 6.1496 6.1881 MIN: 6.11 / MAX: 6.3 MIN: 6.11 / MAX: 6.28 MIN: 6.15 / MAX: 6.31
OpenBenchmarking.org Items / Sec, More Is Better OpenVKL 2.0.0 Benchmark: vklBenchmarkCPU Scalar a b c 13 26 39 52 65 56 55 56 MIN: 4 / MAX: 1095 MIN: 4 / MAX: 1098 MIN: 4 / MAX: 1096
OSPRay Studio Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU a b c 7K 14K 21K 28K 35K 34796 34945 34885
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU a b c 100K 200K 300K 400K 500K 455472 457540 456087
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU a b c 200K 400K 600K 800K 1000K 907066 910737 905206
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU a b c 100K 200K 300K 400K 500K 465057 464319 463290
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU a b c 200K 400K 600K 800K 1000K 922564 923928 921412
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU a b c 120K 240K 360K 480K 600K 537531 539773 537704
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU a b c 200K 400K 600K 800K 1000K 1070452 1069418 1070341
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU a b c 30K 60K 90K 120K 150K 119816 119397 119077
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU a b c 50K 100K 150K 200K 250K 232873 233029 233165
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU a b c 30K 60K 90K 120K 150K 121353 121476 120927
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU a b c 50K 100K 150K 200K 250K 236620 236145 235545
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU a b c 30K 60K 90K 120K 150K 140298 139878 140057
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU a b c 60K 120K 180K 240K 300K 274754 274126 273323
OpenBenchmarking.org MB/s, More Is Better C-Blosc 2.11 Test: blosclz noshuffle - Buffer Size: 8MB a b c 1100 2200 3300 4400 5500 5092.9 5107.4 5119.1 1. (CC) gcc options: -std=gnu99 -O3 -ldl -lrt -lm
OpenBenchmarking.org MB/s, More Is Better C-Blosc 2.11 Test: blosclz bitshuffle - Buffer Size: 8MB a b c 1500 3000 4500 6000 7500 7005.6 6985.0 7147.0 1. (CC) gcc options: -std=gnu99 -O3 -ldl -lrt -lm
Apache Spark TPC-H This is a benchmark of Apache Spark using TPC-H data-set. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks the Apache Spark in a single-system configuration using spark-submit. The test makes use of https://github.com/ssavvides/tpch-spark/ for facilitating the TPC-H benchmark. Learn more via the OpenBenchmarking.org test page.
DuckDB DuckDB is an in-progress SQL OLAP database management system optimized for analytics and features a vectorized and parallel engine. Learn more via the OpenBenchmarking.org test page.
Benchmark: IMDB
a: The test run did not produce a result.
b: The test run did not produce a result.
c: The test run did not produce a result.
Benchmark: TPC-H Parquet
a: The test run did not produce a result.
b: The test run did not produce a result.
c: The test run did not produce a result.
ScyllaDB This is a benchmark of ScyllaDB and is making use of Apache Cassandra's cassandra-stress for conducting the benchmark. ScyllaDB is an open-source distributed NoSQL data store that is compatible with Apache Cassandra while focusing on higher throughput and lower latency. ScyllaDB uses a sharded design on each node. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Op/s, More Is Better ScyllaDB 5.2.9 Test: Writes a b c 10K 20K 30K 40K 50K 48462 48572 48293
a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: amd-pstate ondemand (Boost: Enabled) - CPU Microcode: 0x8608103Java Notes: OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04)Python Notes: Python 3.10.6Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 16 December 2023 15:07 by user phoronix.
b Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: amd-pstate ondemand (Boost: Enabled) - CPU Microcode: 0x8608103Java Notes: OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04)Python Notes: Python 3.10.6Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 17 December 2023 05:15 by user phoronix.
c Processor: AMD Ryzen 5 5500U @ 4.06GHz (6 Cores / 12 Threads), Motherboard: NB01 NL5xNU (1.07.11RTR1 BIOS), Chipset: AMD Renoir/Cezanne, Memory: 16GB, Disk: Samsung SSD 970 EVO Plus 500GB, Graphics: AMD Lucienne 512MB (1800/400MHz), Audio: AMD Renoir Radeon HD Audio, Network: Realtek RTL8111/8168/8411 + Intel Wi-Fi 6 AX200
OS: Tuxedo 22.04, Kernel: 6.0.0-1010-oem (x86_64), Desktop: KDE Plasma 5.26.5, Display Server: X Server 1.21.1.3, OpenGL: 4.6 Mesa 22.3.7 (LLVM 14.0.0 DRM 3.48), Vulkan: 1.3.230, Compiler: GCC 11.3.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: amd-pstate ondemand (Boost: Enabled) - CPU Microcode: 0x8608103Java Notes: OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04)Python Notes: Python 3.10.6Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 17 December 2023 19:21 by user phoronix.