AMD Ryzen 5 5500U testing with a NB01 NL5xNU (1.07.11RTR1 BIOS) and AMD Lucienne 512MB on Tuxedo 22.04 via the Phoronix Test Suite.
a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: amd-pstate ondemand (Boost: Enabled) - CPU Microcode: 0x8608103Java Notes: OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04)Python Notes: Python 3.10.6Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
b c Processor: AMD Ryzen 5 5500U @ 4.06GHz (6 Cores / 12 Threads), Motherboard: NB01 NL5xNU (1.07.11RTR1 BIOS), Chipset: AMD Renoir/Cezanne, Memory: 16GB, Disk: Samsung SSD 970 EVO Plus 500GB, Graphics: AMD Lucienne 512MB (1800/400MHz), Audio: AMD Renoir Radeon HD Audio, Network: Realtek RTL8111/8168/8411 + Intel Wi-Fi 6 AX200
OS: Tuxedo 22.04, Kernel: 6.0.0-1010-oem (x86_64), Desktop: KDE Plasma 5.26.5, Display Server: X Server 1.21.1.3, OpenGL: 4.6 Mesa 22.3.7 (LLVM 14.0.0 DRM 3.48), Vulkan: 1.3.230, Compiler: GCC 11.3.0, File-System: ext4, Screen Resolution: 1920x1080
fhe OpenBenchmarking.org Phoronix Test Suite AMD Ryzen 5 5500U @ 4.06GHz (6 Cores / 12 Threads) NB01 NL5xNU (1.07.11RTR1 BIOS) AMD Renoir/Cezanne 16GB Samsung SSD 970 EVO Plus 500GB AMD Lucienne 512MB (1800/400MHz) AMD Renoir Radeon HD Audio Realtek RTL8111/8168/8411 + Intel Wi-Fi 6 AX200 Tuxedo 22.04 6.0.0-1010-oem (x86_64) KDE Plasma 5.26.5 X Server 1.21.1.3 4.6 Mesa 22.3.7 (LLVM 14.0.0 DRM 3.48) 1.3.230 GCC 11.3.0 ext4 1920x1080 Processor Motherboard Chipset Memory Disk Graphics Audio Network OS Kernel Desktop Display Server OpenGL Vulkan Compiler File-System Screen Resolution Fhe Benchmarks System Logs - Transparent Huge Pages: madvise - --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v - Scaling Governor: amd-pstate ondemand (Boost: Enabled) - CPU Microcode: 0x8608103 - OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04) - Python 3.10.6 - itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
a b c Result Overview Phoronix Test Suite 100% 101% 103% 104% 106% DaCapo Benchmark Intel Open Image Denoise Apache Spark TPC-H oneDNN Embree C-Blosc Java SciMark CloverLeaf Redis 7.0.12 + memtier_benchmark Timed Gem5 Compilation Cpuminer-Opt OpenVKL easyWave Xmrig Blender VVenC PyTorch SVT-AV1 Timed FFmpeg Compilation OpenRadioss Apache Cassandra ScyllaDB libavif avifenc FFmpeg Stress-NG NCNN BRL-CAD OpenVINO Neural Magic DeepSparse QuantLib OSPRay Studio Timed GCC Compilation WebP2 Image Encode
fhe cassandra: Writes spark-tpch: 1 - Geometric Mean Of All Queries spark-tpch: 1 - Q01 spark-tpch: 1 - Q02 spark-tpch: 1 - Q03 spark-tpch: 1 - Q04 spark-tpch: 1 - Q05 spark-tpch: 1 - Q06 spark-tpch: 1 - Q07 spark-tpch: 1 - Q08 spark-tpch: 1 - Q09 spark-tpch: 1 - Q10 spark-tpch: 1 - Q11 spark-tpch: 1 - Q12 spark-tpch: 1 - Q13 spark-tpch: 1 - Q14 spark-tpch: 1 - Q15 spark-tpch: 1 - Q16 spark-tpch: 1 - Q17 spark-tpch: 1 - Q18 spark-tpch: 1 - Q19 spark-tpch: 1 - Q20 spark-tpch: 1 - Q21 spark-tpch: 1 - Q22 blender: BMW27 - CPU-Only blender: Classroom - CPU-Only blender: Fishy Cat - CPU-Only blender: Barbershop - CPU-Only blender: Pabellon Barcelona - CPU-Only brl-cad: VGR Performance Metric blosc: blosclz shuffle - 8MB blosc: blosclz noshuffle - 8MB blosc: blosclz bitshuffle - 8MB cloverleaf: clover_bm cloverleaf: clover_bm64_short cpuminer-opt: Magi cpuminer-opt: scrypt cpuminer-opt: Deepcoin cpuminer-opt: Ringcoin cpuminer-opt: Blake-2 S cpuminer-opt: Garlicoin cpuminer-opt: Skeincoin cpuminer-opt: Myriad-Groestl cpuminer-opt: LBC, LBRY Credits cpuminer-opt: Quad SHA-256, Pyrite cpuminer-opt: Triple SHA-256, Onecoin dacapobench: Jython dacapobench: Eclipse dacapobench: GraphChi dacapobench: Tradesoap dacapobench: Tradebeans dacapobench: Spring Boot dacapobench: Apache Kafka dacapobench: Apache Tomcat dacapobench: jMonkeyEngine dacapobench: Apache Cassandra dacapobench: Apache Xalan XSLT dacapobench: Batik SVG Toolkit dacapobench: H2 Database Engine dacapobench: FOP Print Formatter dacapobench: PMD Source Code Analyzer dacapobench: Apache Lucene Search Index dacapobench: Apache Lucene Search Engine dacapobench: Avrora AVR Simulation Framework dacapobench: BioJava Biological Data Framework dacapobench: Zxing 1D/2D Barcode Image Processing dacapobench: H2O In-Memory Platform For Machine Learning easywave: e2Asean Grid + BengkuluSept2007 Source - 240 easywave: e2Asean Grid + BengkuluSept2007 Source - 1200 embree: Pathtracer - Crown embree: Pathtracer ISPC - Crown embree: Pathtracer - Asian Dragon embree: Pathtracer - Asian Dragon Obj embree: Pathtracer ISPC - Asian Dragon embree: Pathtracer ISPC - Asian Dragon Obj ffmpeg: libx264 - Live ffmpeg: libx265 - Live ffmpeg: libx264 - Upload ffmpeg: libx265 - Upload ffmpeg: libx264 - Platform ffmpeg: libx265 - Platform ffmpeg: libx264 - Video On Demand ffmpeg: libx265 - Video On Demand oidn: RT.hdr_alb_nrm.3840x2160 - CPU-Only oidn: RT.ldr_alb_nrm.3840x2160 - CPU-Only oidn: RTLightmap.hdr.4096x4096 - CPU-Only java-scimark2: Composite java-scimark2: Monte Carlo java-scimark2: Fast Fourier Transform java-scimark2: Sparse Matrix Multiply java-scimark2: Dense LU Matrix Factorization java-scimark2: Jacobi Successive Over-Relaxation avifenc: 0 avifenc: 2 avifenc: 6 avifenc: 6, Lossless avifenc: 10, Lossless ncnn: CPU - mobilenet ncnn: CPU-v2-v2 - mobilenet-v2 ncnn: CPU-v3-v3 - mobilenet-v3 ncnn: CPU - shufflenet-v2 ncnn: CPU - mnasnet ncnn: CPU - efficientnet-b0 ncnn: CPU - blazeface ncnn: CPU - googlenet ncnn: CPU - vgg16 ncnn: CPU - resnet18 ncnn: CPU - alexnet ncnn: CPU - resnet50 ncnn: CPU - yolov4-tiny ncnn: CPU - squeezenet_ssd ncnn: CPU - regnety_400m ncnn: CPU - vision_transformer ncnn: CPU - FastestDet ncnn: Vulkan GPU - mobilenet ncnn: Vulkan GPU-v2-v2 - mobilenet-v2 ncnn: Vulkan GPU-v3-v3 - mobilenet-v3 ncnn: Vulkan GPU - shufflenet-v2 ncnn: Vulkan GPU - mnasnet ncnn: Vulkan GPU - efficientnet-b0 ncnn: Vulkan GPU - blazeface ncnn: Vulkan GPU - googlenet ncnn: Vulkan GPU - vgg16 ncnn: Vulkan GPU - resnet18 ncnn: Vulkan GPU - alexnet ncnn: Vulkan GPU - resnet50 ncnn: Vulkan GPU - yolov4-tiny ncnn: Vulkan GPU - squeezenet_ssd ncnn: Vulkan GPU - regnety_400m ncnn: Vulkan GPU - vision_transformer ncnn: Vulkan GPU - FastestDet deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Asynchronous Multi-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Document Classification, oBERT base uncased on IMDB - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Asynchronous Multi-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream deepsparse: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Synchronous Single-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Asynchronous Multi-Stream deepsparse: ResNet-50, Baseline - Synchronous Single-Stream deepsparse: ResNet-50, Baseline - Synchronous Single-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Asynchronous Multi-Stream deepsparse: ResNet-50, Sparse INT8 - Synchronous Single-Stream deepsparse: ResNet-50, Sparse INT8 - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Asynchronous Multi-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: CV Classification, ResNet-50 ImageNet - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Asynchronous Multi-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream deepsparse: CV Detection, YOLOv5s COCO, Sparse INT8 - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Asynchronous Multi-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: NLP Text Classification, DistilBERT mnli - Synchronous Single-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Asynchronous Multi-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream deepsparse: CV Segmentation, 90% Pruned YOLACT Pruned - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Asynchronous Multi-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream deepsparse: BERT-Large, NLP Question Answering, Sparse INT8 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Asynchronous Multi-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream deepsparse: NLP Token Classification, BERT base uncased conll2003 - Synchronous Single-Stream onednn: IP Shapes 1D - f32 - CPU onednn: IP Shapes 3D - f32 - CPU onednn: Convolution Batch Shapes Auto - f32 - CPU onednn: Deconvolution Batch shapes_1d - f32 - CPU onednn: Deconvolution Batch shapes_3d - f32 - CPU onednn: Recurrent Neural Network Training - f32 - CPU onednn: Recurrent Neural Network Inference - f32 - CPU openradioss: Bumper Beam openradioss: Chrysler Neon 1M openradioss: Cell Phone Drop Test openradioss: Bird Strike on Windshield openradioss: Rubber O-Ring Seal Installation openradioss: INIVOL and Fluid Structure Interaction Drop Container openvino: Face Detection FP16 - CPU openvino: Face Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP16 - CPU openvino: Person Detection FP32 - CPU openvino: Person Detection FP32 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Vehicle Detection FP16 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Face Detection FP16-INT8 - CPU openvino: Face Detection Retail FP16 - CPU openvino: Face Detection Retail FP16 - CPU openvino: Road Segmentation ADAS FP16 - CPU openvino: Road Segmentation ADAS FP16 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Vehicle Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Weld Porosity Detection FP16 - CPU openvino: Face Detection Retail FP16-INT8 - CPU openvino: Face Detection Retail FP16-INT8 - CPU openvino: Road Segmentation ADAS FP16-INT8 - CPU openvino: Road Segmentation ADAS FP16-INT8 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Machine Translation EN To DE FP16 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Weld Porosity Detection FP16-INT8 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Person Vehicle Bike Detection FP16 - CPU openvino: Handwritten English Recognition FP16 - CPU openvino: Handwritten English Recognition FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Age Gender Recognition Retail 0013 FP16 - CPU openvino: Handwritten English Recognition FP16-INT8 - CPU openvino: Handwritten English Recognition FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU openvino: Age Gender Recognition Retail 0013 FP16-INT8 - CPU openvkl: vklBenchmarkCPU ISPC openvkl: vklBenchmarkCPU Scalar ospray-studio: 1 - 4K - 1 - Path Tracer - CPU ospray-studio: 2 - 4K - 1 - Path Tracer - CPU ospray-studio: 3 - 4K - 1 - Path Tracer - CPU ospray-studio: 1 - 4K - 16 - Path Tracer - CPU ospray-studio: 1 - 4K - 32 - Path Tracer - CPU ospray-studio: 2 - 4K - 16 - Path Tracer - CPU ospray-studio: 2 - 4K - 32 - Path Tracer - CPU ospray-studio: 3 - 4K - 16 - Path Tracer - CPU ospray-studio: 3 - 4K - 32 - Path Tracer - CPU ospray-studio: 1 - 1080p - 1 - Path Tracer - CPU ospray-studio: 2 - 1080p - 1 - Path Tracer - CPU ospray-studio: 3 - 1080p - 1 - Path Tracer - CPU ospray-studio: 1 - 1080p - 16 - Path Tracer - CPU ospray-studio: 1 - 1080p - 32 - Path Tracer - CPU ospray-studio: 2 - 1080p - 16 - Path Tracer - CPU ospray-studio: 2 - 1080p - 32 - Path Tracer - CPU ospray-studio: 3 - 1080p - 16 - Path Tracer - CPU ospray-studio: 3 - 1080p - 32 - Path Tracer - CPU pytorch: CPU - 1 - ResNet-50 pytorch: CPU - 1 - ResNet-152 pytorch: CPU - 16 - ResNet-50 pytorch: CPU - 32 - ResNet-50 pytorch: CPU - 64 - ResNet-50 pytorch: CPU - 16 - ResNet-152 pytorch: CPU - 32 - ResNet-152 pytorch: CPU - 64 - ResNet-152 pytorch: CPU - 1 - Efficientnet_v2_l pytorch: CPU - 16 - Efficientnet_v2_l pytorch: CPU - 32 - Efficientnet_v2_l pytorch: CPU - 64 - Efficientnet_v2_l quantlib: Multi-Threaded quantlib: Single-Threaded memtier-benchmark: Redis - 50 - 1:5 memtier-benchmark: Redis - 100 - 1:5 memtier-benchmark: Redis - 50 - 1:10 memtier-benchmark: Redis - 100 - 1:10 scylladb: Writes stress-ng: Hash stress-ng: MMAP stress-ng: NUMA stress-ng: Pipe stress-ng: Poll stress-ng: Zlib stress-ng: Futex stress-ng: MEMFD stress-ng: Mutex stress-ng: Atomic stress-ng: Crypto stress-ng: Malloc stress-ng: Cloning stress-ng: Forking stress-ng: Pthread stress-ng: AVL Tree stress-ng: IO_uring stress-ng: SENDFILE stress-ng: CPU Cache stress-ng: CPU Stress stress-ng: Semaphores stress-ng: Matrix Math stress-ng: Vector Math stress-ng: AVX-512 VNNI stress-ng: Function Call stress-ng: x86_64 RdRand stress-ng: Floating Point stress-ng: Matrix 3D Math stress-ng: Memory Copying stress-ng: Vector Shuffle stress-ng: Mixed Scheduler stress-ng: Socket Activity stress-ng: Wide Vector Math stress-ng: Context Switching stress-ng: Fused Multiply-Add stress-ng: Vector Floating Point stress-ng: Glibc C String Functions stress-ng: Glibc Qsort Data Sorting stress-ng: System V Message Passing svt-av1: Preset 4 - Bosphorus 4K svt-av1: Preset 8 - Bosphorus 4K svt-av1: Preset 12 - Bosphorus 4K svt-av1: Preset 13 - Bosphorus 4K svt-av1: Preset 4 - Bosphorus 1080p svt-av1: Preset 8 - Bosphorus 1080p svt-av1: Preset 12 - Bosphorus 1080p svt-av1: Preset 13 - Bosphorus 1080p build-ffmpeg: Time To Compile build-gcc: Time To Compile build-gem5: Time To Compile vvenc: Bosphorus 4K - Fast vvenc: Bosphorus 4K - Faster vvenc: Bosphorus 1080p - Fast vvenc: Bosphorus 1080p - Faster webp2: Default webp2: Quality 75, Compression Effort 7 webp2: Quality 95, Compression Effort 7 webp2: Quality 100, Compression Effort 5 webp2: Quality 100, Lossless Compression xmrig: KawPow - 1M xmrig: Monero - 1M xmrig: Wownero - 1M xmrig: GhostRider - 1M xmrig: CryptoNight-Heavy - 1M xmrig: CryptoNight-Femto UPX2 - 1M a b c 43060 2.40013873 4.986022 2.69581914 3.14664507 2.95742679 3.34390712 1.32001436 2.90585136 2.58961225 4.39579391 2.77244282 1.40807271 2.25927687 1.70204818 1.66763389 1.74446428 1.4960829 3.62711215 4.20644236 1.56050658 2.41372919 9.0730505 1.30299354 274.06 741.28 352.65 2872.4 926.75 82277 7200.8 5092.9 7005.6 189.24 357.59 160.75 54.02 1836.39 809.38 34010 699.76 8334.91 2685.02 3717.99 12550 17960 6988 14465 4536 11662 14871 4080 6090 11039 6958 9689 1471 1656 5448 1056 3180 5266 4925 14953 7853 2241 5898 19.019 337.427 6.2661 5.5717 7.1256 6.8798 7.0973 6.1524 166.56 59.71 10.99 12.45 40.87 25.65 40.93 25.62 0.23 0.23 0.11 1725.88 1535.01 236.1 2151.26 3008.93 1698.07 253.543 116.484 12.422 18.163 7.764 21.69 5.91 4.59 4.34 4.56 8.56 1.43 16.22 71.21 11.89 9.18 28.6 32.39 15.39 9.56 152.52 5.01 22.07 5.84 4.6 4.31 4.51 8.54 1.44 16.33 70.93 11.74 9.19 28.38 32.34 15.4 9.52 152.59 5.1 4.5966 652.2958 4.4318 225.628 105.8397 28.3037 89.2555 11.1954 52.7471 56.8275 47.8925 20.8692 322.1667 9.2883 226.5659 4.4022 25.7166 116.539 23.5221 42.4995 5.756 518.3032 5.1875 192.7602 52.1174 57.5055 47.764 20.9258 25.7292 116.5721 23.8339 41.9473 41.7532 71.7605 38.4017 26.0312 4.8211 616.5973 4.9083 203.7219 53.1914 56.3421 40.0862 24.9352 4.5918 649.7001 4.4527 224.5697 8.88666 10.6078 22.0491 12.0652 11.2806 5852.6 3525.96 343.31 2140.54 203.69 563.89 457.85 1282.01 1.46 2709.93 15.26 261.73 15.03 265.83 109.51 36.48 1.72 2292.52 383.62 10.39 24.46 163.39 142.7 28 139.15 28.72 451.52 8.84 70.91 56.36 17.64 226.5 192.2 31.2 195.63 20.42 49.86 120.25 3262.61 1.81 52.13 115.05 4919.46 1.19 103 56 34796 35203 39763 455472 907066 465057 922564 537531 1070452 7141 7187 8355 119816 232873 121353 236620 140298 274754 20.28 8.78 12.06 11.87 11.10 5.19 5.24 5.20 5.51 3.49 3.52 3.47 17332.3 2767.1 1762546.44 1749374.05 1746685.71 1767574.95 48462 1369137.05 108.29 107.83 3369398.6 699249.96 714.26 2056701.46 260.26 2874216.79 459 13443.74 2857973.82 805.43 24389.05 87982.48 36.2 154334.52 97345.17 1567966.45 13802.28 12140043.03 31801.4 38176.43 239957.63 4081.42 6663.75 1929.23 655.73 1903.46 4032.3 5998.56 4105.68 246414.84 2044079.46 5581744.4 16476.3 5307606.72 150.19 6906619.23 1.992 15.795 55.755 57.997 7.407 48.995 210.845 262.016 89.934 1914.169 1036.06 2.326 4.944 7.895 17.196 4.21 0.07 0.03 2.06 0.01 1705.2 1706.3 2532.3 248.8 1711.3 1708.3 42968 2.43833726 5.20342541 2.76885891 3.49644971 2.87153316 3.21631765 1.39413798 2.90909719 2.55267692 4.36701632 2.79987454 1.28839672 2.20338345 1.56997681 1.66164672 1.77919817 1.51768637 3.619349 4.26704741 1.77942574 2.54700851 9.54779816 1.53510463 273.88 740.03 353.84 2875.21 926.69 82118 7309.5 5107.4 6985 187.02 354.08 160.98 54.3 1838.98 857.49 34100 698.16 8349.03 2827.68 3697.25 12550 17950 6838 14540 4492 11474 14598 4069 6108 11070 6962 9610 1296 1577 5525 1040 3456 5113 4937 15033 7820 2343 6000 18.968 337.548 6.1701 5.6423 7.5922 6.8673 7.1178 6.1496 165.23 59.30 10.95 12.44 40.82 25.64 40.83 25.64 0.23 0.23 0.12 1727.95 1593.68 251.73 2117.34 2980.35 1696.66 252.633 117.326 12.378 18.107 7.761 21.66 5.9 4.62 4.5 4.49 8.44 1.42 16.34 71.5 11.8 9.11 28.5 32 15.28 9.59 151.15 5.02 21.52 5.98 4.68 4.34 4.53 8.56 1.44 16.29 71.16 11.94 9.18 28.65 32.25 15.27 9.63 150.73 5.11 4.5404 655.2988 4.4256 225.9486 105.2121 28.4701 89.1832 11.2042 52.2794 57.3565 47.5982 20.9979 318.7207 9.3884 224.9789 4.4328 25.5056 117.5916 23.3994 42.722 5.7259 521.4212 5.1603 193.7736 52.2735 57.3613 47.8651 20.8808 26.13 114.7826 23.8804 41.866 41.8304 71.684 38.5047 25.9614 4.8971 611.9244 4.9127 203.5386 53.1658 56.3774 40.0255 24.9725 4.5746 653.1471 4.4513 224.6436 8.88002 11.8178 22.1566 11.9343 11.4455 5868.63 3516.72 345.17 2140.08 203.23 563.74 457.97 1293.89 1.46 2709.89 15.23 262.39 15.2 262.86 109.56 36.46 1.73 2273.39 384.37 10.37 24.47 163.32 142.37 28.07 138.63 28.82 452.11 8.83 70.38 56.78 17.57 227.42 193.34 31.01 195.6 20.42 49.72 120.56 3271.38 1.8 52.91 113.32 4992.2 1.18 104 55 34945 35162 39772 457540 910737 464319 923928 539773 1069418 7122 7200 8351 119397 233029 121476 236145 139878 274126 20.72 8.82 11.91 11.80 11.67 5.24 5.20 5.24 5.51 3.47 3.48 3.52 17273.4 2769.2 1729895.56 1719244.18 1734539.88 1762578.3 48572 1364579.5 107.12 110.72 3503830.27 698234.26 717.65 2147335.67 258.68 2878214.99 459.14 13458.67 2856309.44 800.44 24288.06 88920.64 36.13 154554.5 99090.8 1533029.06 13473.64 11087929.7 31958.65 37936.85 239843.62 4065.44 6665.26 1936.01 652.4 1882.01 4031.9 6043.57 4096.52 240673.83 2025241.43 5561152.6 16206.61 5411002.92 150.52 6910876.85 1.999 15.854 55.865 58.334 7.466 49.296 212.371 265.763 90.466 1919.534 1044.611 2.307 4.981 7.793 17.163 4.20 0.07 0.03 2.07 0.01 1726.1 1726.7 2513.9 249 1724 1725.2 42804 2.38360943 5.06900406 3.07445645 3.21106935 2.76678443 3.14652157 1.39161527 2.90456581 2.65772152 4.18974257 2.88941693 1.30776501 2.18155408 1.50164175 1.67524362 1.769907 1.44492042 3.5092802 4.27269268 1.62940145 2.36286831 8.8638401 1.50756407 273.54 737.06 350.61 2871.44 925.22 82304 7320.8 5119.1 7147 188.40 354.29 161.21 54.67 1841.83 815.83 34180 702.2 8361.85 2704.56 3704.54 12530 17950 6895 14716 4472 12316 14898 4034 6097 11060 6959 9550 1305 1603 5563 985 3327 5225 5314 14948 7857 2255 6103 19.314 337.546 6.2381 5.6536 7.6511 6.8562 7.1822 6.1881 165.15 60.13 10.96 12.52 40.87 25.75 40.98 25.79 0.23 0.23 0.12 1708.61 1530.09 247.29 2122.28 2945.78 1697.6 251.242 117.281 12.397 18.015 7.686 21.73 5.85 4.65 4.46 4.51 8.48 1.47 16.37 71.2 11.81 9.17 28.54 32.21 15.49 9.63 150.53 5.12 21.87 5.98 4.69 4.34 4.43 8.92 1.44 16.51 71.27 11.82 9.21 28.73 32.31 15.09 9.54 150.67 5.05 4.5764 653.4607 4.4369 225.3706 104.8813 28.5625 89.4012 11.1773 52.6821 56.9204 47.9004 20.8656 320.0604 9.3491 225.1522 4.4295 25.4802 117.6689 23.5907 42.3759 5.7625 516.6462 5.1904 192.6513 52.0818 57.5181 47.8078 20.9066 25.6922 116.6805 23.8414 41.934 41.8782 71.5417 38.467 25.9868 4.9405 606.3819 4.9075 203.7566 53.2844 56.2353 40.0783 24.9399 4.595 649.5683 4.4543 224.4901 8.97203 11.9401 22.0731 11.7012 11.4127 5872.67 3510.03 345.57 2137.08 199.58 556.31 455.42 1274.1 1.47 2705.55 15.23 262.13 15.14 264.07 109.18 36.59 1.72 2301.99 384.06 10.38 24.41 163.78 143.25 27.9 136.89 29.2 454.14 8.79 71.78 55.67 17.56 227.68 193.23 31.03 195.76 20.41 50.23 119.36 3265.89 1.81 53.07 113 4942.11 1.19 104 56 34885 35157 39774 456087 905206 463290 921412 537704 1070341 7091 7178 8335 119077 233165 120927 235545 140057 273323 20.82 8.75 11.89 11.93 11.84 5.20 5.25 5.23 5.47 3.49 3.50 3.48 17326.6 2767.4 1743565.71 1735403.81 1763427.67 1775067.44 48293 1366996.53 107.46 108.96 3501532.41 700345.61 718.75 2102668.91 257.48 2889633.43 458.91 13364.44 2858628.37 799.56 24216.78 89239.52 36.6 155512.74 98736.21 1539578.22 13732.88 11469661.56 32000.11 38241.24 240648.24 4082.69 6664.61 1949.12 650.2 1897.68 4036.74 6149.71 4108.67 241097.03 2036534.78 5947777.71 16263.83 5317094 150.56 6891049.07 2.007 15.862 55.863 58.087 7.47 49.198 213.505 258.675 90.046 1917.264 1035.725 2.328 5.007 7.909 17.131 4.24 0.07 0.03 2.07 0.01 1725.8 1724.1 2564.5 248.4 1723.8 1724.4 OpenBenchmarking.org
Apache Spark TPC-H This is a benchmark of Apache Spark using TPC-H data-set. Apache Spark is an open-source unified analytics engine for large-scale data processing and dealing with big data. This test profile benchmarks the Apache Spark in a single-system configuration using spark-submit. The test makes use of https://github.com/ssavvides/tpch-spark/ for facilitating the TPC-H benchmark. Learn more via the OpenBenchmarking.org test page.
Blender Blender is an open-source 3D creation and modeling software project. This test is of Blender's Cycles performance with various sample files. GPU computing via NVIDIA OptiX and NVIDIA CUDA is currently supported as well as HIP for AMD Radeon GPUs and Intel oneAPI for Intel Graphics. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better Blender 4.0 Blend File: BMW27 - Compute: CPU-Only c b a 60 120 180 240 300 273.54 273.88 274.06
BRL-CAD BRL-CAD is a cross-platform, open-source solid modeling system with built-in benchmark mode. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org VGR Performance Metric, More Is Better BRL-CAD 7.36 VGR Performance Metric c a b 20K 40K 60K 80K 100K 82304 82277 82118 1. (CXX) g++ options: -std=c++14 -pipe -fvisibility=hidden -fno-strict-aliasing -fno-common -fexceptions -ftemplate-depth-128 -m64 -ggdb3 -O3 -fipa-pta -fstrength-reduce -finline-functions -flto -ltcl8.6 -lregex_brl -lz_brl -lnetpbm -ldl -lm -ltk8.6
OpenBenchmarking.org MB/s, More Is Better C-Blosc 2.11 Test: blosclz noshuffle - Buffer Size: 8MB c b a 1100 2200 3300 4400 5500 5119.1 5107.4 5092.9 1. (CC) gcc options: -std=gnu99 -O3 -ldl -lrt -lm
OpenBenchmarking.org MB/s, More Is Better C-Blosc 2.11 Test: blosclz bitshuffle - Buffer Size: 8MB c a b 1500 3000 4500 6000 7500 7147.0 7005.6 6985.0 1. (CC) gcc options: -std=gnu99 -O3 -ldl -lrt -lm
OpenBenchmarking.org Seconds, Fewer Is Better CloverLeaf 1.3 Input: clover_bm64_short b c a 80 160 240 320 400 354.08 354.29 357.59 1. (F9X) gfortran options: -O3 -march=native -funroll-loops -fopenmp
Cpuminer-Opt Cpuminer-Opt is a fork of cpuminer-multi that carries a wide range of CPU performance optimizations for measuring the potential cryptocurrency mining performance of the CPU/processor with a wide variety of cryptocurrencies. The benchmark reports the hash speed for the CPU mining performance for the selected cryptocurrency. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 23.5 Algorithm: Magi c b a 40 80 120 160 200 161.21 160.98 160.75 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 23.5 Algorithm: Deepcoin c b a 400 800 1200 1600 2000 1841.83 1838.98 1836.39 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 23.5 Algorithm: Myriad-Groestl b c a 600 1200 1800 2400 3000 2827.68 2704.56 2685.02 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 23.5 Algorithm: LBC, LBRY Credits a c b 800 1600 2400 3200 4000 3717.99 3704.54 3697.25 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 23.5 Algorithm: Quad SHA-256, Pyrite b a c 3K 6K 9K 12K 15K 12550 12550 12530 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
OpenBenchmarking.org kH/s, More Is Better Cpuminer-Opt 23.5 Algorithm: Triple SHA-256, Onecoin a c b 4K 8K 12K 16K 20K 17960 17950 17950 1. (CXX) g++ options: -O2 -lcurl -lz -lpthread -lssl -lcrypto -lgmp
DuckDB DuckDB is an in-progress SQL OLAP database management system optimized for analytics and features a vectorized and parallel engine. Learn more via the OpenBenchmarking.org test page.
Benchmark: IMDB
a: The test run did not produce a result.
b: The test run did not produce a result.
c: The test run did not produce a result.
Benchmark: TPC-H Parquet
a: The test run did not produce a result.
b: The test run did not produce a result.
c: The test run did not produce a result.
easyWave The easyWave software allows simulating tsunami generation and propagation in the context of early warning systems. EasyWave supports making use of OpenMP for CPU multi-threading and there are also GPU ports available but not currently incorporated as part of this test profile. The easyWave tsunami generation software is run with one of the example/reference input files for measuring the CPU execution time. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better easyWave r34 Input: e2Asean Grid + BengkuluSept2007 Source - Time: 240 b a c 5 10 15 20 25 18.97 19.02 19.31 1. (CXX) g++ options: -O3 -fopenmp
OpenBenchmarking.org Seconds, Fewer Is Better easyWave r34 Input: e2Asean Grid + BengkuluSept2007 Source - Time: 1200 a c b 70 140 210 280 350 337.43 337.55 337.55 1. (CXX) g++ options: -O3 -fopenmp
Embree Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Crown a c b 2 4 6 8 10 6.2661 6.2381 6.1701 MIN: 6.22 / MAX: 6.37 MIN: 6.19 / MAX: 6.33 MIN: 6.13 / MAX: 6.27
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Crown c b a 1.2721 2.5442 3.8163 5.0884 6.3605 5.6536 5.6423 5.5717 MIN: 5.61 / MAX: 5.73 MIN: 5.6 / MAX: 5.73 MIN: 5.54 / MAX: 5.63
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Asian Dragon c b a 2 4 6 8 10 7.6511 7.5922 7.1256 MIN: 7.6 / MAX: 7.84 MIN: 7.54 / MAX: 7.76 MIN: 7.08 / MAX: 7.28
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer - Model: Asian Dragon Obj a b c 2 4 6 8 10 6.8798 6.8673 6.8562 MIN: 6.83 / MAX: 7.02 MIN: 6.82 / MAX: 7.03 MIN: 6.81 / MAX: 7
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Asian Dragon c b a 2 4 6 8 10 7.1822 7.1178 7.0973 MIN: 7.13 / MAX: 7.34 MIN: 7.07 / MAX: 7.27 MIN: 7.05 / MAX: 7.25
OpenBenchmarking.org Frames Per Second, More Is Better Embree 4.3 Binary: Pathtracer ISPC - Model: Asian Dragon Obj c a b 2 4 6 8 10 6.1881 6.1524 6.1496 MIN: 6.15 / MAX: 6.31 MIN: 6.11 / MAX: 6.3 MIN: 6.11 / MAX: 6.28
FFmpeg This is a benchmark of the FFmpeg multimedia framework. The FFmpeg test profile is making use of a modified version of vbench from Columbia University's Architecture and Design Lab (ARCADE) [http://arcade.cs.columbia.edu/vbench/] that is a benchmark for video-as-a-service workloads. The test profile offers the options of a range of vbench scenarios based on freely distributable video content and offers the options of using the x264 or x265 video encoders for transcoding. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx264 - Scenario: Live a b c 40 80 120 160 200 166.56 165.23 165.15 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx265 - Scenario: Live c a b 13 26 39 52 65 60.13 59.71 59.30 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx264 - Scenario: Upload a c b 3 6 9 12 15 10.99 10.96 10.95 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx265 - Scenario: Upload c a b 3 6 9 12 15 12.52 12.45 12.44 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx264 - Scenario: Platform c a b 9 18 27 36 45 40.87 40.87 40.82 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx265 - Scenario: Platform c a b 6 12 18 24 30 25.75 25.65 25.64 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx264 - Scenario: Video On Demand c a b 9 18 27 36 45 40.98 40.93 40.83 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
OpenBenchmarking.org FPS, More Is Better FFmpeg 6.1 Encoder: libx265 - Scenario: Video On Demand c b a 6 12 18 24 30 25.79 25.64 25.62 1. (CXX) g++ options: -O3 -rdynamic -lpthread -lrt -ldl -lnuma
Java SciMark This test runs the Java version of SciMark 2, which is a benchmark for scientific and numerical computing developed by programmers at the National Institute of Standards and Technology. This benchmark is made up of Fast Foruier Transform, Jacobi Successive Over-relaxation, Monte Carlo, Sparse Matrix Multiply, and dense LU matrix factorization benchmarks. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Mflops, More Is Better Java SciMark 2.2 Computational Test: Composite b a c 400 800 1200 1600 2000 1727.95 1725.88 1708.61
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU-v2-v2 - Model: mobilenet-v2 c b a 1.3298 2.6596 3.9894 5.3192 6.649 5.85 5.90 5.91 MIN: 5.62 / MAX: 8.37 MIN: 5.59 / MAX: 12.01 MIN: 5.61 / MAX: 13.79 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU-v3-v3 - Model: mobilenet-v3 a b c 1.0463 2.0926 3.1389 4.1852 5.2315 4.59 4.62 4.65 MIN: 4.44 / MAX: 6.6 MIN: 4.46 / MAX: 6.86 MIN: 4.41 / MAX: 8.93 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: shufflenet-v2 a c b 1.0125 2.025 3.0375 4.05 5.0625 4.34 4.46 4.50 MIN: 4.18 / MAX: 7.26 MIN: 4.17 / MAX: 7.15 MIN: 4.23 / MAX: 32.01 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: mnasnet b c a 1.026 2.052 3.078 4.104 5.13 4.49 4.51 4.56 MIN: 4.35 / MAX: 6.6 MIN: 4.31 / MAX: 6.83 MIN: 4.31 / MAX: 10.03 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: efficientnet-b0 b c a 2 4 6 8 10 8.44 8.48 8.56 MIN: 8.16 / MAX: 11.8 MIN: 8.19 / MAX: 11.92 MIN: 8.24 / MAX: 15.56 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: blazeface b a c 0.3308 0.6616 0.9924 1.3232 1.654 1.42 1.43 1.47 MIN: 1.38 / MAX: 2.05 MIN: 1.38 / MAX: 1.65 MIN: 1.4 / MAX: 2.58 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: googlenet a b c 4 8 12 16 20 16.22 16.34 16.37 MIN: 15.74 / MAX: 23.62 MIN: 15.74 / MAX: 25.53 MIN: 15.93 / MAX: 21.8 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: vgg16 c a b 16 32 48 64 80 71.20 71.21 71.50 MIN: 69.67 / MAX: 81.34 MIN: 69.71 / MAX: 119.51 MIN: 69.77 / MAX: 94.55 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: resnet18 b c a 3 6 9 12 15 11.80 11.81 11.89 MIN: 11.48 / MAX: 20.39 MIN: 11.46 / MAX: 17.5 MIN: 11.52 / MAX: 18.09 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: alexnet b c a 3 6 9 12 15 9.11 9.17 9.18 MIN: 8.86 / MAX: 18.25 MIN: 8.9 / MAX: 16.02 MIN: 8.91 / MAX: 16.93 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: resnet50 b c a 7 14 21 28 35 28.50 28.54 28.60 MIN: 26.91 / MAX: 36.74 MIN: 26.96 / MAX: 39.01 MIN: 27.3 / MAX: 36.9 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: yolov4-tiny b c a 8 16 24 32 40 32.00 32.21 32.39 MIN: 31.34 / MAX: 41.92 MIN: 31.5 / MAX: 42.17 MIN: 31.6 / MAX: 38.22 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: squeezenet_ssd b a c 4 8 12 16 20 15.28 15.39 15.49 MIN: 14.79 / MAX: 22.91 MIN: 14.89 / MAX: 31.68 MIN: 14.8 / MAX: 34.89 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: regnety_400m a b c 3 6 9 12 15 9.56 9.59 9.63 MIN: 9.28 / MAX: 16.82 MIN: 9.28 / MAX: 14.12 MIN: 9.32 / MAX: 12.95 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: vision_transformer c b a 30 60 90 120 150 150.53 151.15 152.52 MIN: 147.14 / MAX: 193.7 MIN: 147.31 / MAX: 195.77 MIN: 149.07 / MAX: 186.85 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: CPU - Model: FastestDet a b c 1.152 2.304 3.456 4.608 5.76 5.01 5.02 5.12 MIN: 4.86 / MAX: 7.02 MIN: 4.86 / MAX: 6.77 MIN: 5.01 / MAX: 7.19 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: mobilenet b c a 5 10 15 20 25 21.52 21.87 22.07 MIN: 20.87 / MAX: 32.66 MIN: 20.95 / MAX: 68.72 MIN: 21.39 / MAX: 54.54 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2 a b c 1.3455 2.691 4.0365 5.382 6.7275 5.84 5.98 5.98 MIN: 5.6 / MAX: 8.12 MIN: 5.65 / MAX: 12.76 MIN: 5.6 / MAX: 12.26 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3 a b c 1.0553 2.1106 3.1659 4.2212 5.2765 4.60 4.68 4.69 MIN: 4.41 / MAX: 10.59 MIN: 4.47 / MAX: 17.72 MIN: 4.44 / MAX: 18.17 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: shufflenet-v2 a b c 0.9765 1.953 2.9295 3.906 4.8825 4.31 4.34 4.34 MIN: 4.18 / MAX: 6.35 MIN: 4.19 / MAX: 11.27 MIN: 4.17 / MAX: 9.88 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: mnasnet c a b 1.0193 2.0386 3.0579 4.0772 5.0965 4.43 4.51 4.53 MIN: 4.27 / MAX: 6.68 MIN: 4.3 / MAX: 12.69 MIN: 4.34 / MAX: 6.59 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: efficientnet-b0 a b c 2 4 6 8 10 8.54 8.56 8.92 MIN: 8.18 / MAX: 14.64 MIN: 8.27 / MAX: 16.23 MIN: 8.23 / MAX: 96.07 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: blazeface a b c 0.324 0.648 0.972 1.296 1.62 1.44 1.44 1.44 MIN: 1.4 / MAX: 1.6 MIN: 1.39 / MAX: 1.55 MIN: 1.38 / MAX: 1.55 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: googlenet b a c 4 8 12 16 20 16.29 16.33 16.51 MIN: 15.71 / MAX: 22.39 MIN: 15.7 / MAX: 24.12 MIN: 15.79 / MAX: 22.62 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: vgg16 a b c 16 32 48 64 80 70.93 71.16 71.27 MIN: 69.58 / MAX: 79.76 MIN: 69.68 / MAX: 111.48 MIN: 69.92 / MAX: 80.28 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: resnet18 a c b 3 6 9 12 15 11.74 11.82 11.94 MIN: 11.45 / MAX: 18.54 MIN: 11.46 / MAX: 18.3 MIN: 11.32 / MAX: 60.89 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: alexnet b a c 3 6 9 12 15 9.18 9.19 9.21 MIN: 8.82 / MAX: 16.24 MIN: 8.87 / MAX: 12.08 MIN: 8.84 / MAX: 17.28 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: resnet50 a b c 7 14 21 28 35 28.38 28.65 28.73 MIN: 27.04 / MAX: 35.23 MIN: 26.91 / MAX: 36.74 MIN: 26.98 / MAX: 77.04 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: yolov4-tiny b c a 8 16 24 32 40 32.25 32.31 32.34 MIN: 31.59 / MAX: 40.48 MIN: 31.47 / MAX: 40.92 MIN: 31.72 / MAX: 37.86 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: squeezenet_ssd c b a 4 8 12 16 20 15.09 15.27 15.40 MIN: 14.67 / MAX: 21.1 MIN: 14.83 / MAX: 21.07 MIN: 14.87 / MAX: 28.18 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: regnety_400m a c b 3 6 9 12 15 9.52 9.54 9.63 MIN: 9.26 / MAX: 15.11 MIN: 9.27 / MAX: 13.76 MIN: 9.28 / MAX: 14.8 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: vision_transformer c b a 30 60 90 120 150 150.67 150.73 152.59 MIN: 147.18 / MAX: 169.34 MIN: 147.59 / MAX: 166.06 MIN: 148.63 / MAX: 239.62 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OpenBenchmarking.org ms, Fewer Is Better NCNN 20230517 Target: Vulkan GPU - Model: FastestDet c a b 1.1498 2.2996 3.4494 4.5992 5.749 5.05 5.10 5.11 MIN: 4.93 / MAX: 7.03 MIN: 4.99 / MAX: 6.93 MIN: 4.94 / MAX: 6.99 1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
oneDNN This is a test of the Intel oneDNN as an Intel-optimized library for Deep Neural Networks and making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of the Intel oneAPI toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU b a c 3 6 9 12 15 8.88002 8.88666 8.97203 MIN: 8.5 MIN: 8.48 MIN: 8.58 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU a b c 3 6 9 12 15 10.61 11.82 11.94 MIN: 10.34 MIN: 11.43 MIN: 11.64 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU a c b 5 10 15 20 25 22.05 22.07 22.16 MIN: 21.6 MIN: 21.59 MIN: 21.68 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU c b a 3 6 9 12 15 11.70 11.93 12.07 MIN: 7.48 MIN: 7.67 MIN: 7.42 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU a c b 3 6 9 12 15 11.28 11.41 11.45 MIN: 10.65 MIN: 10.79 MIN: 10.89 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU a b c 1300 2600 3900 5200 6500 5852.60 5868.63 5872.67 MIN: 5725.89 MIN: 5739.43 MIN: 5729.99 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenBenchmarking.org ms, Fewer Is Better oneDNN 3.3 Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU c b a 800 1600 2400 3200 4000 3510.03 3516.72 3525.96 MIN: 3418.07 MIN: 3421.71 MIN: 3431.46 1. (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -pie -ldl
OpenRadioss OpenRadioss is an open-source AGPL-licensed finite element solver for dynamic event analysis OpenRadioss is based on Altair Radioss and open-sourced in 2022. This open-source finite element solver is benchmarked with various example models available from https://www.openradioss.org/models/ and https://github.com/OpenRadioss/ModelExchange/tree/main/Examples. This test is currently using a reference OpenRadioss binary build offered via GitHub. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Seconds, Fewer Is Better OpenRadioss 2023.09.15 Model: Bumper Beam a b c 80 160 240 320 400 343.31 345.17 345.57
OpenVINO This is a test of the Intel OpenVINO, a toolkit around neural networks, using its built-in benchmarking support and analyzing the throughput and latency for various models. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection FP16 - Device: CPU c b a 0.3308 0.6616 0.9924 1.3232 1.654 1.47 1.46 1.46 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Face Detection FP16 - Device: CPU c b a 600 1200 1800 2400 3000 2705.55 2709.89 2709.93 MIN: 2050.1 / MAX: 2797.44 MIN: 2071.72 / MAX: 2802.51 MIN: 2079.4 / MAX: 2803.64 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Person Detection FP16 - Device: CPU a c b 4 8 12 16 20 15.26 15.23 15.23 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Person Detection FP16 - Device: CPU a c b 60 120 180 240 300 261.73 262.13 262.39 MIN: 211.82 / MAX: 297.88 MIN: 196.09 / MAX: 303.22 MIN: 143.56 / MAX: 301.51 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Person Detection FP32 - Device: CPU b c a 4 8 12 16 20 15.20 15.14 15.03 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Person Detection FP32 - Device: CPU b c a 60 120 180 240 300 262.86 264.07 265.83 MIN: 232.22 / MAX: 299.62 MIN: 228.94 / MAX: 302.08 MIN: 154.12 / MAX: 298.88 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Vehicle Detection FP16 - Device: CPU b a c 20 40 60 80 100 109.56 109.51 109.18 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Vehicle Detection FP16 - Device: CPU b a c 8 16 24 32 40 36.46 36.48 36.59 MIN: 25.47 / MAX: 56.25 MIN: 22.64 / MAX: 65.82 MIN: 18.14 / MAX: 60.02 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection FP16-INT8 - Device: CPU b c a 0.3893 0.7786 1.1679 1.5572 1.9465 1.73 1.72 1.72 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Face Detection FP16-INT8 - Device: CPU b a c 500 1000 1500 2000 2500 2273.39 2292.52 2301.99 MIN: 1955.16 / MAX: 2430.88 MIN: 1831.17 / MAX: 2445.01 MIN: 1851.05 / MAX: 2439.19 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection Retail FP16 - Device: CPU b c a 80 160 240 320 400 384.37 384.06 383.62 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Face Detection Retail FP16 - Device: CPU b c a 3 6 9 12 15 10.37 10.38 10.39 MIN: 6.89 / MAX: 21.86 MIN: 5.14 / MAX: 21.27 MIN: 5.19 / MAX: 22.54 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16 - Device: CPU b a c 6 12 18 24 30 24.47 24.46 24.41 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16 - Device: CPU b a c 40 80 120 160 200 163.32 163.39 163.78 MIN: 76.65 / MAX: 208.06 MIN: 98.97 / MAX: 221.05 MIN: 128.38 / MAX: 215.47 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU c a b 30 60 90 120 150 143.25 142.70 142.37 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Vehicle Detection FP16-INT8 - Device: CPU c a b 7 14 21 28 35 27.90 28.00 28.07 MIN: 20.57 / MAX: 43.21 MIN: 17.95 / MAX: 41.21 MIN: 15.75 / MAX: 46.54 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16 - Device: CPU a b c 30 60 90 120 150 139.15 138.63 136.89 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16 - Device: CPU a b c 7 14 21 28 35 28.72 28.82 29.20 MIN: 24.22 / MAX: 46.15 MIN: 24.57 / MAX: 45.98 MIN: 20.8 / MAX: 91.44 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Face Detection Retail FP16-INT8 - Device: CPU c b a 100 200 300 400 500 454.14 452.11 451.52 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Face Detection Retail FP16-INT8 - Device: CPU c b a 2 4 6 8 10 8.79 8.83 8.84 MIN: 5.75 / MAX: 20.16 MIN: 6 / MAX: 16.62 MIN: 6.3 / MAX: 17.28 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16-INT8 - Device: CPU c a b 16 32 48 64 80 71.78 70.91 70.38 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Road Segmentation ADAS FP16-INT8 - Device: CPU c a b 13 26 39 52 65 55.67 56.36 56.78 MIN: 47.49 / MAX: 82.79 MIN: 44.65 / MAX: 78.53 MIN: 33.46 / MAX: 74.61 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU a b c 4 8 12 16 20 17.64 17.57 17.56 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Machine Translation EN To DE FP16 - Device: CPU a b c 50 100 150 200 250 226.50 227.42 227.68 MIN: 142.3 / MAX: 257.79 MIN: 134.56 / MAX: 364.33 MIN: 164.95 / MAX: 261.54 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU b c a 40 80 120 160 200 193.34 193.23 192.20 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Weld Porosity Detection FP16-INT8 - Device: CPU b c a 7 14 21 28 35 31.01 31.03 31.20 MIN: 24.54 / MAX: 43.5 MIN: 21.89 / MAX: 48.55 MIN: 23.75 / MAX: 44.87 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU c a b 40 80 120 160 200 195.76 195.63 195.60 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Person Vehicle Bike Detection FP16 - Device: CPU c a b 5 10 15 20 25 20.41 20.42 20.42 MIN: 15.23 / MAX: 35.17 MIN: 12.43 / MAX: 70.22 MIN: 12.75 / MAX: 44.83 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16 - Device: CPU c a b 11 22 33 44 55 50.23 49.86 49.72 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16 - Device: CPU c a b 30 60 90 120 150 119.36 120.25 120.56 MIN: 89.26 / MAX: 148.18 MIN: 79.3 / MAX: 149.21 MIN: 91.75 / MAX: 152.9 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU b c a 700 1400 2100 2800 3500 3271.38 3265.89 3262.61 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16 - Device: CPU b a c 0.4073 0.8146 1.2219 1.6292 2.0365 1.80 1.81 1.81 MIN: 1.01 / MAX: 19.26 MIN: 1 / MAX: 10.22 MIN: 0.98 / MAX: 11.34 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16-INT8 - Device: CPU c b a 12 24 36 48 60 53.07 52.91 52.13 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Handwritten English Recognition FP16-INT8 - Device: CPU c b a 30 60 90 120 150 113.00 113.32 115.05 MIN: 81.78 / MAX: 141.49 MIN: 87.68 / MAX: 144.49 MIN: 66.21 / MAX: 156.66 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org FPS, More Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU b c a 1100 2200 3300 4400 5500 4992.20 4942.11 4919.46 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org ms, Fewer Is Better OpenVINO 2023.2.dev Model: Age Gender Recognition Retail 0013 FP16-INT8 - Device: CPU b a c 0.2678 0.5356 0.8034 1.0712 1.339 1.18 1.19 1.19 MIN: 0.57 / MAX: 15.06 MIN: 0.55 / MAX: 10.47 MIN: 0.57 / MAX: 13.65 1. (CXX) g++ options: -fPIC -fsigned-char -ffunction-sections -fdata-sections -O3 -fno-strict-overflow -fwrapv -shared -ldl
OpenBenchmarking.org Items / Sec, More Is Better OpenVKL 2.0.0 Benchmark: vklBenchmarkCPU Scalar c a b 13 26 39 52 65 56 56 55 MIN: 4 / MAX: 1096 MIN: 4 / MAX: 1095 MIN: 4 / MAX: 1098
OSPRay Studio Intel OSPRay Studio is an open-source, interactive visualization and ray-tracing software package. OSPRay Studio makes use of Intel OSPRay, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 4K - Samples Per Pixel: 1 - Renderer: Path Tracer - Acceleration: CPU a c b 7K 14K 21K 28K 35K 34796 34885 34945
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU a c b 100K 200K 300K 400K 500K 455472 456087 457540
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU c a b 200K 400K 600K 800K 1000K 905206 907066 910737
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 2 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU c b a 100K 200K 300K 400K 500K 463290 464319 465057
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 2 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU c a b 200K 400K 600K 800K 1000K 921412 922564 923928
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 3 - Resolution: 4K - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU a c b 120K 240K 360K 480K 600K 537531 537704 539773
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 3 - Resolution: 4K - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU b c a 200K 400K 600K 800K 1000K 1069418 1070341 1070452
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU c b a 30K 60K 90K 120K 150K 119077 119397 119816
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU a b c 50K 100K 150K 200K 250K 232873 233029 233165
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU c a b 30K 60K 90K 120K 150K 120927 121353 121476
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU c b a 50K 100K 150K 200K 250K 235545 236145 236620
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer - Acceleration: CPU b c a 30K 60K 90K 120K 150K 139878 140057 140298
OpenBenchmarking.org ms, Fewer Is Better OSPRay Studio 0.13 Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer - Acceleration: CPU c b a 60K 120K 180K 240K 300K 273323 274126 274754
PyTorch OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 1 - Model: ResNet-50 c b a 5 10 15 20 25 20.82 20.72 20.28 MIN: 17.2 / MAX: 23.53 MIN: 16.36 / MAX: 23.54 MIN: 17 / MAX: 22.8
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 1 - Model: ResNet-152 b a c 2 4 6 8 10 8.82 8.78 8.75 MIN: 7.02 / MAX: 10.05 MIN: 6.78 / MAX: 9.95 MIN: 7.08 / MAX: 9.99
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 16 - Model: ResNet-50 a b c 3 6 9 12 15 12.06 11.91 11.89 MIN: 9.84 / MAX: 13.85 MIN: 10.01 / MAX: 13.53 MIN: 10.04 / MAX: 13.47
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 32 - Model: ResNet-50 c a b 3 6 9 12 15 11.93 11.87 11.80 MIN: 9.9 / MAX: 13.75 MIN: 9.95 / MAX: 13.33 MIN: 9.72 / MAX: 13.59
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 64 - Model: ResNet-50 c b a 3 6 9 12 15 11.84 11.67 11.10 MIN: 9.81 / MAX: 13.3 MIN: 9.24 / MAX: 13.18 MIN: 9.27 / MAX: 12.44
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 16 - Model: ResNet-152 b c a 1.179 2.358 3.537 4.716 5.895 5.24 5.20 5.19 MIN: 4.23 / MAX: 5.83 MIN: 4.2 / MAX: 5.88 MIN: 4.28 / MAX: 5.82
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 32 - Model: ResNet-152 c a b 1.1813 2.3626 3.5439 4.7252 5.9065 5.25 5.24 5.20 MIN: 4.31 / MAX: 5.89 MIN: 4.14 / MAX: 5.89 MIN: 4.3 / MAX: 5.83
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 64 - Model: ResNet-152 b c a 1.179 2.358 3.537 4.716 5.895 5.24 5.23 5.20 MIN: 4.28 / MAX: 5.84 MIN: 4.22 / MAX: 5.85 MIN: 4.15 / MAX: 5.77
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 1 - Model: Efficientnet_v2_l b a c 1.2398 2.4796 3.7194 4.9592 6.199 5.51 5.51 5.47 MIN: 4.61 / MAX: 6.06 MIN: 4.63 / MAX: 5.95 MIN: 4.63 / MAX: 6.05
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 16 - Model: Efficientnet_v2_l c a b 0.7853 1.5706 2.3559 3.1412 3.9265 3.49 3.49 3.47 MIN: 2.92 / MAX: 3.88 MIN: 2.86 / MAX: 3.79 MIN: 2.91 / MAX: 3.79
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 32 - Model: Efficientnet_v2_l a c b 0.792 1.584 2.376 3.168 3.96 3.52 3.50 3.48 MIN: 2.9 / MAX: 3.86 MIN: 2.92 / MAX: 3.8 MIN: 2.9 / MAX: 3.78
OpenBenchmarking.org batches/sec, More Is Better PyTorch 2.1 Device: CPU - Batch Size: 64 - Model: Efficientnet_v2_l b c a 0.792 1.584 2.376 3.168 3.96 3.52 3.48 3.47 MIN: 2.89 / MAX: 3.88 MIN: 2.92 / MAX: 3.82 MIN: 2.92 / MAX: 3.91
QuantLib QuantLib is an open-source library/framework around quantitative finance for modeling, trading and risk management scenarios. QuantLib is written in C++ with Boost and its built-in benchmark used reports the QuantLib Benchmark Index benchmark score. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MFLOPS, More Is Better QuantLib 1.32 Configuration: Multi-Threaded a c b 4K 8K 12K 16K 20K 17332.3 17326.6 17273.4 1. (CXX) g++ options: -O3 -march=native -fPIE -pie
OpenBenchmarking.org MFLOPS, More Is Better QuantLib 1.32 Configuration: Single-Threaded b c a 600 1200 1800 2400 3000 2769.2 2767.4 2767.1 1. (CXX) g++ options: -O3 -march=native -fPIE -pie
ScyllaDB This is a benchmark of ScyllaDB and is making use of Apache Cassandra's cassandra-stress for conducting the benchmark. ScyllaDB is an open-source distributed NoSQL data store that is compatible with Apache Cassandra while focusing on higher throughput and lower latency. ScyllaDB uses a sharded design on each node. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Op/s, More Is Better ScyllaDB 5.2.9 Test: Writes b a c 10K 20K 30K 40K 50K 48572 48462 48293
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of their Open Visual Cloud / Scalable Video Technology (SVT). Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based multi-threaded video encoder for the AV1 video format with a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 4 - Input: Bosphorus 4K c b a 0.4516 0.9032 1.3548 1.8064 2.258 2.007 1.999 1.992 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 8 - Input: Bosphorus 4K c b a 4 8 12 16 20 15.86 15.85 15.80 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 12 - Input: Bosphorus 4K b c a 13 26 39 52 65 55.87 55.86 55.76 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 13 - Input: Bosphorus 4K b c a 13 26 39 52 65 58.33 58.09 58.00 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 4 - Input: Bosphorus 1080p c b a 2 4 6 8 10 7.470 7.466 7.407 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 8 - Input: Bosphorus 1080p b c a 11 22 33 44 55 49.30 49.20 49.00 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 12 - Input: Bosphorus 1080p c b a 50 100 150 200 250 213.51 212.37 210.85 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
OpenBenchmarking.org Frames Per Second, More Is Better SVT-AV1 1.8 Encoder Mode: Preset 13 - Input: Bosphorus 1080p b a c 60 120 180 240 300 265.76 262.02 258.68 1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
VVenC VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 4K - Video Preset: Fast c a b 0.5238 1.0476 1.5714 2.0952 2.619 2.328 2.326 2.307 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 4K - Video Preset: Faster c b a 1.1266 2.2532 3.3798 4.5064 5.633 5.007 4.981 4.944 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 1080p - Video Preset: Fast c a b 2 4 6 8 10 7.909 7.895 7.793 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
OpenBenchmarking.org Frames Per Second, More Is Better VVenC 1.9 Video Input: Bosphorus 1080p - Video Preset: Faster a b c 4 8 12 16 20 17.20 17.16 17.13 1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
WebP2 Image Encode This is a test of Google's libwebp2 library with the WebP2 image encode utility and using a sample 6000x4000 pixel JPEG image as the input, similar to the WebP/libwebp test profile. WebP2 is currently experimental and under heavy development as ultimately the successor to WebP. WebP2 supports 10-bit HDR, more efficienct lossy compression, improved lossless compression, animation support, and full multi-threading support compared to WebP. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org MP/s, More Is Better WebP2 Image Encode 20220823 Encode Settings: Default c a b 0.954 1.908 2.862 3.816 4.77 4.24 4.21 4.20 1. (CXX) g++ options: -msse4.2 -fno-rtti -O3 -ldl
Xmrig Xmrig is an open-source cross-platform CPU/GPU miner for RandomX, KawPow, CryptoNight and AstroBWT. This test profile is setup to measure the Xmrig CPU mining performance. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: KawPow - Hash Count: 1M b c a 400 800 1200 1600 2000 1726.1 1725.8 1705.2 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: Monero - Hash Count: 1M b c a 400 800 1200 1600 2000 1726.7 1724.1 1706.3 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: Wownero - Hash Count: 1M c a b 600 1200 1800 2400 3000 2564.5 2532.3 2513.9 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: GhostRider - Hash Count: 1M b a c 50 100 150 200 250 249.0 248.8 248.4 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: CryptoNight-Heavy - Hash Count: 1M b c a 400 800 1200 1600 2000 1724.0 1723.8 1711.3 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
OpenBenchmarking.org H/s, More Is Better Xmrig 6.21 Variant: CryptoNight-Femto UPX2 - Hash Count: 1M b c a 400 800 1200 1600 2000 1725.2 1724.4 1708.3 1. (CXX) g++ options: -fexceptions -fno-rtti -maes -O3 -Ofast -static-libgcc -static-libstdc++ -rdynamic -lssl -lcrypto -luv -lpthread -lrt -ldl -lhwloc
a Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: amd-pstate ondemand (Boost: Enabled) - CPU Microcode: 0x8608103Java Notes: OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04)Python Notes: Python 3.10.6Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 16 December 2023 15:07 by user phoronix.
b Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: amd-pstate ondemand (Boost: Enabled) - CPU Microcode: 0x8608103Java Notes: OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04)Python Notes: Python 3.10.6Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 17 December 2023 05:15 by user phoronix.
c Processor: AMD Ryzen 5 5500U @ 4.06GHz (6 Cores / 12 Threads), Motherboard: NB01 NL5xNU (1.07.11RTR1 BIOS), Chipset: AMD Renoir/Cezanne, Memory: 16GB, Disk: Samsung SSD 970 EVO Plus 500GB, Graphics: AMD Lucienne 512MB (1800/400MHz), Audio: AMD Renoir Radeon HD Audio, Network: Realtek RTL8111/8168/8411 + Intel Wi-Fi 6 AX200
OS: Tuxedo 22.04, Kernel: 6.0.0-1010-oem (x86_64), Desktop: KDE Plasma 5.26.5, Display Server: X Server 1.21.1.3, OpenGL: 4.6 Mesa 22.3.7 (LLVM 14.0.0 DRM 3.48), Vulkan: 1.3.230, Compiler: GCC 11.3.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madviseCompiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -vProcessor Notes: Scaling Governor: amd-pstate ondemand (Boost: Enabled) - CPU Microcode: 0x8608103Java Notes: OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu222.04)Python Notes: Python 3.10.6Security Notes: itlb_multihit: Not affected + l1tf: Not affected + mds: Not affected + meltdown: Not affected + mmio_stale_data: Not affected + retbleed: Mitigation of untrained return thunk; SMT enabled with STIBP protection + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of Retpolines IBPB: conditional STIBP: always-on RSB filling PBRSB-eIBRS: Not affected + srbds: Not affected + tsx_async_abort: Not affected
Testing initiated at 17 December 2023 19:21 by user phoronix.