Intel Core i7-8565U testing with a Dell 0KTW76 (1.17.0 BIOS) and Intel UHD 620 WHL GT2 15GB on Ubuntu 22.04 via the Phoronix Test Suite.
a: Processor: Intel Core i7-8565U @ 4.60GHz (4 Cores / 8 Threads), Motherboard: Dell 0KTW76 (1.17.0 BIOS), Chipset: Intel Cannon Point-LP, Memory: 16GB, Disk: SK hynix PC401 NVMe 256GB, Graphics: Intel UHD 620 WHL GT2 15GB (1150MHz), Audio: Realtek ALC3271, Network: Qualcomm Atheros QCA6174 802.11ac
OS: Ubuntu 22.04, Kernel: 5.19.0-rc6-phx-retbleed (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.0.1, OpenCL: OpenCL 3.0, Vulkan: 1.3.204, Compiler: GCC 11.3.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xf0 - Thermald 2.4.9
Java Notes: OpenJDK Runtime Environment (build 11.0.18+10-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.6
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of IBRS IBPB: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Not affected
b: OS: Ubuntu 22.04, Kernel: 5.19.0-rc6-phx-retbleed (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.0.1, OpenCL: OpenCL 3.0, Vulkan: 1.3.204, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xf0 - Thermald 2.4.9
Java Notes: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
Python Notes: Python 3.10.12
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of IBRS IBPB: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Not affected
a vs. b Comparison - Phoronix Test Suite
[Comparison chart: per-test percent differences between runs a and b, ranging from about 2% up to 192.1%. The largest swing is SQLite with 1 thread (192.1%), followed by the Neural Magic DeepSparse BERT-Large Sparse INT8 results (about 61%). The individual results are charted below.]
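The percentages in the a vs. b chart above are relative deltas between the two runs: how far the larger result sits above the smaller one, whichever run that is. A minimal sketch of that calculation, using one of the DeepSparse BERT-Large Sparse INT8 result pairs from this file (a: 21.2985, b: 34.3231):

```python
def percent_delta(a: float, b: float) -> float:
    """Percent difference between two runs, expressed as how far the
    larger result is above the smaller one (Phoronix-style chart delta)."""
    lo, hi = sorted((a, b))
    return (hi / lo - 1.0) * 100.0

# One of the DeepSparse BERT-Large Sparse INT8 pairs from this result file:
print(f"{percent_delta(21.2985, 34.3231):.1f}%")  # prints 61.2%, matching the chart
```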
dddxxx
[Results table: raw side-by-side values for runs a and b across all tests in this file: LLVM/Godot/Build2 build times, Apache IoTDB, Xonotic, VVenC, OIDN, libxsmm, Embree, OSPRay, NCNN, SVT-AV1, SQLite, Neural Magic DeepSparse, Cassandra, Z3, Redis (memtier), Memcached, dav1d, QuantLib, Stress-NG, Liquid-DSP, Opus encoding, and vkpeak. The same values appear in the per-test charts that follow.]
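OpenBenchmarking.org result pages typically summarize runs like these with a geometric mean of all test results, since it aggregates ratio-scale data without letting any single test dominate. A minimal sketch of that aggregation (the ratios below are illustrative, not computed from this file):

```python
import math

def geometric_mean(ratios):
    """Geometric mean of per-test speedup ratios (run b relative to run a).
    A 2x win and a 2x loss cancel out, unlike with an arithmetic mean."""
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Illustrative ratios only (not taken from this result file):
print(geometric_mean([2.0, 0.5, 1.0]))   # 1.0: the win and the loss cancel
print(geometric_mean([1.1, 1.2, 1.05]))  # a modest overall advantage
```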
Apache IoTDB
OpenBenchmarking.org Average Latency, More Is Better
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 500: a: 984.70 (MAX: 4944.15), b: 1026.93 (MAX: 6211.26)
Xonotic
This is a benchmark of Xonotic, a fork of the DarkPlaces-based Nexuiz game. Development of this open-source first-person shooter began in March 2010. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better
Xonotic 0.8.6 - Resolution: 1920 x 1080 - Effects Quality: Ultimate: b: 58.95 (SE +/- 0.06, N = 2; MIN: 24 / MAX: 94), a: 59.17 (SE +/- 0.07, N = 2; MIN: 24 / MAX: 94)
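The "SE +/- x, N = 2" figures in these charts report the standard error of the mean over the N recorded runs; with N = 2 that works out to half the absolute gap between the two samples. A minimal sketch, assuming SE here is the sample standard deviation divided by the square root of N (the run values 58.89 and 59.01 are hypothetical samples consistent with the 58.95 +/- 0.06 entry above):

```python
import math

def standard_error(samples):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    return math.sqrt(variance / n)

# Two hypothetical runs averaging 58.95 FPS, 0.12 apart:
print(round(standard_error([58.89, 59.01]), 2))  # 0.06: half the gap
```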
VVenC VVenC is the Fraunhofer Versatile Video Encoder as a fast/efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe). The vvenc software is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better
VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Fast: b: 1.170 (SE +/- 0.013, N = 2), a: 1.180 (SE +/- 0.013, N = 2)
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
Xonotic
OpenBenchmarking.org Frames Per Second, More Is Better
Xonotic 0.8.6 - Resolution: 1920 x 1080 - Effects Quality: Ultra: b: 77.52 (SE +/- 0.41, N = 2; MIN: 32 / MAX: 120), a: 77.93 (SE +/- 0.16, N = 2; MIN: 34 / MAX: 120)
Xonotic 0.8.6 - Resolution: 1920 x 1080 - Effects Quality: High: b: 90.91 (SE +/- 0.32, N = 2; MIN: 38 / MAX: 130), a: 91.11 (SE +/- 0.04, N = 2; MIN: 39 / MAX: 130)
libxsmm Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org GFLOPS/s, More Is Better
libxsmm 2-1.17-3645 - M N K: 128: a: 79.9 (SE +/- 4.60, N = 2), b: 83.7 (SE +/- 4.10, N = 2)
1. (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2
VVenC
OpenBenchmarking.org Frames Per Second, More Is Better
VVenC 1.9 - Video Input: Bosphorus 4K - Video Preset: Faster: b: 2.672 (SE +/- 0.045, N = 2), a: 2.693 (SE +/- 0.052, N = 2)
1. (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
Embree Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Frames Per Second, More Is Better
Embree 4.1 - Binary: Pathtracer - Model: Crown: b: 3.4542 (SE +/- 0.0668, N = 2; MIN: 2.82 / MAX: 4.4), a: 3.5218 (SE +/- 0.0068, N = 2; MIN: 2.81 / MAX: 4.3)
Embree 4.1 - Binary: Pathtracer - Model: Asian Dragon Obj: a: 3.8229 (SE +/- 0.0521, N = 2; MIN: 3.19 / MAX: 4.76), b: 3.8503 (SE +/- 0.0122, N = 2; MIN: 3.22 / MAX: 4.83)
Apache IoTDB
OpenBenchmarking.org Average Latency, More Is Better
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 100 - Sensor Count: 200: b: 198.41 (MAX: 2427.81), a: 212.43 (MAX: 2080.14)
Embree
OpenBenchmarking.org Frames Per Second, More Is Better
Embree 4.1 - Binary: Pathtracer ISPC - Model: Crown: b: 3.6962 (SE +/- 0.1040, N = 2; MIN: 3.06 / MAX: 4.78), a: 3.7369 (SE +/- 0.0660, N = 2; MIN: 3.08 / MAX: 4.7)
Xonotic
OpenBenchmarking.org Frames Per Second, More Is Better
Xonotic 0.8.6 - Resolution: 1920 x 1080 - Effects Quality: Low: b: 205.45 (SE +/- 0.60, N = 2; MIN: 95 / MAX: 334), a: 205.61 (SE +/- 0.72, N = 2; MIN: 99 / MAX: 336)
Apache IoTDB
OpenBenchmarking.org Average Latency, More Is Better
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 500: b: 485.93 (MAX: 2859.52), a: 550.47 (MAX: 3020.53)
OSPRay Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualizations. OSPRay builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org Items Per Second, More Is Better
OSPRay 2.12 - Benchmark: particle_volume/pathtracer/real_time: b: 50.08 (SE +/- 0.02, N = 2), a: 52.59 (SE +/- 0.59, N = 2)
Embree
OpenBenchmarking.org Frames Per Second, More Is Better
Embree 4.1 - Binary: Pathtracer ISPC - Model: Asian Dragon Obj: b: 3.9923 (SE +/- 0.1882, N = 2; MIN: 3.51 / MAX: 5.03), a: 4.1670 (SE +/- 0.0272, N = 2; MIN: 3.5 / MAX: 5.07)
NCNN
NCNN is a high-performance neural-network inference framework developed by Tencent and optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
OpenBenchmarking.org ms, Fewer Is Better - NCNN 20230517 (all results N = 2)
Target: Vulkan GPU - Model: FastestDet: a: 5.66 (SE +/- 0.20; MIN: 5.23 / MAX: 24.86), b: 5.47 (SE +/- 0.04; MIN: 5.21 / MAX: 24.93)
Target: Vulkan GPU - Model: vision_transformer: a: 240.03 (SE +/- 3.20; MIN: 196.33 / MAX: 300.62), b: 235.51 (SE +/- 3.18; MIN: 187.15 / MAX: 291.28)
Target: Vulkan GPU - Model: regnety_400m: b: 10.41 (SE +/- 0.07; MIN: 9.74 / MAX: 26.42), a: 9.82 (SE +/- 0.01; MIN: 9.26 / MAX: 25.33)
Target: Vulkan GPU - Model: squeezenet_ssd: a: 15.97 (SE +/- 0.47; MIN: 13.93 / MAX: 35.68), b: 15.48 (SE +/- 0.84; MIN: 13.66 / MAX: 36.39)
Target: Vulkan GPU - Model: yolov4-tiny: a: 39.39 (SE +/- 1.02; MIN: 36.4 / MAX: 60.27), b: 39.17 (SE +/- 1.11; MIN: 36.39 / MAX: 59.83)
Target: Vulkan GPU - Model: resnet50: b: 33.60 (SE +/- 1.36; MIN: 30.7 / MAX: 59.9), a: 33.18 (SE +/- 0.55; MIN: 31.12 / MAX: 58.5)
Target: Vulkan GPU - Model: alexnet: b: 11.69 (SE +/- 0.04; MIN: 10.95 / MAX: 28.31), a: 11.67 (SE +/- 0.03; MIN: 11.06 / MAX: 27.75)
Target: Vulkan GPU - Model: resnet18: a: 12.73 (SE +/- 0.04; MIN: 11.94 / MAX: 28.78), b: 12.56 (SE +/- 0.04; MIN: 11.83 / MAX: 29.03)
Target: Vulkan GPU - Model: vgg16: a: 97.70 (SE +/- 0.23; MIN: 94.39 / MAX: 117.31), b: 96.26 (SE +/- 0.11; MIN: 93.43 / MAX: 114.68)
Target: Vulkan GPU - Model: googlenet: a: 16.29 (SE +/- 0.00; MIN: 14.94 / MAX: 33.61), b: 16.19 (SE +/- 0.01; MIN: 14.91 / MAX: 32.9)
Target: Vulkan GPU - Model: blazeface: b: 0.96 (SE +/- 0.01; MIN: 0.8 / MAX: 3.11), a: 0.94 (SE +/- 0.00; MIN: 0.86 / MAX: 3.07)
Target: Vulkan GPU - Model: efficientnet-b0: b: 9.34 (SE +/- 0.02; MIN: 8.61 / MAX: 25.62), a: 9.22 (SE +/- 0.07; MIN: 8.52 / MAX: 24.96)
Target: Vulkan GPU - Model: mnasnet: b: 5.26 (SE +/- 0.78; MIN: 4.06 / MAX: 22.06), a: 5.23 (SE +/- 0.81; MIN: 4.05 / MAX: 26.68)
Target: Vulkan GPU - Model: shufflenet-v2: b: 3.67 (SE +/- 0.62; MIN: 2.75 / MAX: 19.08), a: 3.66 (SE +/- 0.63; MIN: 2.74 / MAX: 16.28)
Target: Vulkan GPU-v3-v3 - Model: mobilenet-v3: b: 5.11 (SE +/- 0.80; MIN: 4.05 / MAX: 26.7), a: 5.11 (SE +/- 0.77; MIN: 4.07 / MAX: 20.14)
Target: Vulkan GPU-v2-v2 - Model: mobilenet-v2: b: 6.67 (SE +/- 0.65; MIN: 5.58 / MAX: 27.94), a: 6.67 (SE +/- 0.67; MIN: 5.54 / MAX: 27.83)
Target: Vulkan GPU - Model: mobilenet: a: 28.15 (SE +/- 0.19; MIN: 26.87 / MAX: 48.42), b: 27.38 (SE +/- 0.57; MIN: 24.23 / MAX: 47.57)
1. (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
OSPRay Intel OSPRay is a portable ray-tracing engine for high-performance, high-fidelity scientific visualization. OSPRay builds on Intel's Embree and the Intel SPMD Program Compiler (ISPC) as components of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.
OSPRay 2.12 - Benchmark: particle_volume/scivis/real_time (items per second, more is better; N = 2): b 1.34467 (SE +/- 0.00287), a 1.37399 (SE +/- 0.00084)
NCNN NCNN is a high-performance neural-network inference framework developed by Tencent, optimized for mobile and other platforms. Learn more via the OpenBenchmarking.org test page.
NCNN 20230517 - ms, fewer is better. N = 2 for each run; (CXX) g++ options: -O3 -rdynamic -lgomp -lpthread
Target: CPU - Model: FastestDet: b 5.55 (SE +/- 0.16, MIN 5.23 / MAX 25.23), a 5.39 (SE +/- 0.04, MIN 5.09 / MAX 24.31)
Target: CPU - Model: vision_transformer: a 233.96 (SE +/- 0.17, MIN 196.52 / MAX 292.9), b 232.17 (SE +/- 5.04, MIN 187.55 / MAX 291.61)
Target: CPU - Model: regnety_400m: b 10.12 (SE +/- 0.04, MIN 9.48 / MAX 25.78), a 10.03 (SE +/- 0.14, MIN 8.93 / MAX 25.51)
Target: CPU - Model: squeezenet_ssd: a 16.55 (SE +/- 0.30, MIN 15.48 / MAX 37.44), b 15.80 (SE +/- 0.37, MIN 13.81 / MAX 36.58)
Target: CPU - Model: yolov4-tiny: b 39.31 (SE +/- 0.83, MIN 36.44 / MAX 59.39), a 38.37 (SE +/- 0.08, MIN 36.54 / MAX 54.65)
Target: CPU - Model: resnet50: a 36.02 (SE +/- 0.05, MIN 31.18 / MAX 61.6), b 34.33 (SE +/- 1.96, MIN 30.77 / MAX 59.68)
Target: CPU - Model: alexnet: a 11.68 (SE +/- 0.06, MIN 10.92 / MAX 27.91), b 11.64 (SE +/- 0.07, MIN 10.99 / MAX 27.75)
Target: CPU - Model: resnet18: b 12.62 (SE +/- 0.11, MIN 11.83 / MAX 28.8), a 12.61 (SE +/- 0.04, MIN 11.89 / MAX 29.06)
Target: CPU - Model: vgg16: b 98.75 (SE +/- 2.34, MIN 90.66 / MAX 1242.52), a 97.42 (SE +/- 0.26, MIN 94.57 / MAX 115.91)
Target: CPU - Model: googlenet: a 16.23 (SE +/- 0.08, MIN 15.01 / MAX 32.85), b 16.19 (SE +/- 0.10, MIN 14.97 / MAX 32.25)
Target: CPU - Model: blazeface: b 0.93 (SE +/- 0.00, MIN 0.8 / MAX 3.09), a 0.91 (SE +/- 0.01, MIN 0.78 / MAX 3.03)
Target: CPU - Model: efficientnet-b0: a 9.78 (SE +/- 0.65, MIN 8.38 / MAX 32.56), b 9.43 (SE +/- 0.02, MIN 8.62 / MAX 26.5)
Target: CPU - Model: mnasnet: a 6.04 (SE +/- 0.02, MIN 5.41 / MAX 26.6), b 4.49 (SE +/- 0.02, MIN 4.03 / MAX 20.38)
Target: CPU - Model: shufflenet-v2: a 4.29 (SE +/- 0.02, MIN 3.99 / MAX 24.93), b 3.02 (SE +/- 0.01, MIN 2.73 / MAX 18.89)
Target: CPU-v3-v3 - Model: mobilenet-v3: a 5.14 (SE +/- 0.80, MIN 4.05 / MAX 26.81), b 4.31 (SE +/- 0.04, MIN 4.05 / MAX 20.17)
Target: CPU-v2-v2 - Model: mobilenet-v2: a 6.69 (SE +/- 0.70, MIN 5.6 / MAX 28.58), b 6.15 (SE +/- 0.07, MIN 5.65 / MAX 26.38)
Target: CPU - Model: mobilenet: b 27.68 (SE +/- 0.03, MIN 24.92 / MAX 47.55), a 25.93 (SE +/- 0.19, MIN 23.82 / MAX 45.07)
Embree Intel Embree is a collection of high-performance ray-tracing kernels for execution on CPUs (and GPUs via SYCL) and supporting instruction sets such as SSE, AVX, AVX2, and AVX-512. Embree also supports making use of the Intel SPMD Program Compiler (ISPC). Learn more via the OpenBenchmarking.org test page.
Embree 4.1 - Binary: Pathtracer - Model: Asian Dragon (frames per second, more is better; N = 2): b 4.1998 (SE +/- 0.0239, MIN 3.54 / MAX 5.43), a 4.2183 (SE +/- 0.0414, MIN 3.54 / MAX 5.31)
SVT-AV1 This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT) effort; development has since moved to the Alliance for Open Media as part of upstream AV1 development. SVT-AV1 is a CPU-based, multi-threaded encoder for the AV1 video format; this test encodes a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-AV1 1.6 - Encoder Mode: Preset 4 - Input: Bosphorus 4K (frames per second, more is better; N = 2): b 0.948 (SE +/- 0.003), a 1.092 (SE +/- 0.023). (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
VVenC VVenC is the Fraunhofer Versatile Video Encoder, a fast and efficient H.266/VVC encoder. The vvenc encoder makes use of SIMD Everywhere (SIMDe) and is published under the Clear BSD License. Learn more via the OpenBenchmarking.org test page.
VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Fast (frames per second, more is better; N = 2): b 3.790 (SE +/- 0.042), a 3.905 (SE +/- 0.015). (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
SQLite This is a simple benchmark of SQLite. At present this test profile just measures the time to perform a pre-defined number of insertions on an indexed database with a variable number of concurrent repetitions -- up to the maximum number of CPU threads available. Learn more via the OpenBenchmarking.org test page.
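The insertion workload described above can be sketched with Python's built-in sqlite3 module. This is a minimal single-threaded stand-in, not the actual test profile: the schema, table name, and row count below are illustrative assumptions, and the real test additionally runs multiple concurrent copies.

```python
import sqlite3
import time

def timed_insertions(db_path=":memory:", rows=10_000):
    """Time a fixed number of inserts into an indexed SQLite table.
    Schema and row count are illustrative stand-ins for the test
    profile's pre-defined workload."""
    con = sqlite3.connect(db_path)
    cur = con.cursor()
    cur.execute("CREATE TABLE samples (id INTEGER PRIMARY KEY, payload TEXT)")
    # The benchmark inserts into an *indexed* database, so index the column
    cur.execute("CREATE INDEX idx_payload ON samples (payload)")
    start = time.perf_counter()
    for i in range(rows):
        cur.execute("INSERT INTO samples (payload) VALUES (?)", (f"row-{i}",))
    con.commit()
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed

if __name__ == "__main__":
    print(f"{timed_insertions():.3f} seconds")
```

To approximate the "Threads / Copies" axis of the results below, one would run several such loops concurrently (e.g. one per CPU thread), each against its own database file.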
SQLite 3.41.2 - seconds, fewer is better. N = 2 for each run; (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm
Threads / Copies: 4: b 173.05 (SE +/- 0.50), a 125.83 (SE +/- 17.49)
Threads / Copies: 2: b 170.01 (SE +/- 1.42), a 123.47 (SE +/- 24.93)
Embree 4.1 - Binary: Pathtracer ISPC - Model: Asian Dragon (frames per second, more is better; N = 2): b 4.7636 (SE +/- 0.0364, MIN 4.05 / MAX 5.98), a 4.7883 (SE +/- 0.0697, MIN 4.05 / MAX 5.89)
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better; N = 2): a 764.27 (SE +/- 113.73), b 614.68 (SE +/- 2.10)
libxsmm Libxsmm is an open-source library for specialized dense and sparse matrix operations and deep learning primitives. Libxsmm supports making use of Intel AMX, AVX-512, and other modern CPU instruction set capabilities. Learn more via the OpenBenchmarking.org test page.
libxsmm 2-1.17-3645 - M N K: 64 (GFLOPS/s, more is better; N = 2): b 90.8 (SE +/- 4.80), a 90.9 (SE +/- 4.45). (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2
libxsmm 2-1.17-3645 - M N K: 32 (GFLOPS/s, more is better; N = 2): a 47.7 (SE +/- 0.35), b 48.0 (SE +/- 0.55). (CXX) g++ options: -dynamic -Bstatic -static-libgcc -lgomp -lm -lrt -ldl -lquadmath -lstdc++ -pthread -fPIC -std=c++14 -O2 -fopenmp-simd -funroll-loops -ftree-vectorize -fdata-sections -ffunction-sections -fvisibility=hidden -march=core-avx2
OSPRay 2.12 - Benchmark: particle_volume/ao/real_time (items per second, more is better; N = 2): b 1.36920 (SE +/- 0.02805), a 1.49334 (SE +/- 0.09686)
Neural Magic DeepSparse 1.5 - Model: BERT-Large, NLP Question Answering, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better; N = 2): a 93.85 (SE +/- 0.32), b 58.24 (SE +/- 0.22)
Apache IoTDB 1.1.2 - Device Count: 500 - Batch Size Per Write: 1 - Sensor Count: 500 (average latency, fewer is better): b 70.46 (MAX 1429.13), a 75.40 (MAX 1528.52)
OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (items per second, more is better; N = 2): a 0.622182 (SE +/- 0.014296), b 0.709986 (SE +/- 0.025213)
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 500 (average latency, fewer is better): a 342.21 (MAX 2349.14), b 387.15 (MAX 2248.18)
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 500 - Set To Get Ratio: 1:5 (Ops/sec, more is better; N = 2): a 890374.23 (SE +/- 8374.42), b 1248610.70 (SE +/- 25673.44). (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Memcached Memcached is a high-performance, distributed memory object caching system. This Memcached test profile makes use of memtier_benchmark for executing this CPU/memory-focused server benchmark. Learn more via the OpenBenchmarking.org test page.
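memtier_benchmark drives a real Redis or Memcached server over the network; as a rough illustration of what a set-to-get ratio such as 1:5 means, the sketch below generates the same operation mix against an in-process dict standing in for the cache and reports achieved operations per second. All names and parameters here are illustrative assumptions, not memtier_benchmark's own.

```python
import random
import time

def mixed_workload(ops=100_000, set_ratio=1, get_ratio=5, keys=1000, seed=42):
    """Issue SET and GET operations in the given ratio against an
    in-process dict (a stand-in for a real Memcached/Redis server)
    and report achieved operations per second."""
    rng = random.Random(seed)
    cache = {}  # stand-in for the cache server
    start = time.perf_counter()
    for i in range(ops):
        # Draw "set" or "get" weighted by the configured ratio (e.g. 1:5)
        op = rng.choices(("set", "get"), weights=(set_ratio, get_ratio))[0]
        if op == "set":
            cache[f"key-{i % keys}"] = f"value-{i}"
        else:
            cache.get(f"key-{rng.randrange(keys)}")
    elapsed = time.perf_counter() - start
    return ops / elapsed

if __name__ == "__main__":
    print(f"{mixed_workload():,.0f} ops/sec")
```

The higher the get weight (1:10, 1:100), the more read-dominated the workload, which is why the ratio is varied across the results below.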
Memcached 1.6.19 - Set To Get Ratio: 1:5 (Ops/sec, more is better; N = 2): b 518308.20 (SE +/- 13966.62), a 525368.39 (SE +/- 12286.93)
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:10 (Ops/sec, more is better; N = 2): a 1028641.19 (SE +/- 568.99), b 1377833.48 (SE +/- 8107.22)
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 100 - Set To Get Ratio: 1:5 (Ops/sec, more is better; N = 2): a 1016703.27 (SE +/- 44860.53), b 1372033.18 (SE +/- 39391.55)
Redis 7.0.12 + memtier_benchmark 2.0 - Protocol: Redis - Clients: 50 - Set To Get Ratio: 1:5 (Ops/sec, more is better; N = 2): a 1039311.12 (SE +/- 43465.56), b 1398883.01 (SE +/- 4729.88)
All compiled with (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
Memcached 1.6.19 - Set To Get Ratio: 1:10 (Ops/sec, more is better; N = 2): b 486807.49 (SE +/- 4490.17), a 502050.43 (SE +/- 9547.84)
Memcached 1.6.19 - Set To Get Ratio: 1:100 (Ops/sec, more is better; N = 2): b 485922.63 (SE +/- 14781.65), a 496309.76 (SE +/- 10780.90)
Both compiled with (CXX) g++ options: -O2 -levent_openssl -levent -lcrypto -lssl -lpthread -lz -lpcre
OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/ao/real_time (items per second, more is better; N = 2): b 0.734783 (SE +/- 0.008961), a 0.745146 (SE +/- 0.009807)
SVT-AV1 1.6 - Encoder Mode: Preset 8 - Input: Bosphorus 4K (frames per second, more is better; N = 2): b 9.436 (SE +/- 1.160), a 9.490 (SE +/- 0.307). (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better; N = 2): b 30.57 (SE +/- 0.13), a 30.54 (SE +/- 0.17)
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2, Sparse INT8 - Scenario: Asynchronous Multi-Stream (items/sec, more is better; N = 2): b 65.36 (SE +/- 0.29), a 65.43 (SE +/- 0.37)
VVenC 1.9 - Video Input: Bosphorus 1080p - Video Preset: Faster (frames per second, more is better; N = 2): b 9.868 (SE +/- 0.603), a 9.882 (SE +/- 0.090). (CXX) g++ options: -O3 -flto -fno-fat-lto-objects -flto=auto
Neural Magic DeepSparse 1.5 - Model: NLP Text Classification, BERT base uncased SST2 - Scenario: Asynchronous Multi-Stream (ms/batch, fewer is better; N = 2): a 211.98 (SE +/- 13.47), b 179.08 (SE +/- 3.79)
OSPRay 2.12 - Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (items per second, more is better; N = 2): b 1.012953 (SE +/- 0.016867), a 1.061640 (SE +/- 0.011835)
SQLite 3.41.2 - Threads / Copies: 1 (seconds, fewer is better; N = 2): b 88.59 (SE +/- 0.19), a 30.33 (SE +/- 0.86). (CC) gcc options: -O2 -lreadline -ltermcap -lz -lm
Apache IoTDB 1.1.2 - Device Count: 200 - Batch Size Per Write: 100 - Sensor Count: 200 (average latency, fewer is better): b 143.24 (MAX 2073.51), a 144.53 (MAX 1992.27)
Neural Magic DeepSparse 1.5 - Scenario: Asynchronous Multi-Stream. N = 2 for each run
Model: CV Segmentation, 90% Pruned YOLACT Pruned (ms/batch, fewer is better): a 669.18 (SE +/- 12.84), b 628.36 (SE +/- 79.47)
Model: CV Segmentation, 90% Pruned YOLACT Pruned (items/sec, more is better): a 2.9815 (SE +/- 0.0495), b 3.2320 (SE +/- 0.4116)
Model: NLP Token Classification, BERT base uncased conll2003 (ms/batch, fewer is better): a 979.18 (SE +/- 0.29), b 835.85 (SE +/- 13.19)
Model: NLP Token Classification, BERT base uncased conll2003 (items/sec, more is better): a 2.0309 (SE +/- 0.0085), b 2.3883 (SE +/- 0.0327)
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 (ms/batch, fewer is better): b 245.64 (SE +/- 32.45), a 244.56 (SE +/- 31.56)
Model: NLP Question Answering, BERT base uncased SQuaD 12layer Pruned90 (items/sec, more is better): b 8.2832 (SE +/- 1.0939), a 8.3009 (SE +/- 1.0816)
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased (ms/batch, fewer is better): b 70.82 (SE +/- 6.64), a 69.56 (SE +/- 5.54)
Model: NLP Sentiment Analysis, 80% Pruned Quantized BERT Base Uncased (items/sec, more is better): b 28.46 (SE +/- 2.67), a 28.92 (SE +/- 2.30)
Model: NLP Document Classification, oBERT base uncased on IMDB (ms/batch, fewer is better): b 864.99 (SE +/- 86.04), a 855.68 (SE +/- 75.66)
Model: NLP Document Classification, oBERT base uncased on IMDB (items/sec, more is better): b 2.3352 (SE +/- 0.2323), a 2.3527 (SE +/- 0.2053)
Model: NLP Text Classification, DistilBERT mnli (ms/batch, fewer is better): a 116.27 (SE +/- 15.66), b 98.71 (SE +/- 14.87)
SVT-AV1 1.6 - Encoder Mode: Preset 4 - Input: Bosphorus 1080p (frames per second, more is better; N = 2): a 4.260 (SE +/- 0.495), b 4.773 (SE +/- 0.109). (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
Opus Codec Encoding Opus is an open, lossy audio codec designed primarily for interactive real-time applications over the Internet. This test uses Opus-Tools and measures the time required to encode a WAV file to Opus five times. Learn more via the OpenBenchmarking.org test page.
Opus Codec Encoding 1.4 - WAV To Opus Encode (seconds, fewer is better; N = 2): b 35.43 (SE +/- 0.06), a 35.40 (SE +/- 0.10). (CXX) g++ options: -O3 -fvisibility=hidden -logg -lm
Apache IoTDB 1.1.2 - Device Count: 100 - Batch Size Per Write: 100 - Sensor Count: 200 (average latency, fewer is better): b 104.43 (MAX 2007.51), a 107.84 (MAX 1809.87)
QuantLib QuantLib is an open-source library/framework for quantitative finance, covering modeling, trading, and risk-management scenarios. QuantLib is written in C++ with Boost, and its built-in benchmark reports the QuantLib Benchmark Index score. Learn more via the OpenBenchmarking.org test page.
QuantLib 1.30 (MFLOPS, more is better; N = 2): a 2379.8 (SE +/- 210.20), b 2404.8 (SE +/- 234.25). (CXX) g++ options: -O3 -march=native -fPIE -pie
Stress-NG 0.15.10 - Bogo Ops/s, more is better. N = 2 for each run; (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lpthread -lrt -lsctp -lz
Test: Cloning: a 669.41 (SE +/- 28.62), b 715.58 (SE +/- 2.47)
Test: MMAP: a 20.86 (SE +/- 3.19), b 28.46 (SE +/- 3.70)
Test: MEMFD: b 43.09 (SE +/- 0.72), a 56.59 (SE +/- 0.79)
Test: Zlib: a 273.05 (SE +/- 31.94), b 335.91 (SE +/- 21.49)
Test: Pipe: a 1422109.84 (SE +/- 30380.48), b 1533355.11 (SE +/- 131648.17)
Test: Atomic: a 224.51 (SE +/- 10.78), b 249.99 (SE +/- 11.52)
Test: NUMA: a 53.95 (SE +/- 5.63), b 60.49 (SE +/- 5.27)
Test: Pthread: a 34974.26 (SE +/- 4732.28), b 42229.61 (SE +/- 4375.87)
Test: x86_64 RdRand: b 3134.04 (SE +/- 68.45), a 3267.31 (SE +/- 13.37)
Test: Function Call: a 2058.70 (SE +/- 96.37), b 2164.20 (SE +/- 40.41)
Test: System V Message Passing: b 2519916.10 (SE +/- 200089.21), a 2652462.62 (SE +/- 119752.37)
Test: Socket Activity: a 2495.57 (SE +/- 249.85), b 2508.63 (SE +/- 271.08)
Test: Matrix 3D Math: a 383.49 (SE +/- 5.36), b 396.03 (SE +/- 6.17)
Liquid-DSP LiquidSDR's Liquid-DSP is a software-defined radio (SDR) digital signal processing library. This test profile runs a multi-threaded benchmark of this SDR/DSP library focused on embedded platform usage. Learn more via the OpenBenchmarking.org test page.
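The buffer-length and filter-length parameters in the results below map directly onto the per-sample cost of a finite impulse response (FIR) filter, the kind of kernel this benchmark measures in samples per second. The sketch below is an illustrative pure-Python stand-in, not Liquid-DSP itself, and runs orders of magnitude slower than the library's optimized C kernels.

```python
import time

def fir_throughput(buffer_len=256, filter_len=57, iterations=200):
    """Measure samples/sec for a direct-form FIR filter applied to
    repeated buffers, mirroring the benchmark's buffer/filter
    length parameters."""
    taps = [1.0 / filter_len] * filter_len           # moving-average taps
    samples = [float(i % 7) for i in range(buffer_len)]
    history = [0.0] * filter_len                     # filter delay line
    start = time.perf_counter()
    processed = 0
    for _ in range(iterations):
        for x in samples:
            history.pop()                            # shift the delay line
            history.insert(0, x)
            _y = sum(t * h for t, h in zip(taps, history))  # dot product
            processed += 1
    elapsed = time.perf_counter() - start
    return processed / elapsed

if __name__ == "__main__":
    print(f"{fir_throughput():,.0f} samples/sec")
```

Each output sample costs one length-N dot product, which is why the filter-length 512 results below are far slower than the filter-length 32 or 57 runs at the same thread count.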
Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better; N = 2): a 35719000 (SE +/- 3468000.00), b 36523000 (SE +/- 2570000.00). (CC) gcc options: -O3 -pthread -lm -lc -lliquid
Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better; N = 2): b 25812500 (SE +/- 478500.00), a 25830500 (SE +/- 466500.00)
Liquid-DSP 1.6 - Threads: 2 - Buffer Length: 256 - Filter Length: 512 (samples/s, more is better; N = 2): b 15023500 (SE +/- 149500.00), a 15179500 (SE +/- 158500.00)
Liquid-DSP 1.6 - Threads: 1 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better; N = 2): a 46516500 (SE +/- 451500.00), b 46825000 (SE +/- 77000.00)
Stress-NG 0.15.10 - Test: Floating Point (Bogo Ops/s, more is better; N = 2): a 743.83 (SE +/- 25.00), b 762.86 (SE +/- 25.67)
Stress-NG 0.15.10 - Test: Hash (Bogo Ops/s, more is better; N = 2): a 523699.70 (SE +/- 38337.11), b 638650.51 (SE +/- 38690.57)
Liquid-DSP compiled with (CC) gcc options: -O3 -pthread -lm -lc -lliquid; Stress-NG with (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lpthread -lrt -lsctp -lz
Liquid-DSP 1.6 - samples/s, more is better. N = 2 for each run; (CC) gcc options: -O3 -pthread -lm -lc -lliquid
Threads: 1 - Buffer Length: 256 - Filter Length: 512: b 7710850 (SE +/- 20850.00), a 7721800 (SE +/- 8100.00)
Threads: 2 - Buffer Length: 256 - Filter Length: 57: b 73620500 (SE +/- 620500.00), a 73795500 (SE +/- 128500.00)
Threads: 2 - Buffer Length: 256 - Filter Length: 32: a 85550500 (SE +/- 873500.00), b 85664000 (SE +/- 1161000.00)
Threads: 1 - Buffer Length: 256 - Filter Length: 57: b 43427500 (SE +/- 154500.00), a 43429000 (SE +/- 225000.00)
Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better; N = 2): a 177945000 (SE +/- 11475000.00), b 185540000 (SE +/- 5480000.00). (CC) gcc options: -O3 -pthread -lm -lc -lliquid
Stress-NG 0.15.10 - Test: CPU Cache (Bogo Ops/s, more is better; N = 2): a 929178.43 (SE +/- 244998.38), b 1153192.75 (SE +/- 138960.51). (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lpthread -lrt -lsctp -lz
Liquid-DSP 1.6 - Threads: 8 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
  b: 132830000 (SE +/- 7270000.00, N = 2)
  a: 140345000 (SE +/- 5615000.00, N = 2)

Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 57 (samples/s, more is better)
  b: 112205000 (SE +/- 3395000.00, N = 2)
  a: 119590000 (SE +/- 6360000.00, N = 2)

Liquid-DSP 1.6 - Threads: 4 - Buffer Length: 256 - Filter Length: 32 (samples/s, more is better)
  b: 139950000 (SE +/- 2610000.00, N = 2)
  a: 140280000 (SE +/- 2650000.00, N = 2)

1. (CC) gcc options: -O3 -pthread -lm -lc -lliquid

Stress-NG 0.15.10 - Test: Mutex (Bogo Ops/s, more is better)
  a: 604156.72 (SE +/- 76155.40, N = 2)
  b: 694562.47 (SE +/- 20730.70, N = 2)

Stress-NG 0.15.10 - Test: Glibc Qsort Data Sorting (Bogo Ops/s, more is better)
  a: 69.13 (SE +/- 5.90, N = 2)
  b: 74.13 (SE +/- 3.74, N = 2)

Stress-NG 0.15.10 - Test: Matrix Math (Bogo Ops/s, more is better)
  b: 17116.79 (SE +/- 854.63, N = 2)
  a: 17318.70 (SE +/- 1025.15, N = 2)

Stress-NG 0.15.10 - Test: CPU Stress (Bogo Ops/s, more is better)
  a: 7310.05 (SE +/- 567.82, N = 2)
  b: 7866.40 (SE +/- 158.65, N = 2)

Stress-NG 0.15.10 - Test: SENDFILE (Bogo Ops/s, more is better)
  a: 36779.86 (SE +/- 1669.87, N = 2)
  b: 43752.37 (SE +/- 1151.61, N = 2)

Stress-NG 0.15.10 - Test: Crypto (Bogo Ops/s, more is better)
  a: 4590.53 (SE +/- 47.72, N = 2)
  b: 5684.53 (SE +/- 96.10, N = 2)

Stress-NG 0.15.10 - Test: Wide Vector Math (Bogo Ops/s, more is better)
  a: 147314.85 (SE +/- 2993.76, N = 2)
  b: 148388.79 (SE +/- 3329.63, N = 2)

Stress-NG 0.15.10 - Test: Poll (Bogo Ops/s, more is better)
  a: 284300.80 (SE +/- 20478.34, N = 2)
  b: 318466.50 (SE +/- 23847.48, N = 2)

Stress-NG 0.15.10 - Test: Glibc C String Functions (Bogo Ops/s, more is better)
  b: 2360589.10 (SE +/- 158119.57, N = 2)
  a: 2485856.07 (SE +/- 92139.37, N = 2)

Stress-NG 0.15.10 - Test: Fused Multiply-Add (Bogo Ops/s, more is better)
  b: 2737483.30 (SE +/- 262560.86, N = 2)
  a: 2978232.73 (SE +/- 63230.65, N = 2)

Stress-NG 0.15.10 - Test: Context Switching (Bogo Ops/s, more is better)
  a: 705297.96 (SE +/- 53162.51, N = 2)
  b: 733269.79 (SE +/- 16234.88, N = 2)

Stress-NG 0.15.10 - Test: Vector Math (Bogo Ops/s, more is better)
  b: 12444.33 (SE +/- 913.67, N = 2)
  a: 12666.19 (SE +/- 425.64, N = 2)

Stress-NG 0.15.10 - Test: Semaphores (Bogo Ops/s, more is better)
  a: 3218725.51 (SE +/- 287315.12, N = 2)
  b: 3241408.21 (SE +/- 70160.30, N = 2)

Stress-NG 0.15.10 - Test: Futex (Bogo Ops/s, more is better)
  a: 486078.68 (SE +/- 36419.89, N = 2)
  b: 624014.17 (SE +/- 57507.32, N = 2)

1. (CXX) g++ options: -lm -laio -lapparmor -latomic -lc -lcrypt -ldl -lEGL -lgbm -lGLESv2 -ljpeg -lpthread -lrt -lsctp -lz
SVT-AV1

This is a benchmark of the SVT-AV1 open-source video encoder/decoder. SVT-AV1 was originally developed by Intel as part of its Open Visual Cloud / Scalable Video Technology (SVT) effort. Development of SVT-AV1 has since moved to the Alliance for Open Media as part of upstream AV1 development. This test runs SVT-AV1, a CPU-based multi-threaded encoder for the AV1 video format, against a sample YUV video file. Learn more via the OpenBenchmarking.org test page.
SVT-AV1 1.6 - Encoder Mode: Preset 12 - Input: Bosphorus 4K (Frames Per Second, more is better)
  b: 28.98 (SE +/- 0.81, N = 2)
  a: 35.50 (SE +/- 0.91, N = 2)

SVT-AV1 1.6 - Encoder Mode: Preset 13 - Input: Bosphorus 4K (Frames Per Second, more is better)
  b: 30.98 (SE +/- 2.71, N = 2)
  a: 34.05 (SE +/- 2.10, N = 2)

SVT-AV1 1.6 - Encoder Mode: Preset 8 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  b: 31.01 (SE +/- 2.58, N = 2)
  a: 35.74 (SE +/- 0.87, N = 2)

1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
SVT-AV1 1.6 - Encoder Mode: Preset 12 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  a: 154.60 (SE +/- 0.24, N = 2)
  b: 157.69 (SE +/- 0.70, N = 2)

SVT-AV1 1.6 - Encoder Mode: Preset 13 - Input: Bosphorus 1080p (Frames Per Second, more is better)
  a: 202.85 (SE +/- 0.66, N = 2)
  b: 206.77 (SE +/- 0.84, N = 2)

1. (CXX) g++ options: -march=native -mno-avx -mavx2 -mavx512f -mavx512bw -mavx512dq
a Processor: Intel Core i7-8565U @ 4.60GHz (4 Cores / 8 Threads), Motherboard: Dell 0KTW76 (1.17.0 BIOS), Chipset: Intel Cannon Point-LP, Memory: 16GB, Disk: SK hynix PC401 NVMe 256GB, Graphics: Intel UHD 620 WHL GT2 15GB (1150MHz), Audio: Realtek ALC3271, Network: Qualcomm Atheros QCA6174 802.11ac
OS: Ubuntu 22.04, Kernel: 5.19.0-rc6-phx-retbleed (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.0.1, OpenCL: OpenCL 3.0, Vulkan: 1.3.204, Compiler: GCC 11.3.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-xKiWfi/gcc-11-11.3.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xf0 - Thermald 2.4.9
Java Notes: OpenJDK Runtime Environment (build 11.0.18+10-post-Ubuntu-0ubuntu122.04)
Python Notes: Python 3.10.6
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of IBRS IBPB: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Not affected
Testing initiated at 5 August 2023 20:16 by user phoronix.
b Processor: Intel Core i7-8565U @ 4.60GHz (4 Cores / 8 Threads), Motherboard: Dell 0KTW76 (1.17.0 BIOS), Chipset: Intel Cannon Point-LP, Memory: 16GB, Disk: SK hynix PC401 NVMe 256GB, Graphics: Intel UHD 620 WHL GT2 15GB (1150MHz), Audio: Realtek ALC3271, Network: Qualcomm Atheros QCA6174 802.11ac
OS: Ubuntu 22.04, Kernel: 5.19.0-rc6-phx-retbleed (x86_64), Desktop: GNOME Shell 42.2, Display Server: X Server + Wayland, OpenGL: 4.6 Mesa 22.0.1, OpenCL: OpenCL 3.0, Vulkan: 1.3.204, Compiler: GCC 11.4.0, File-System: ext4, Screen Resolution: 1920x1080
Kernel Notes: Transparent Huge Pages: madvise
Compiler Notes: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-XeT9lY/gcc-11-11.4.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
Disk Notes: NONE / errors=remount-ro,relatime,rw / Block Size: 4096
Processor Notes: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0xf0 - Thermald 2.4.9
Java Notes: OpenJDK Runtime Environment (build 11.0.20+8-post-Ubuntu-1ubuntu122.04)
Python Notes: Python 3.10.12
Security Notes: itlb_multihit: KVM: Mitigation of VMX disabled + l1tf: Not affected + mds: Mitigation of Clear buffers; SMT vulnerable + meltdown: Not affected + mmio_stale_data: Mitigation of Clear buffers; SMT vulnerable + retbleed: Mitigation of IBRS + spec_store_bypass: Mitigation of SSB disabled via prctl + spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization + spectre_v2: Mitigation of IBRS IBPB: conditional RSB filling + srbds: Mitigation of Microcode + tsx_async_abort: Not affected
Testing initiated at 6 August 2023 08:16 by user phoronix.