Intel Core i5-5300U testing with a HP 2216 (M71 Ver. 01.27 BIOS) and Intel HD 5500 3GB on Ubuntu 20.04 via the Phoronix Test Suite.
Compare your own system(s) to this result file with the
Phoronix Test Suite by running the command:
phoronix-test-suite benchmark 2008300-FI-COREI5LAP35
HTML result view exported from: https://openbenchmarking.org/result/2008300-FI-COREI5LAP35&grr .
Core i5 Laptop - System Details

Processor: Intel Core i5-5300U @ 2.90GHz (2 Cores / 4 Threads)
Motherboard: HP 2216 (M71 Ver. 01.27 BIOS)
Chipset: Intel Broadwell-U-OPI
Memory: 8GB
Disk: 256GB MTFDDAK256MAM-1K
Graphics: Intel HD 5500 3GB (900MHz)
Audio: Intel Broadwell-U Audio
Network: Intel I218-LM + Intel 7265
OS: Ubuntu 20.04
Kernel: 5.4.0-33-generic (x86_64)
Desktop: GNOME Shell 3.36.1 / GNOME Shell 3.36.4 (differs between runs)
Display Server: X Server 1.20.8
Display Driver: modesetting 1.20.8
OpenGL: 4.6 Mesa 20.0.4
Compiler: GCC 9.3.0
File-System: ext4
Screen Resolution: 1366x768

Compiler Details: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none,hsa --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details: Scaling Governor: intel_pstate powersave - CPU Microcode: 0x2e

Python Details: Python 3.8.2

Security Details:
itlb_multihit: KVM: Mitigation of Split huge pages
l1tf: Mitigation of PTE Inversion; VMX: conditional cache flushes, SMT vulnerable
mds: Mitigation of Clear buffers; SMT vulnerable
meltdown: Mitigation of PTI
spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp
spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
spectre_v2: Mitigation of Full generic retpoline IBPB: conditional IBRS_FW STIBP: conditional RSB filling
tsx_async_abort: Mitigation of Clear buffers; SMT vulnerable
Core i5 Laptop - Results Overview (Run 1 / Run 2 / Run 3)

rodinia: OpenMP LavaMD (sec): 1838.834 / 1918.784 / 1841.514
astcenc: Exhaustive (sec): 1600.33 / 1601.44 / 1601.56
ecp-candle: P3B1 (sec): 2586.135 / 2551.636 / 2549.293
rodinia: OpenMP Leukocyte (sec): 645.608 / 643.287 / 643.340
build-linux-kernel: Time To Compile (sec): 567.770 / 564.364 / 569.985
namd: ATPase Simulation - 327,506 Atoms (days/ns): 11.13000 / 11.13717 / 11.12933
avifenc: 0 (sec): 489.475 / 476.985 / 477.142
avifenc: 2 (sec): 284.941 / 283.327 / 282.661
rodinia: OpenMP HotSpot3D (sec): 226.859 / 225.704 / 227.506
astcenc: Thorough (sec): 196.20 / 196.44 / 196.17
tensorflow-lite: Inception V4 (us): 20827933 / 20848367 / 20848033
tensorflow-lite: Inception ResNet V2 (us): 18852000 / 18851967 / 18853200
daphne: OpenMP - Points2Image (test cases/min): 14193.871976404 / 14197.078561502 / 14118.098715708
montage: Mosaic of M17, K band, 1.5 deg x 1.5 deg (sec): 122.011 / 122.223 / 122.073
rodinia: OpenMP CFD Solver (sec): 124.405 / 116.727 / 116.121
geekbench: CPU Multi Core - Horizon Detection (Gpixels/sec): 43.2 / 42.9 / 43.1
geekbench: CPU Multi Core - Face Detection (images/sec): 12.7 / 12.6 / 12.7
geekbench: CPU Multi Core - Gaussian Blur (Mpixels/sec): 77.8 / 77.6 / 77.8
geekbench: CPU Multi Core (score): 1633 / 1630 / 1630
onednn: Recurrent Neural Network Training - f32 - CPU (ms): 1554.43 / 1532.18 / 1628.52
build-apache: Time To Compile (sec): 74.292 / 73.879 / 74.871
tensorflow-lite: SqueezeNet (us): 1439837 / 1440517 / 1439717
geekbench: CPU Single Core - Horizon Detection (Gpixels/sec): 19.3 / 19.2 / 19.3
geekbench: CPU Single Core - Face Detection (images/sec): 5.86 / 5.92 / 5.89
geekbench: CPU Single Core - Gaussian Blur (Mpixels/sec): 32.8 / 32.1 / 31.7
geekbench: CPU Single Core (score): 789 / 785 / 785
tensorflow-lite: Mobilenet Float (us): 976575 / 976629 / 976604
tensorflow-lite: NASNet Mobile (us): 1007580 / 1007653 / 1008210
tensorflow-lite: Mobilenet Quant (us): 947522 / 944925 / 944543
onednn: IP Batch All - f32 - CPU (ms): 307.090 / 304.020 / 311.404
daphne: OpenMP - Euclidean Cluster (test cases/min): 436.88 / 436.58 / 435.60
rodinia: OpenMP Streamcluster (sec): 47.078 / 46.792 / 47.022
daphne: OpenMP - NDT Mapping (test cases/min): 427.68 / 427.55 / 427.87
onednn: Recurrent Neural Network Inference - f32 - CPU (ms): 810.738 / 811.325 / 824.358
astcenc: Medium (sec): 28.63 / 28.65 / 28.62
onednn: Matrix Multiply Batch Shapes Transformer - f32 - CPU (ms): 10.7684 / 10.2545 / 11.0660
ecp-candle: P1B2 (sec): 74.838 / 72.77 / 74.816
onednn: Deconvolution Batch deconv_1d - f32 - CPU (ms): 27.8569 / 27.7555 / 27.7758
avifenc: 8 (sec): 16.642 / 16.641 / 16.648
astcenc: Fast (sec): 10.49 / 10.49 / 10.50
onednn: IP Batch 1D - f32 - CPU (ms): 23.2964 / 22.9018 / 22.7773
avifenc: 10 (sec): 14.734 / 14.740 / 14.749
onednn: Convolution Batch Shapes Auto - f32 - CPU (ms): 34.9274 / 33.8130 / 35.0953
onednn: Deconvolution Batch deconv_3d - f32 - CPU (ms): 37.9406 / 37.7286 / 37.8419
Rodinia 3.1 - Test: OpenMP LavaMD (Seconds, Fewer Is Better)
Run 1: 1838.83 (SE +/- 1.35, N = 3)
Run 2: 1918.78 (SE +/- 64.46, N = 9)
Run 3: 1841.51 (SE +/- 1.64, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL
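The "SE +/- x, N = y" annotations throughout this report are standard errors of the mean over N trials of each run. As a rough illustration of how such a figure is derived (the per-trial times below are hypothetical; the report only publishes the resulting mean and SE):

```python
import math
import statistics

def standard_error(trials):
    """Standard error of the mean: sample standard deviation / sqrt(N)."""
    return statistics.stdev(trials) / math.sqrt(len(trials))

# Hypothetical per-trial times in seconds for one run (N = 3 trials).
trials = [1837.0, 1839.0, 1840.5]
mean = statistics.mean(trials)      # the value a chart bar would show
se = standard_error(trials)         # the "SE +/-" annotation
print(f"{mean:.2f} (SE +/- {se:.2f}, N = {len(trials)})")
```

A small SE relative to the mean (as in most sections below) indicates the trials were consistent; Run 2 of LavaMD, with SE +/- 64.46 over 9 trials, shows noticeably more variance than the other runs.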
ASTC Encoder 2.0 - Preset: Exhaustive (Seconds, Fewer Is Better)
Run 1: 1600.33 (SE +/- 1.31, N = 3)
Run 2: 1601.44 (SE +/- 0.24, N = 3)
Run 3: 1601.56 (SE +/- 0.83, N = 3)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
ECP-CANDLE 0.3 - Benchmark: P3B1 (Seconds, Fewer Is Better)
Run 1: 2586.14
Run 2: 2551.64
Run 3: 2549.29
Rodinia 3.1 - Test: OpenMP Leukocyte (Seconds, Fewer Is Better)
Run 1: 645.61 (SE +/- 1.40, N = 3)
Run 2: 643.29 (SE +/- 1.18, N = 3)
Run 3: 643.34 (SE +/- 0.92, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL
Timed Linux Kernel Compilation 5.4 - Time To Compile (Seconds, Fewer Is Better)
Run 1: 567.77 (SE +/- 1.11, N = 3)
Run 2: 564.36 (SE +/- 1.24, N = 3)
Run 3: 569.99 (SE +/- 1.19, N = 3)
NAMD 2.14 - ATPase Simulation - 327,506 Atoms (days/ns, Fewer Is Better)
Run 1: 11.13 (SE +/- 0.02, N = 3)
Run 2: 11.14 (SE +/- 0.00, N = 3)
Run 3: 11.13 (SE +/- 0.02, N = 3)
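NAMD's days/ns metric is the wall-clock time (in days) needed to simulate one nanosecond of molecular dynamics, which is why fewer is better. Inverting it gives the more intuitive throughput in nanoseconds per day; a small sketch using the Run 1 value above:

```python
def days_per_ns_to_ns_per_day(days_per_ns):
    """Invert NAMD's days/ns cost metric into ns-per-day throughput."""
    return 1.0 / days_per_ns

# Run 1 reported 11.13 days/ns on this 2-core i5-5300U:
throughput = days_per_ns_to_ns_per_day(11.13)
print(f"{throughput:.4f} ns of simulation per wall-clock day")
```

At roughly 0.09 ns/day, completing a single nanosecond of this 327,506-atom simulation on this laptop would take over a week and a half of continuous compute.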
libavif avifenc 0.7.3 - Encoder Speed: 0 (Seconds, Fewer Is Better)
Run 1: 489.48 (SE +/- 5.93, N = 3)
Run 2: 476.99 (SE +/- 0.60, N = 3)
Run 3: 477.14 (SE +/- 0.39, N = 3)
1. (CXX) g++ options: -O3 -fPIC
libavif avifenc 0.7.3 - Encoder Speed: 2 (Seconds, Fewer Is Better)
Run 1: 284.94 (SE +/- 0.43, N = 3)
Run 2: 283.33 (SE +/- 0.11, N = 3)
Run 3: 282.66 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3 -fPIC
Rodinia 3.1 - Test: OpenMP HotSpot3D (Seconds, Fewer Is Better)
Run 1: 226.86 (SE +/- 1.00, N = 3)
Run 2: 225.70 (SE +/- 0.94, N = 3)
Run 3: 227.51 (SE +/- 0.26, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL
ASTC Encoder 2.0 - Preset: Thorough (Seconds, Fewer Is Better)
Run 1: 196.20 (SE +/- 0.12, N = 3)
Run 2: 196.44 (SE +/- 0.20, N = 3)
Run 3: 196.17 (SE +/- 0.05, N = 3)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
TensorFlow Lite 2020-08-23 - Model: Inception V4 (Microseconds, Fewer Is Better)
Run 1: 20827933 (SE +/- 3349.79, N = 3)
Run 2: 20848367 (SE +/- 19070.95, N = 3)
Run 3: 20848033 (SE +/- 17197.80, N = 3)
TensorFlow Lite 2020-08-23 - Model: Inception ResNet V2 (Microseconds, Fewer Is Better)
Run 1: 18852000 (SE +/- 1331.67, N = 3)
Run 2: 18851967 (SE +/- 5166.99, N = 3)
Run 3: 18853200 (SE +/- 1823.00, N = 3)
Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Points2Image (Test Cases Per Minute, More Is Better)
Run 1: 14193.87 (SE +/- 20.11, N = 3)
Run 2: 14197.08 (SE +/- 6.44, N = 3)
Run 3: 14118.10 (SE +/- 7.85, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp
Montage Astronomical Image Mosaic Engine 6.0 - Mosaic of M17, K band, 1.5 deg x 1.5 deg (Seconds, Fewer Is Better)
Run 1: 122.01 (SE +/- 0.11, N = 3)
Run 2: 122.22 (SE +/- 0.09, N = 3)
Run 3: 122.07 (SE +/- 0.05, N = 3)
1. (CC) gcc options: -std=gnu99 -lcfitsio -lm -O2
Rodinia 3.1 - Test: OpenMP CFD Solver (Seconds, Fewer Is Better)
Run 1: 124.41 (SE +/- 0.50, N = 3)
Run 2: 116.73 (SE +/- 1.05, N = 3)
Run 3: 116.12 (SE +/- 1.09, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL
Geekbench 5 - Test: CPU Multi Core - Horizon Detection (Gpixels/sec, More Is Better)
Run 1: 43.2 (SE +/- 0.03, N = 3)
Run 2: 42.9 (SE +/- 0.21, N = 3)
Run 3: 43.1 (SE +/- 0.00, N = 3)
Geekbench 5 - Test: CPU Multi Core - Face Detection (images/sec, More Is Better)
Run 1: 12.7 (SE +/- 0.00, N = 3)
Run 2: 12.6 (SE +/- 0.03, N = 3)
Run 3: 12.7 (SE +/- 0.03, N = 3)
Geekbench 5 - Test: CPU Multi Core - Gaussian Blur (Mpixels/sec, More Is Better)
Run 1: 77.8 (SE +/- 0.19, N = 3)
Run 2: 77.6 (SE +/- 0.15, N = 3)
Run 3: 77.8 (SE +/- 0.18, N = 3)
Geekbench 5 - Test: CPU Multi Core (Score, More Is Better)
Run 1: 1633 (SE +/- 1.33, N = 3)
Run 2: 1630 (SE +/- 2.19, N = 3)
Run 3: 1630
oneDNN 1.5 - Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 1554.43 (SE +/- 23.16, N = 3; MIN: 1515.46)
Run 2: 1532.18 (SE +/- 9.40, N = 3; MIN: 1501.04)
Run 3: 1628.52 (SE +/- 19.04, N = 15; MIN: 1523.42)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
Timed Apache Compilation 2.4.41 - Time To Compile (Seconds, Fewer Is Better)
Run 1: 74.29 (SE +/- 0.07, N = 3)
Run 2: 73.88 (SE +/- 0.05, N = 3)
Run 3: 74.87 (SE +/- 0.04, N = 3)
TensorFlow Lite 2020-08-23 - Model: SqueezeNet (Microseconds, Fewer Is Better)
Run 1: 1439837 (SE +/- 221.84, N = 3)
Run 2: 1440517 (SE +/- 632.46, N = 3)
Run 3: 1439717 (SE +/- 29.63, N = 3)
Geekbench 5 - Test: CPU Single Core - Horizon Detection (Gpixels/sec, More Is Better)
Run 1: 19.3 (SE +/- 0.06, N = 3)
Run 2: 19.2 (SE +/- 0.09, N = 3)
Run 3: 19.3 (SE +/- 0.03, N = 3)
Geekbench 5 - Test: CPU Single Core - Face Detection (images/sec, More Is Better)
Run 1: 5.86 (SE +/- 0.07, N = 3)
Run 2: 5.92 (SE +/- 0.01, N = 3)
Run 3: 5.89 (SE +/- 0.00, N = 3)
Geekbench 5 - Test: CPU Single Core - Gaussian Blur (Mpixels/sec, More Is Better)
Run 1: 32.8 (SE +/- 1.39, N = 3)
Run 2: 32.1 (SE +/- 0.37, N = 3)
Run 3: 31.7 (SE +/- 0.34, N = 3)
Geekbench 5 - Test: CPU Single Core (Score, More Is Better)
Run 1: 789 (SE +/- 0.67, N = 3)
Run 2: 785 (SE +/- 0.58, N = 3)
Run 3: 785 (SE +/- 1.00, N = 3)
TensorFlow Lite 2020-08-23 - Model: Mobilenet Float (Microseconds, Fewer Is Better)
Run 1: 976575 (SE +/- 151.72, N = 3)
Run 2: 976629 (SE +/- 93.51, N = 3)
Run 3: 976604 (SE +/- 73.18, N = 3)
TensorFlow Lite 2020-08-23 - Model: NASNet Mobile (Microseconds, Fewer Is Better)
Run 1: 1007580 (SE +/- 283.08, N = 3)
Run 2: 1007653 (SE +/- 255.36, N = 3)
Run 3: 1008210 (SE +/- 152.75, N = 3)
TensorFlow Lite 2020-08-23 - Model: Mobilenet Quant (Microseconds, Fewer Is Better)
Run 1: 947522 (SE +/- 2867.44, N = 3)
Run 2: 944925 (SE +/- 231.29, N = 3)
Run 3: 944543 (SE +/- 262.66, N = 3)
oneDNN 1.5 - Harness: IP Batch All - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 307.09 (SE +/- 0.82, N = 3; MIN: 301.63)
Run 2: 304.02 (SE +/- 0.58, N = 3; MIN: 300.13)
Run 3: 311.40 (SE +/- 1.46, N = 3; MIN: 302.3)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: Euclidean Cluster (Test Cases Per Minute, More Is Better)
Run 1: 436.88 (SE +/- 0.43, N = 3)
Run 2: 436.58 (SE +/- 0.77, N = 3)
Run 3: 435.60 (SE +/- 0.03, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp
Rodinia 3.1 - Test: OpenMP Streamcluster (Seconds, Fewer Is Better)
Run 1: 47.08 (SE +/- 0.02, N = 3)
Run 2: 46.79 (SE +/- 0.03, N = 3)
Run 3: 47.02 (SE +/- 0.05, N = 3)
1. (CXX) g++ options: -O2 -lOpenCL
Darmstadt Automotive Parallel Heterogeneous Suite - Backend: OpenMP - Kernel: NDT Mapping (Test Cases Per Minute, More Is Better)
Run 1: 427.68 (SE +/- 0.20, N = 3)
Run 2: 427.55 (SE +/- 0.32, N = 3)
Run 3: 427.87 (SE +/- 0.72, N = 3)
1. (CXX) g++ options: -O3 -std=c++11 -fopenmp
oneDNN 1.5 - Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 810.74 (SE +/- 3.35, N = 3; MIN: 803.3)
Run 2: 811.33 (SE +/- 7.02, N = 3; MIN: 798.93)
Run 3: 824.36 (SE +/- 4.82, N = 3; MIN: 810.25)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
ASTC Encoder 2.0 - Preset: Medium (Seconds, Fewer Is Better)
Run 1: 28.63 (SE +/- 0.01, N = 3)
Run 2: 28.65 (SE +/- 0.02, N = 3)
Run 3: 28.62 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
oneDNN 1.5 - Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 10.77 (SE +/- 0.04, N = 3; MIN: 10.23)
Run 2: 10.25 (SE +/- 0.02, N = 3; MIN: 9.98)
Run 3: 11.07 (SE +/- 0.12, N = 15; MIN: 10.02)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
ECP-CANDLE 0.3 - Benchmark: P1B2 (Seconds, Fewer Is Better)
Run 1: 74.84
Run 2: 72.77
Run 3: 74.82
oneDNN 1.5 - Harness: Deconvolution Batch deconv_1d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 27.86 (SE +/- 0.05, N = 3; MIN: 27.51)
Run 2: 27.76 (SE +/- 0.03, N = 3; MIN: 27.42)
Run 3: 27.78 (SE +/- 0.11, N = 3; MIN: 27.39)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
libavif avifenc 0.7.3 - Encoder Speed: 8 (Seconds, Fewer Is Better)
Run 1: 16.64 (SE +/- 0.02, N = 3)
Run 2: 16.64 (SE +/- 0.01, N = 3)
Run 3: 16.65 (SE +/- 0.02, N = 3)
1. (CXX) g++ options: -O3 -fPIC
ASTC Encoder 2.0 - Preset: Fast (Seconds, Fewer Is Better)
Run 1: 10.49 (SE +/- 0.01, N = 3)
Run 2: 10.49 (SE +/- 0.00, N = 3)
Run 3: 10.50 (SE +/- 0.00, N = 3)
1. (CXX) g++ options: -std=c++14 -fvisibility=hidden -O3 -flto -mfpmath=sse -mavx2 -mpopcnt -lpthread
oneDNN 1.5 - Harness: IP Batch 1D - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 23.30 (SE +/- 0.37, N = 3; MIN: 21.98)
Run 2: 22.90 (SE +/- 0.06, N = 3; MIN: 22.05)
Run 3: 22.78 (SE +/- 0.09, N = 3; MIN: 22.01)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
libavif avifenc 0.7.3 - Encoder Speed: 10 (Seconds, Fewer Is Better)
Run 1: 14.73 (SE +/- 0.01, N = 3)
Run 2: 14.74 (SE +/- 0.02, N = 3)
Run 3: 14.75 (SE +/- 0.01, N = 3)
1. (CXX) g++ options: -O3 -fPIC
oneDNN 1.5 - Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 34.93 (SE +/- 0.14, N = 3; MIN: 34.35)
Run 2: 33.81 (SE +/- 0.13, N = 3; MIN: 33.41)
Run 3: 35.10 (SE +/- 0.17, N = 3; MIN: 34.36)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
oneDNN 1.5 - Harness: Deconvolution Batch deconv_3d - Data Type: f32 - Engine: CPU (ms, Fewer Is Better)
Run 1: 37.94 (SE +/- 0.11, N = 3; MIN: 37.65)
Run 2: 37.73 (SE +/- 0.01, N = 3; MIN: 37.58)
Run 3: 37.84 (SE +/- 0.04, N = 3; MIN: 37.62)
1. (CXX) g++ options: -O3 -march=native -std=c++11 -fopenmp -msse4.1 -fPIC -pie -lpthread -ldl
Phoronix Test Suite v10.8.4