core-i5-12400-april

Intel Core i5-12400 testing with an ASRock B660M-HDV (3.02 BIOS) and llvmpipe on Ubuntu 22.04 via the Phoronix Test Suite.

Compare your own system(s) to this result file with the Phoronix Test Suite by running the command: phoronix-test-suite benchmark 2204077-NE-COREI512423

Run Management

Result Identifier     Date Run        Test Duration
A                     April 06 2022   1 Hour, 59 Minutes
Intel Core i5-12400   April 06 2022   6 Minutes
B                     April 06 2022   5 Hours, 40 Minutes
C                     April 07 2022   5 Hours, 38 Minutes

core-i5-12400-april

Processor: Intel Core i5-12400 @ 5.60GHz (6 Cores / 12 Threads)
Motherboard: ASRock B660M-HDV (3.02 BIOS)
Chipset: Intel Device 7aa7
Memory: 16GB
Disk: 512GB Sabrent
Graphics: llvmpipe
Audio: Realtek ALC897
Network: Intel
OS: Ubuntu 22.04
Kernel: 5.15.0-18-generic (x86_64)
Desktop: GNOME Shell 41.3
Display Server: X Server 1.20.14
OpenGL: 4.5 Mesa 21.2.2 (LLVM 12.0.1 256 bits) on A; 4.5 Mesa 22.0.1 (LLVM 13.0.1 256 bits) on the later runs
Vulkan: 1.1.182 on A; 1.2.204 on the later runs
Compiler: GCC 11.2.0
File-System: ext4
Screen Resolution: 1920x1080

Kernel Details
- Transparent Huge Pages: madvise

Compiler Details
- A: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-iOLsLC/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-iOLsLC/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v
- Intel Core i5-12400, B, C: --build=x86_64-linux-gnu --disable-vtable-verify --disable-werror --enable-bootstrap --enable-cet --enable-checking=release --enable-clocale=gnu --enable-default-pie --enable-gnu-unique-object --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --enable-libphobos-checking=release --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-link-serialization=2 --enable-multiarch --enable-multilib --enable-nls --enable-objc-gc=auto --enable-offload-targets=nvptx-none=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-gBFGDP/gcc-11-11.2.0/debian/tmp-gcn/usr --enable-plugin --enable-shared --enable-threads=posix --host=x86_64-linux-gnu --program-prefix=x86_64-linux-gnu- --target=x86_64-linux-gnu --with-abi=m64 --with-arch-32=i686 --with-build-config=bootstrap-lto-lean --with-default-libstdcxx-abi=new --with-gcc-major-version-only --with-multilib-list=m32,m64,mx32 --with-target-system-zlib=auto --with-tune=generic --without-cuda-driver -v

Processor Details
- A: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x18 - Thermald 2.4.7
- Intel Core i5-12400, B, C: Scaling Governor: intel_pstate powersave (EPP: balance_performance) - CPU Microcode: 0x18 - Thermald 2.4.9

Java Details
- A, B, C: OpenJDK Runtime Environment (build 11.0.14.1+1-Ubuntu-0ubuntu1)

Python Details
- A: Python 3.9.12
- B, C: Python 3.10.4

Security Details
- itlb_multihit: Not affected
- l1tf: Not affected
- mds: Not affected
- meltdown: Not affected
- spec_store_bypass: Mitigation of SSB disabled via prctl and seccomp
- spectre_v1: Mitigation of usercopy/swapgs barriers and __user pointer sanitization
- spectre_v2: Mitigation of Enhanced IBRS IBPB: conditional RSB filling
- srbds: Not affected
- tsx_async_abort: Not affected

core-i5-12400-april: condensed results table. The individual results are charted in the sections below, with the exception of the following, which appear only in this table:

Timed Wasmer Compilation, Time To Compile (Seconds, fewer is better): C: 77.817, B: 77.914, A: 85.485
RocksDB, Random Read (Op/s, more is better): C: 49606865, B: 49594691, A: 45697206
RocksDB, Update Random (Op/s, more is better): C: 743135, B: 742880, A: 679780
RocksDB, Read While Writing (Op/s, more is better): C: 1598728, B: 1558793, A: 1467166
RocksDB, Read Random Write Random (Op/s, more is better): C: 1621120, B: 1631656, A: 1497554
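The condensed table spans many heterogeneous benchmarks. Overview scores for result files like this one are typically built by normalizing each test against a baseline run and taking the geometric mean of the ratios, since the geometric mean is insensitive to each test's choice of units. A minimal sketch in Python, using three of the B-versus-A results from this file; the helper names are illustrative, not part of the Phoronix Test Suite:

```python
from math import prod

def normalize(value, baseline, higher_is_better):
    # Express a result as a speedup ratio versus the baseline run.
    # For "fewer is better" metrics (times), invert so bigger is better.
    return value / baseline if higher_is_better else baseline / value

def geomean(ratios):
    # Geometric mean: the n-th root of the product of the ratios.
    return prod(ratios) ** (1.0 / len(ratios))

# Run B versus run A on three results from this file:
ratios = [
    normalize(6076, 5745, True),     # ONNX GPT-2 Parallel, inferences/min
    normalize(39.11, 42.43, False),  # Timed MPlayer Compilation, seconds
    normalize(9.890, 10.955, False), # Parallel BZIP2 Compression, seconds
]
print(round(geomean(ratios), 3))  # → 1.083, i.e. B ~8.3% ahead overall
```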

ONNX Runtime

ONNX Runtime is developed by Microsoft and partners as an open-source, cross-platform, high-performance machine learning inferencing and training accelerator. This test profile runs the ONNX Runtime with various models available from the ONNX Zoo. Learn more via the OpenBenchmarking.org test page.

ONNX Runtime 1.11, Model: GPT-2 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better):
  C: 6049 (SE +/- 8.66, N = 3)
  B: 6076 (SE +/- 5.13, N = 3)
  A: 5745
  Compiled with (CXX) g++ options: -ffunction-sections -fdata-sections -march=native -mtune=native -O3 -flto -fno-fat-lto-objects -ldl -lrt
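The "SE +/- x, N = y" annotations on these charts are standard errors of the mean over y runs. A minimal sketch of that computation; the three-run sample below is hypothetical, not taken from this file:

```python
from statistics import stdev
from math import sqrt

def standard_error(samples):
    # SE of the mean: sample standard deviation divided by sqrt(N).
    return stdev(samples) / sqrt(len(samples))

# Hypothetical three-run sample for one benchmark:
runs = [6040.0, 6049.0, 6058.0]
print(f"SE +/- {standard_error(runs):.2f}, N = {len(runs)}")  # → SE +/- 5.20, N = 3
```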

ONNX Runtime 1.11, Model: GPT-2 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
  C: 7194 (SE +/- 2.52, N = 3)
  B: 7205 (SE +/- 6.71, N = 3)
  A: 6878

ONNX Runtime 1.11, Model: yolov4 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better):
  C: 301 (SE +/- 1.36, N = 3)
  B: 301 (SE +/- 0.88, N = 3)
  A: 277

ONNX Runtime 1.11, Model: yolov4 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
  C: 309 (SE +/- 0.17, N = 3)
  B: 309 (SE +/- 0.17, N = 3)
  A: 292

ONNX Runtime 1.11, Model: bertsquad-12 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better):
  C: 438 (SE +/- 3.68, N = 12)
  B: 443 (SE +/- 4.83, N = 12)
  A: 427

ONNX Runtime 1.11, Model: bertsquad-12 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
  C: 458 (SE +/- 0.17, N = 3)
  B: 458 (SE +/- 0.17, N = 3)
  A: 439

ONNX Runtime 1.11, Model: fcn-resnet101-11 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better):
  C: 57 (SE +/- 0.17, N = 3)
  B: 56 (SE +/- 0.44, N = 3)
  A: 44

ONNX Runtime 1.11, Model: fcn-resnet101-11 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
  C: 44 (SE +/- 0.00, N = 3)
  B: 44 (SE +/- 0.00, N = 3)
  A: 42

ONNX Runtime 1.11, Model: ArcFace ResNet-100 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better):
  C: 944 (SE +/- 2.96, N = 3)
  B: 944 (SE +/- 3.33, N = 3)
  A: 853

ONNX Runtime 1.11, Model: ArcFace ResNet-100 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
  C: 884 (SE +/- 0.17, N = 3)
  B: 884 (SE +/- 0.00, N = 3)
  A: 869

ONNX Runtime 1.11, Model: super-resolution-10 - Device: CPU - Executor: Parallel (Inferences Per Minute, more is better):
  C: 3257 (SE +/- 11.90, N = 3)
  B: 3247 (SE +/- 4.73, N = 3)
  A: 2845

ONNX Runtime 1.11, Model: super-resolution-10 - Device: CPU - Executor: Standard (Inferences Per Minute, more is better):
  C: 2963 (SE +/- 0.67, N = 3)
  B: 2940 (SE +/- 12.08, N = 3)
  A: 2822
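Runs B and C consistently lead run A across the ONNX results above. The relative gap follows directly from the charted values; the helper below is an illustrative sketch, not part of any benchmark tool:

```python
def pct_faster(new, old):
    # Percent improvement of `new` over `old` for a more-is-better metric.
    return (new - old) / old * 100.0

# Inferences per minute from the ONNX Runtime charts above (B vs. A):
print(round(pct_faster(6076, 5745), 1))  # GPT-2, CPU, Parallel → 5.8
print(round(pct_faster(3247, 2845), 1))  # super-resolution-10, CPU, Parallel → 14.1
```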

oneDNN

This is a test of Intel oneDNN, an Intel-optimized library for Deep Neural Networks, making use of its built-in benchdnn functionality. The result is the total perf time reported. Intel oneDNN was formerly known as DNNL (Deep Neural Network Library) and MKL-DNN before being rebranded as part of Intel oneAPI. Learn more via the OpenBenchmarking.org test page.

oneDNN 2.6, Harness: IP Shapes 1D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  C: 3.73962 (SE +/- 0.00589, N = 3)
  B: 3.74052 (SE +/- 0.00457, N = 3)
  A: 113.72400
  MIN: 51.18
  Compiled with (CXX) g++ options: -O3 -march=native -fopenmp -msse4.1 -fPIC -std=c++11 -pie -ldl

oneDNN 2.6, Harness: IP Shapes 3D - Data Type: f32 - Engine: CPU (ms, fewer is better):
  C: 9.84082 (SE +/- 0.00524, N = 3; MIN: 9.41)
  B: 9.90106 (SE +/- 0.01269, N = 3; MIN: 9.49)
  A: 64.37370 (MIN: 14.76)

oneDNN 2.6, Harness: IP Shapes 1D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  C: 0.959336 (SE +/- 0.001531, N = 3)
  B: 0.957803 (SE +/- 0.001376, N = 3)
  A: 28.326800
  MIN: 0.92

oneDNN 2.6, Harness: IP Shapes 3D - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  C: 2.09714 (SE +/- 0.00196, N = 3)
  B: 2.10894 (SE +/- 0.00109, N = 3)
  A: 27.91720
  MIN: 2.08

Harness: IP Shapes 1D - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

Harness: IP Shapes 3D - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

oneDNN 2.6, Harness: Convolution Batch Shapes Auto - Data Type: f32 - Engine: CPU (ms, fewer is better):
  C: 14.33 (SE +/- 0.01, N = 3; MIN: 14.16)
  B: 14.33 (SE +/- 0.01, N = 3; MIN: 14.19)
  A: 47.69 (MIN: 17.03)

oneDNN 2.6, Harness: Deconvolution Batch shapes_1d - Data Type: f32 - Engine: CPU (ms, fewer is better):
  C: 8.90115 (SE +/- 0.13280, N = 15)
  B: 9.06388 (SE +/- 0.17309, N = 15)
  A: 162.85300
  MIN: 6.38

oneDNN 2.6, Harness: Deconvolution Batch shapes_3d - Data Type: f32 - Engine: CPU (ms, fewer is better):
  C: 7.81726 (SE +/- 0.10400, N = 3; MIN: 7.6)
  B: 7.69890 (SE +/- 0.00768, N = 3; MIN: 7.57)
  A: 46.67010 (MIN: 16.15)

oneDNN 2.6, Harness: Convolution Batch Shapes Auto - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  C: 13.80 (SE +/- 0.03, N = 3; MIN: 13.2)
  B: 13.76 (SE +/- 0.02, N = 3; MIN: 13.16)
  A: 50.70 (MIN: 19.83)

oneDNN 2.6, Harness: Deconvolution Batch shapes_1d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  C: 1.42738 (SE +/- 0.00246, N = 3)
  B: 1.42641 (SE +/- 0.00232, N = 3)
  A: 46.39830
  MIN: 1.4

oneDNN 2.6, Harness: Deconvolution Batch shapes_3d - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  C: 1.99365 (SE +/- 0.01869, N = 3)
  B: 1.98156 (SE +/- 0.00178, N = 3)
  A: 17.21520
  MIN: 1.97

oneDNN 2.6, Harness: Recurrent Neural Network Training - Data Type: f32 - Engine: CPU (ms, fewer is better):
  C: 4086.37 (SE +/- 12.29, N = 3; MIN: 4011.27)
  B: 4095.96 (SE +/- 3.35, N = 3; MIN: 4045.08)
  A: 29463.30 (MIN: 14417.6)

oneDNN 2.6, Harness: Recurrent Neural Network Inference - Data Type: f32 - Engine: CPU (ms, fewer is better):
  C: 2264.46 (SE +/- 3.21, N = 3)
  B: 2264.90 (SE +/- 0.80, N = 3)
  A: 34134.50
  MIN: 18540.8

oneDNN 2.6, Harness: Recurrent Neural Network Training - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  C: 4101.91 (SE +/- 3.15, N = 3)
  B: 4102.34 (SE +/- 1.65, N = 3)
  A: 34757.40
  MIN: 18514.7

Harness: Convolution Batch Shapes Auto - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

Harness: Deconvolution Batch shapes_1d - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

Harness: Deconvolution Batch shapes_3d - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

oneDNN 2.6, Harness: Recurrent Neural Network Inference - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  C: 2261.84 (SE +/- 5.98, N = 3)
  B: 2264.98 (SE +/- 1.35, N = 3)
  A: 28488.20
  MIN: 12419

oneDNN 2.6, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: f32 - Engine: CPU (ms, fewer is better):
  C: 2.69025 (SE +/- 0.00320, N = 3)
  B: 2.68966 (SE +/- 0.00256, N = 3)
  A: 71.82680
  MIN: 3.12

oneDNN 2.6, Harness: Recurrent Neural Network Training - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  C: 4098.30 (SE +/- 6.25, N = 3)
  B: 4099.42 (SE +/- 7.00, N = 3)
  A: 36805.20
  MIN: 16567

oneDNN 2.6, Harness: Recurrent Neural Network Inference - Data Type: bf16bf16bf16 - Engine: CPU (ms, fewer is better):
  C: 2270.92 (SE +/- 3.05, N = 3)
  B: 2271.69 (SE +/- 4.51, N = 3)
  A: 24467.50
  MIN: 9924.5

oneDNN 2.6, Harness: Matrix Multiply Batch Shapes Transformer - Data Type: u8s8f32 - Engine: CPU (ms, fewer is better):
  C: 1.05341 (SE +/- 0.00112, N = 3)
  B: 1.04716 (SE +/- 0.00153, N = 3)
  A: 53.70110
  MIN: 1.03
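Because the oneDNN results are times (fewer is better), run A's regression against run B is simply a ratio of times. For example, with the Recurrent Neural Network Training f32 values charted above:

```python
def slowdown(time_slow, time_fast):
    # For fewer-is-better timings, the slowdown factor is a simple ratio.
    return time_slow / time_fast

# ms per run, oneDNN RNN Training f32 (values from the charts above):
a, b = 29463.30, 4095.96
print(round(slowdown(a, b), 2))  # → 7.19, i.e. A is ~7x slower than B here
```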

Harness: Matrix Multiply Batch Shapes Transformer - Data Type: bf16bf16bf16 - Engine: CPU

A: The test run did not produce a result.

B: The test run did not produce a result.

C: The test run did not produce a result.

Java JMH

This very basic test profile runs the stock Java JMH benchmark via Maven. Learn more via the OpenBenchmarking.org test page.

Java JMH, Throughput (Ops/s, more is better):
  C: 22882301161.56
  B: 23002668637.36
  A: 20972409216.50

Timed MPlayer Compilation

This test times how long it takes to build the MPlayer open-source media player program. Learn more via the OpenBenchmarking.org test page.

Timed MPlayer Compilation 1.5, Time To Compile (Seconds, fewer is better):
  C: 39.06 (SE +/- 0.08, N = 3)
  B: 39.11 (SE +/- 0.09, N = 3)
  A: 42.43

Parallel BZIP2 Compression

This test measures the time needed to compress a file (FreeBSD-13.0-RELEASE-amd64-memstick.img) using Parallel BZIP2 compression. Learn more via the OpenBenchmarking.org test page.

Parallel BZIP2 Compression 1.1.13, FreeBSD-13.0-RELEASE-amd64-memstick.img Compression (Seconds, fewer is better):
  C: 10.031 (SE +/- 0.023, N = 3)
  B: 9.890 (SE +/- 0.106, N = 3)
  A: 10.955
  Compiled with (CXX) g++ options: -O2 -pthread -lbz2 -lpthread

AOM AV1

AOM AV1 3.3, Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
  C: 0.14 (SE +/- 0.00, N = 3)
  B: 0.14 (SE +/- 0.00, N = 3)
  A: 0.13
  Compiled with (CXX) g++ options: -O3 -std=c++11 -U_FORTIFY_SOURCE -lm

AOM AV1 3.3, Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
  C: 4.70 (SE +/- 0.00, N = 3)
  B: 4.70 (SE +/- 0.00, N = 3)
  A: 4.17

AOM AV1 3.3, Encoder Mode: Speed 6 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better):
  C: 15.02 (SE +/- 0.07, N = 3)
  B: 15.08 (SE +/- 0.13, N = 15)
  A: 13.91

AOM AV1 3.3, Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 4K (Frames Per Second, more is better):
  C: 8.92 (SE +/- 0.07, N = 3)
  B: 8.99 (SE +/- 0.00, N = 3)
  A: 8.00

AOM AV1 3.3, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better):
  C: 45.68 (SE +/- 0.07, N = 3)
  B: 45.65 (SE +/- 0.02, N = 3)
  A: 42.47

AOM AV1 3.3, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better):
  C: 68.39 (SE +/- 0.04, N = 3)
  B: 68.14 (SE +/- 0.12, N = 3)
  A: 63.16

AOM AV1 3.3, Encoder Mode: Speed 10 Realtime - Input: Bosphorus 4K (Frames Per Second, more is better):
  C: 72.41 (SE +/- 0.05, N = 3)
  B: 72.38 (SE +/- 0.14, N = 3)
  A: 61.38

AOM AV1 3.3, Encoder Mode: Speed 0 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better):
  C: 0.37 (SE +/- 0.00, N = 3)
  B: 0.37 (SE +/- 0.00, N = 3)
  A: 0.34

AOM AV1 3.3, Encoder Mode: Speed 4 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better):
  C: 10.37 (SE +/- 0.04, N = 3)
  B: 10.39 (SE +/- 0.01, N = 3)
  A: 9.58

AOM AV1 3.3, Encoder Mode: Speed 6 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better):
  C: 12.41 (SE +/- 0.10, N = 8)
  B: 12.36 (SE +/- 0.04, N = 3)
  A: 11.67

AOM AV1 3.3, Encoder Mode: Speed 6 Two-Pass - Input: Bosphorus 1080p (Frames Per Second, more is better):
  C: 26.95 (SE +/- 0.00, N = 3)
  B: 26.97 (SE +/- 0.01, N = 3)
  A: 24.11

AOM AV1 3.3, Encoder Mode: Speed 8 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better):
  C: 114.84 (SE +/- 0.29, N = 3)
  B: 116.51 (SE +/- 0.41, N = 3)
  A: 107.06

AOM AV1 3.3, Encoder Mode: Speed 9 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better):
  C: 142.90 (SE +/- 0.31, N = 3)
  B: 149.25 (SE +/- 2.69, N = 15)
  A: 124.60

AOM AV1 3.3, Encoder Mode: Speed 10 Realtime - Input: Bosphorus 1080p (Frames Per Second, more is better):
  C: 149.66 (SE +/- 1.27, N = 3)
  B: 150.33 (SE +/- 0.90, N = 3)
  A: 133.21

dav1d

Dav1d is an open-source, speedy AV1 video decoder. This test profile times how long it takes to decode sample AV1 video content. Learn more via the OpenBenchmarking.org test page.

dav1d 1.0, Video Input: Chimera 1080p (FPS, more is better):
  C: 612.41 (SE +/- 1.26, N = 3)
  B: 612.15 (SE +/- 1.17, N = 3)
  A: 561.82
  Compiled with (CC) gcc options: -pthread -lm

dav1d 1.0, Video Input: Summer Nature 4K (FPS, more is better):
  C: 192.02 (SE +/- 0.29, N = 3)
  B: 191.89 (SE +/- 0.26, N = 3)
  A: 172.71

dav1d 1.0, Video Input: Summer Nature 1080p (FPS, more is better):
  C: 796.46 (SE +/- 3.38, N = 3)
  B: 795.53 (SE +/- 1.98, N = 3)
  A: 710.02

dav1d 1.0, Video Input: Chimera 1080p 10-bit (FPS, more is better):
  C: 536.42 (SE +/- 0.74, N = 3)
  B: 536.82 (SE +/- 0.72, N = 3)
  A: 481.75
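Decoder throughput in FPS converts directly to average per-frame latency. A small sketch using the dav1d Chimera 1080p results above:

```python
def frame_time_ms(fps):
    # Average time to decode one frame, in milliseconds.
    return 1000.0 / fps

# dav1d, Chimera 1080p (FPS values from the charts above):
for label, fps in (("C", 612.41), ("B", 612.15), ("A", 561.82)):
    print(f"{label}: {frame_time_ms(fps):.2f} ms/frame")
```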

libavif avifenc

This is a test of the AOMedia libavif library testing the encoding of a JPEG image to AV1 Image Format (AVIF). Learn more via the OpenBenchmarking.org test page.

libavif avifenc 0.10, Encoder Speed: 0 (Seconds, fewer is better):
  C: 191.48 (SE +/- 0.20, N = 3)
  B: 191.08 (SE +/- 0.09, N = 3)
  Intel Core i5-12400: 189.74
  Compiled with (CXX) g++ options: -O3 -fPIC -lm

libavif avifenc 0.10, Encoder Speed: 2 (Seconds, fewer is better):
  C: 85.94 (SE +/- 0.30, N = 3)
  B: 85.97 (SE +/- 0.15, N = 3)
  Intel Core i5-12400: 85.16

libavif avifenc 0.10, Encoder Speed: 6 (Seconds, fewer is better):
  C: 12.19 (SE +/- 0.02, N = 3)
  B: 12.21 (SE +/- 0.03, N = 3)
  Intel Core i5-12400: 12.21

libavif avifenc 0.10, Encoder Speed: 6, Lossless (Seconds, fewer is better):
  C: 15.00 (SE +/- 0.03, N = 3)
  B: 14.98 (SE +/- 0.03, N = 3)
  Intel Core i5-12400: 14.92

libavif avifenc 0.10, Encoder Speed: 10, Lossless (Seconds, fewer is better):
  C: 6.257 (SE +/- 0.029, N = 3)
  B: 6.247 (SE +/- 0.026, N = 3)
  Intel Core i5-12400: 6.200

OSPray

OSPray 2.9, Benchmark: particle_volume/ao/real_time (Items Per Second, more is better):
  C: 14.16 (SE +/- 0.04, N = 3)
  B: 14.21 (SE +/- 0.03, N = 3)
  A: 12.97

OSPray 2.9, Benchmark: particle_volume/scivis/real_time (Items Per Second, more is better):
  C: 13.88 (SE +/- 0.02, N = 3)
  B: 13.89 (SE +/- 0.01, N = 3)
  A: 12.32

OSPray 2.9, Benchmark: particle_volume/pathtracer/real_time (Items Per Second, more is better):
  C: 182.04 (SE +/- 0.10, N = 3)
  B: 182.11 (SE +/- 0.02, N = 3)
  A: 163.59

OSPray 2.9, Benchmark: gravity_spheres_volume/dim_512/ao/real_time (Items Per Second, more is better):
  C: 1.98550 (SE +/- 0.00129, N = 3)
  B: 2.00038 (SE +/- 0.00584, N = 3)
  A: 1.83804

OSPray 2.9, Benchmark: gravity_spheres_volume/dim_512/scivis/real_time (Items Per Second, more is better):
  C: 1.95220 (SE +/- 0.00260, N = 3)
  B: 1.96577 (SE +/- 0.00094, N = 3)
  A: 1.79913

OSPray 2.9, Benchmark: gravity_spheres_volume/dim_512/pathtracer/real_time (Items Per Second, more is better):
  C: 2.73399 (SE +/- 0.00158, N = 3)
  B: 2.73146 (SE +/- 0.00279, N = 3)
  A: 2.47222

OSPray Studio

Intel OSPray Studio is an open-source, interactive visualization and ray-tracing software package. OSPray Studio makes use of Intel OSPray, a portable ray-tracing engine for high-performance, high-fidelity visualizations. OSPray builds off Intel's Embree and Intel SPMD Program Compiler (ISPC) components as part of the oneAPI rendering toolkit. Learn more via the OpenBenchmarking.org test page.

OSPray Studio 0.10, Camera: 1 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, fewer is better):
  C: 3108 (SE +/- 10.02, N = 3)
  B: 3114 (SE +/- 7.42, N = 3)
  A: 3398
  Compiled with (CXX) g++ options: -O3 -lm -ldl

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
  C: 3163 (SE +/- 0.33, N = 3)
  B: 3165 (SE +/- 0.58, N = 3)
  A: 3471
  1. (CXX) g++ options: -O3 -lm -ldl

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 1 - Renderer: Path Tracer (ms, Fewer Is Better)
  C: 3735 (SE +/- 2.85, N = 3)
  B: 3740 (SE +/- 3.53, N = 3)
  A: 4189
  1. (CXX) g++ options: -O3 -lm -ldl

OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
  C: 50418 (SE +/- 272.17, N = 3)
  B: 50151 (SE +/- 21.38, N = 3)
  A: 56352
  1. (CXX) g++ options: -O3 -lm -ldl

OSPray Studio 0.10 - Camera: 1 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  C: 102588 (SE +/- 84.91, N = 3)
  B: 102548 (SE +/- 146.65, N = 3)
  A: 115145
  1. (CXX) g++ options: -O3 -lm -ldl

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
  C: 50954 (SE +/- 27.83, N = 3)
  B: 50897 (SE +/- 44.19, N = 3)
  A: 57299
  1. (CXX) g++ options: -O3 -lm -ldl

OSPray Studio 0.10 - Camera: 2 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  C: 104044 (SE +/- 63.66, N = 3)
  B: 104124 (SE +/- 167.02, N = 3)
  A: 116689
  1. (CXX) g++ options: -O3 -lm -ldl

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 16 - Renderer: Path Tracer (ms, Fewer Is Better)
  C: 62054 (SE +/- 140.67, N = 3)
  B: 62054 (SE +/- 147.67, N = 3)
  A: 68568
  1. (CXX) g++ options: -O3 -lm -ldl

OSPray Studio 0.10 - Camera: 3 - Resolution: 1080p - Samples Per Pixel: 32 - Renderer: Path Tracer (ms, Fewer Is Better)
  C: 122245 (SE +/- 103.73, N = 3)
  B: 122366 (SE +/- 170.37, N = 3)
  A: 136929
  1. (CXX) g++ options: -O3 -lm -ldl

Timed Wasmer Compilation

This test times how long it takes to compile Wasmer. Wasmer is a WebAssembly runtime implementation written in the Rust programming language that supports WASI and Emscripten. This test profile builds Wasmer with the Cranelift and Singlepass compiler features enabled. Learn more via the OpenBenchmarking.org test page.
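The benchmark itself is a plain wall-clock measurement of the full build. A minimal sketch of that approach, using a hypothetical stand-in command (the real test profile runs the Wasmer cargo build, which requires a Rust toolchain):

```python
import subprocess
import sys
import time

def time_command(cmd):
    """Wall-clock a command, the way a timed-compilation benchmark does."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

# Hypothetical stand-in; the real test profile would run the Wasmer build
# (a cargo build with the Cranelift and Singlepass compilers enabled).
elapsed = time_command([sys.executable, "-c", "print('build done')"])
print(f"Time To Compile: {elapsed:.2f} seconds")
```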

Timed Wasmer Compilation 2.2 - Time To Compile (Seconds, Fewer Is Better)
  C: 77.82 (SE +/- 0.23, N = 3)
  B: 77.91 (SE +/- 0.11, N = 3)
  A: 85.49
  1. (CC) gcc options: -m64 -ldl -lxkbcommon -lgcc_s -lutil -lrt -lpthread -lm -lc -pie -nodefaultlibs

Facebook RocksDB

This is a benchmark of Facebook's RocksDB as an embeddable persistent key-value store for fast storage based on Google's LevelDB. Learn more via the OpenBenchmarking.org test page.

Facebook RocksDB 7.0.1 - Test: Random Read (Op/s, More Is Better)
  C: 49606865 (SE +/- 309988.44, N = 3)
  B: 49594691 (SE +/- 269279.25, N = 3)
  A: 45697206
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB 7.0.1 - Test: Update Random (Op/s, More Is Better)
  C: 743135 (SE +/- 847.59, N = 3)
  B: 742880 (SE +/- 1507.03, N = 3)
  A: 679780
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB 7.0.1 - Test: Read While Writing (Op/s, More Is Better)
  C: 1598728 (SE +/- 12505.02, N = 3)
  B: 1558793 (SE +/- 7170.07, N = 3)
  A: 1467166
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread

Facebook RocksDB 7.0.1 - Test: Read Random Write Random (Op/s, More Is Better)
  C: 1621120 (SE +/- 1481.37, N = 3)
  B: 1631656 (SE +/- 8702.90, N = 3)
  A: 1497554
  1. (CXX) g++ options: -O3 -march=native -pthread -fno-builtin-memcmp -fno-rtti -lpthread
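The spread between the three configurations can be read directly off these figures. As an illustration, taking the Read While Writing Op/s numbers reported above, B and C come out roughly 6% and 9% ahead of A:

```python
# Op/s figures copied from the Read While Writing results above.
results = {"A": 1467166, "B": 1558793, "C": 1598728}

baseline = results["A"]
for name in sorted(results):
    ratio = results[name] / baseline
    print(f"{name}: {results[name]} Op/s ({ratio:.1%} of A)")
```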

76 Results Shown

ONNX Runtime:
  GPT-2 - CPU - Parallel
  GPT-2 - CPU - Standard
  yolov4 - CPU - Parallel
  yolov4 - CPU - Standard
  bertsquad-12 - CPU - Parallel
  bertsquad-12 - CPU - Standard
  fcn-resnet101-11 - CPU - Parallel
  fcn-resnet101-11 - CPU - Standard
  ArcFace ResNet-100 - CPU - Parallel
  ArcFace ResNet-100 - CPU - Standard
  super-resolution-10 - CPU - Parallel
  super-resolution-10 - CPU - Standard
oneDNN:
  IP Shapes 1D - f32 - CPU
  IP Shapes 3D - f32 - CPU
  IP Shapes 1D - u8s8f32 - CPU
  IP Shapes 3D - u8s8f32 - CPU
  Convolution Batch Shapes Auto - f32 - CPU
  Deconvolution Batch shapes_1d - f32 - CPU
  Deconvolution Batch shapes_3d - f32 - CPU
  Convolution Batch Shapes Auto - u8s8f32 - CPU
  Deconvolution Batch shapes_1d - u8s8f32 - CPU
  Deconvolution Batch shapes_3d - u8s8f32 - CPU
  Recurrent Neural Network Training - f32 - CPU
  Recurrent Neural Network Inference - f32 - CPU
  Recurrent Neural Network Training - u8s8f32 - CPU
  Recurrent Neural Network Inference - u8s8f32 - CPU
  Matrix Multiply Batch Shapes Transformer - f32 - CPU
  Recurrent Neural Network Training - bf16bf16bf16 - CPU
  Recurrent Neural Network Inference - bf16bf16bf16 - CPU
  Matrix Multiply Batch Shapes Transformer - u8s8f32 - CPU
Java JMH
Timed MPlayer Compilation
Parallel BZIP2 Compression
AOM AV1:
  Speed 0 Two-Pass - Bosphorus 4K
  Speed 4 Two-Pass - Bosphorus 4K
  Speed 6 Realtime - Bosphorus 4K
  Speed 6 Two-Pass - Bosphorus 4K
  Speed 8 Realtime - Bosphorus 4K
  Speed 9 Realtime - Bosphorus 4K
  Speed 10 Realtime - Bosphorus 4K
  Speed 0 Two-Pass - Bosphorus 1080p
  Speed 4 Two-Pass - Bosphorus 1080p
  Speed 6 Realtime - Bosphorus 1080p
  Speed 6 Two-Pass - Bosphorus 1080p
  Speed 8 Realtime - Bosphorus 1080p
  Speed 9 Realtime - Bosphorus 1080p
  Speed 10 Realtime - Bosphorus 1080p
dav1d:
  Chimera 1080p
  Summer Nature 4K
  Summer Nature 1080p
  Chimera 1080p 10-bit
libavif avifenc:
  0
  2
  6
  6, Lossless
  10, Lossless
OSPray:
  particle_volume/ao/real_time
  particle_volume/scivis/real_time
  particle_volume/pathtracer/real_time
  gravity_spheres_volume/dim_512/ao/real_time
  gravity_spheres_volume/dim_512/scivis/real_time
  gravity_spheres_volume/dim_512/pathtracer/real_time
OSPray Studio:
  1 - 1080p - 1 - Path Tracer
  2 - 1080p - 1 - Path Tracer
  3 - 1080p - 1 - Path Tracer
  1 - 1080p - 16 - Path Tracer
  1 - 1080p - 32 - Path Tracer
  2 - 1080p - 16 - Path Tracer
  2 - 1080p - 32 - Path Tracer
  3 - 1080p - 16 - Path Tracer
  3 - 1080p - 32 - Path Tracer
Timed Wasmer Compilation
Facebook RocksDB:
  Rand Read
  Update Rand
  Read While Writing
  Read Rand Write Rand