dsdfds
Intel Core i9-14900K testing with an ASUS PRIME Z790-P WIFI (1662 BIOS) and ASUS Intel RPL-S 16GB on Ubuntu 24.04 via the Phoronix Test Suite.
HTML result view exported from: https://openbenchmarking.org/result/2410191-PTS-DSDFDS2287&sor&grt.
Apache Cassandra - Test: Writes
Build2 - Time To Compile
BYTE Unix Benchmark - Computational Test: Pipe
BYTE Unix Benchmark - Computational Test: Dhrystone 2
BYTE Unix Benchmark - Computational Test: System Call
BYTE Unix Benchmark - Computational Test: Whetstone Double
LiteRT - Model: DeepLab V3
LiteRT - Model: SqueezeNet
LiteRT - Model: Inception V4
LiteRT - Model: NASNet Mobile
LiteRT - Model: Mobilenet Float
LiteRT - Model: Mobilenet Quant
LiteRT - Model: Inception ResNet V2
LiteRT - Model: Quantized COCO SSD MobileNet v1
Mobile Neural Network - Model: nasnet
Mobile Neural Network - Model: mobilenetV3
Mobile Neural Network - Model: squeezenetv1.1
Mobile Neural Network - Model: resnet-v2-50
Mobile Neural Network - Model: SqueezeNetV1.0
Mobile Neural Network - Model: MobileNetV2_224
Mobile Neural Network - Model: mobilenet-v1-1.0
Mobile Neural Network - Model: inception-v3
NAMD - Input: ATPase with 327,506 Atoms
NAMD - Input: STMV with 1,066,628 Atoms
oneDNN - Harness: IP Shapes 1D - Engine: CPU
oneDNN - Harness: IP Shapes 3D - Engine: CPU
oneDNN - Harness: Convolution Batch Shapes Auto - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_1d - Engine: CPU
oneDNN - Harness: Deconvolution Batch shapes_3d - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Training - Engine: CPU
oneDNN - Harness: Recurrent Neural Network Inference - Engine: CPU
XNNPACK - Model: FP32MobileNetV1
XNNPACK - Model: FP32MobileNetV2
XNNPACK - Model: FP32MobileNetV3Large
XNNPACK - Model: FP32MobileNetV3Small
XNNPACK - Model: FP16MobileNetV1
XNNPACK - Model: FP16MobileNetV2
XNNPACK - Model: FP16MobileNetV3Large
XNNPACK - Model: FP16MobileNetV3Small
XNNPACK - Model: QS8MobileNetV2
Phoronix Test Suite v10.8.5