Framework | Backend | Device | Single | Half | Quant. | Results
TensorFlow Lite | CPU | Intel Core Ultra 9 185H | 1971 | 2037 | 1382 | https://browser.geekbench.com/ai/v1/259291
ONNX | CPU | Intel Core Ultra 9 185H | 2138 | 643 | 5634 |
OpenVINO | CPU | Intel Core Ultra 9 185H | | | |
OpenVINO | GPU | Intel(R) Arc(TM) Graphics (iGPU) | | | |



Workload | TF Lite Accuracy | TF Lite Score | ONNX Accuracy | ONNX Score | OpenVINO Accuracy | OpenVINO Score
Image Classification (SP) | 100% | 1510 (280.9 IPS) | 100% | 1106 (205.7 IPS) | |
Image Classification (HP) | 100% | 1428 (265.6 IPS) | 100% | 133 (24.6 IPS) | |
Image Classification (Q) | 99% | 993 (185.1 IPS) | 97% | 4177 (779.5 IPS) | |
Image Segmentation (SP) | 100% | 2253 (36.5 IPS) | 100% | 1141 (18.5 IPS) | |
Image Segmentation (HP) | 100% | 2243 (36.4 IPS) | 100% | 209 (3.38 IPS) | |
Image Segmentation (Q) | 98% | 1035 (16.8 IPS) | 99% | 2658 (43.2 IPS) | |
Pose Estimation (SP) | 100% | 2357 (2.75 IPS) | 100% | 3668 (4.28 IPS) | |
Pose Estimation (HP) | 100% | 2309 (2.69 IPS) | 100% | 3007 (3.51 IPS) | |
Pose Estimation (Q) | 96% | 3224 (3.78 IPS) | 94% | 20021 (23.5 IPS) | |
Object Detection (SP) | 100% | 1654 (131.2 IPS) | 100% | 1544 (122.5 IPS) | |
Object Detection (HP) | 100% | 1648 (130.7 IPS) | 100% | 269 (21.3 IPS) | |
Object Detection (Q) | 85% | 1024 (82.4 IPS) | 86% | 4605 (370.1 IPS) | |
Face Detection (SP) | 100% | 3071 (36.5 IPS) | 100% | 2807 (33.4 IPS) | |
Face Detection (HP) | 100% | 3060 (36.4 IPS) | 100% | 314 (3.73 IPS) | |
Face Detection (Q) | 97% | 2278 (27.2 IPS) | 97% | 12436 (148.3 IPS) | |
Depth Estimation (SP) | 100% | 2317 (17.9 IPS) | 100% | 4220 (32.5 IPS) | |
Depth Estimation (HP) | 99% | 2507 (19.3 IPS) | 99% | 1121 (8.64 IPS) | |
Depth Estimation (Q) | 63% | 1964 (18.4 IPS) | 78% | 13848 (110.5 IPS) | |
Style Transfer (SP) | 100% | 2892 (3.72 IPS) | 100% | 9110 (11.7 IPS) | |
Style Transfer (HP) | 100% | 2928 (3.76 IPS) | 100% | 7498 (9.64 IPS) | |
Style Transfer (Q) | 98% | 5650 (7.29 IPS) | 98% | 17976 (23.2 IPS) | |
Image Super-Resolution (SP) | 100% | 1494 (55.2 IPS) | 100% | 1774 (65.5 IPS) | |
Image Super-Resolution (HP) | 100% | 1911 (70.6 IPS) | 100% | 1166 (43.1 IPS) | |
Image Super-Resolution (Q) | 97% | 1463 (54.2 IPS) | 99% | 3013 (111.6 IPS) | |
Text Classification (SP) | 100% | 1229 (1.64 KIPS) | 100% | 1105 (1.48 KIPS) | |
Text Classification (HP) | 100% | 1105 (1.47 KIPS) | 100% | 333 (444.7 IPS) | |
Text Classification (Q) | 92% | 390 (524.3 IPS) | 97% | 1083 (1.45 KIPS) | |
Machine Translation (SP) | 100% | 1771 (30.5 IPS) | 100% | 1320 (22.7 IPS) | |
Machine Translation (HP) | 100% | 2135 (36.8 IPS) | 100% | 530 (9.14 IPS) | |
Machine Translation (Q) | 58% | 520 (12.2 IPS) | 65% | 3117 (62.6 IPS) | |
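
The TF Lite and ONNX columns above come from separate banff runs, each pinned to one framework, backend and device. A minimal sketch of the invocation behind the TensorFlow Lite CPU column, assuming the IDs reported by ./banff --ai-list further down (1 = TensorFlow Lite, 1 = CPU, 0 = the Core Ultra 9 185H):

Code Block
# run the full AI benchmark against one framework/backend/device combination;
# the numeric IDs come from ./banff --ai-list on the same machine
./banff --ai --ai-framework 1 --ai-backend 1 --ai-device 0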


before some drivers were added

Code Block
root@server6:/mnt/GeekbenchAI-1.3.0-Linux# ./banff --ai-list
Geekbench AI 1.3.0 : https://www.geekbench.com/ai/

Geekbench AI requires an active internet connection and automatically uploads
benchmark results to the Geekbench Browser.

Framework     | Backend       | Device
 1 TensorFlow Lite |  1 CPU        |  0 Intel Core Ultra 9 185H
 3 ONNX       |  1 CPU        |  0 Intel Core Ultra 9 185H
 4 OpenVINO   |  1 CPU        |  0 Intel(R) Core(TM) Ultra 9 185H

Install

Code Block
# NPU driver
wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-driver-compiler-npu_1.17.0.20250508-14912879441_ubuntu24.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-fw-npu_1.17.0.20250508-14912879441_ubuntu24.04_amd64.deb
wget https://github.com/intel/linux-npu-driver/releases/download/v1.17.0/intel-level-zero-npu_1.17.0.20250508-14912879441_ubuntu24.04_amd64.deb
dpkg --purge --force-remove-reinstreq intel-driver-compiler-npu intel-fw-npu intel-level-zero-npu
apt update
apt install libtbb12
dpkg -i *.deb
wget https://github.com/oneapi-src/level-zero/releases/download/v1.21.9/level-zero_1.21.9+u24.04_amd64.deb
dpkg -i level-zero*.deb
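
Before moving on, a quick dpkg check confirms that the NPU driver, firmware and level-zero packages downloaded above actually landed:

Code Block
# verify the NPU driver, firmware and level-zero packages are installed
dpkg -l | grep -E 'npu|level-zero'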

Code Block
# OpenVino
wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
echo "deb https://apt.repos.intel.com/openvino/2025 ubuntu24 main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2025.list
apt update
apt-cache search openvino
apt install openvino
python3 /usr/share/openvino/samples/python/hello_query_device/hello_query_device.py

#Geekbench
mkdir Geekbench
cd Geekbench
wget https://cdn.geekbench.com/GeekbenchAI-1.3.0-Linux.tar.gz
tar xvf GeekbenchAI-1.3.0-Linux.tar.gz
cd GeekbenchAI-1.3.0-Linux/
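
Besides the hello_query_device.py sample above, the device list can be checked in one line through the OpenVINO Python API (a minimal sketch, assuming the apt package exposes the openvino bindings to the system python3, which the sample itself also requires):

Code Block
# list the devices OpenVINO can see (CPU, plus GPU/NPU once the drivers are in)
python3 -c "import openvino as ov; print(ov.Core().available_devices)"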

Check Available frameworks

Ubuntu 24.04 (Intel N150)

Code Block
root@server5:/storage/apps/Geekbench/GeekbenchAI-1.3.0-Linux# ./banff --ai-list
Geekbench AI 1.3.0 : https://www.geekbench.com/ai/

Geekbench AI requires an active internet connection and automatically uploads
benchmark results to the Geekbench Browser.

Framework          | Backend       | Device
 1 TensorFlow Lite |  1 CPU        |  0 Intel(R) N150
 3 ONNX            |  1 CPU        |  0 Intel(R) N150
 4 OpenVINO        |  1 CPU        |  0 Intel(R) N150

Ubuntu 25.04 (Intel N150)

Code Block
root@server5:/storage/apps/Geekbench/GeekbenchAI-1.3.0-Linux# ./banff --ai-list
Geekbench AI 1.3.0 : https://www.geekbench.com/ai/

Geekbench AI requires an active internet connection and automatically uploads
benchmark results to the Geekbench Browser.

Framework          | Backend       | Device
 1 TensorFlow Lite |  1 CPU        |  0 Intel(R) N150
 3 ONNX            |  1 CPU        |  0 Intel(R) N150
 4 OpenVINO        |  1 CPU        |  0 Intel(R) N150
 4 OpenVINO        |  2 GPU        |  1 Intel(R) Graphics (iGPU)

help

Code Block
root@server5:/storage/apps/Geekbench/GeekbenchAI-1.3.0-Linux# ./banff --help
Geekbench AI 1.3.0 : https://www.geekbench.com/ai/

Usage:

  ./banff [ options ]

Options:

  -h, --help                  print this message

AI Benchmark Options:

  --ai                        run the AI benchmark
  --ai-framework [ID]         use AI framework ID
  --ai-backend [ID]           use AI backend ID
  --ai-device [ID]            use AI device ID
  --ai-list                   list available AI settings

If no options are given, the default action is to run the inference benchmark.
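
The same switches can point the run at a different backend; for instance, a sketch of selecting OpenVINO on the integrated GPU, using the IDs shown by --ai-list on these machines (4 = OpenVINO, 2 = GPU, 1 = the iGPU):

Code Block
# IDs are machine-specific; check ./banff --ai-list first
./banff --ai --ai-framework 4 --ai-backend 2 --ai-device 1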

Check Available frameworks

Ubuntu 24.04 (Intel Core Ultra 9 185H)

Code Block
root@server6:/mnt/GeekbenchAI-1.3.0-Linux# ./banff --ai-list
Geekbench AI 1.3.0 : https://www.geekbench.com/ai/

Geekbench AI requires an active internet connection and automatically uploads
benchmark results to the Geekbench Browser.

Framework     | Backend       | Device
 1 TensorFlow Lite |  1 CPU        |  0 Intel Core Ultra 9 185H
 3 ONNX       |  1 CPU        |  0 Intel Core Ultra 9 185H
 4 OpenVINO   |  1 CPU        |  0 Intel(R) Core(TM) Ultra 9 185H
 4 OpenVINO   |  2 GPU        |  1 Intel(R) Arc(TM) Graphics (iGPU)

example run

Code Block
root@server5:/storage/apps/Geekbench/GeekbenchAI-1.3.0-Linux# ./banff --ai-framework 1
Geekbench AI 1.3.0 : https://www.geekbench.com/ai/

Geekbench AI requires an active internet connection and automatically uploads
benchmark results to the Geekbench Browser.

AI Information
  Framework                     TensorFlow Lite
  Backend                       CPU
  Device                        Intel(R) N150

System Information
  Operating System              Ubuntu 24.04.2 LTS
  Model                         GMKtec NucBoxG9
  Motherboard                   GMKtec GMKtec
  BIOS                          American Megatrends International, LLC. 5.27

CPU Information
  Name                          Intel(R) N150
  Topology                      1 Processor, 4 Cores
  Identifier                    GenuineIntel Family 6 Model 190 Stepping 0
  Base Frequency                3.60 GHz

Memory Information
  Size                          11.4 GB

  Running Image Classification (SP)
INFO: Initialized TensorFlow Lite runtime.
INFO: Applying 1 TensorFlow Lite delegate(s) lazily.
  Running Image Classification (HP)
INFO: Applying 1 TensorFlow Lite delegate(s) lazily.
  Running Image Classification (Q)
  Running Image Segmentation (SP)
INFO: Applying 1 TensorFlow Lite delegate(s) lazily.
  Running Image Segmentation (HP)
INFO: Applying 1 TensorFlow Lite delegate(s) lazily.
  Running Image Segmentation (Q)
  Running Pose Estimation (SP)
INFO: Applying 1 TensorFlow Lite delegate(s) lazily.
  Running Pose Estimation (HP)
INFO: Applying 1 TensorFlow Lite delegate(s) lazily.
  Running Pose Estimation (Q)
...



Code Block
root@server6:/mnt/GeekbenchAI-1.3.0-Linux#  python3 /usr/share/openvino/samples/python/hello_query_device/hello_query_device.py

[ INFO ] Available devices:
[ INFO ] CPU :
[ INFO ]        SUPPORTED_PROPERTIES:
[ INFO ]                AVAILABLE_DEVICES:
[ INFO ]                RANGE_FOR_ASYNC_INFER_REQUESTS: 1, 1, 1
[ INFO ]                RANGE_FOR_STREAMS: 1, 22
[ INFO ]                EXECUTION_DEVICES: CPU
[ INFO ]                FULL_DEVICE_NAME: Intel(R) Core(TM) Ultra 9 185H
[ INFO ]                OPTIMIZATION_CAPABILITIES: FP32, INT8, BIN, EXPORT_IMPORT
[ INFO ]                DEVICE_TYPE: Type.INTEGRATED
[ INFO ]                DEVICE_ARCHITECTURE: intel64
[ INFO ]                NUM_STREAMS: 1
[ INFO ]                INFERENCE_NUM_THREADS: 0
[ INFO ]                PERF_COUNT: False
[ INFO ]                INFERENCE_PRECISION_HINT: <Type: 'float32'>
[ INFO ]                PERFORMANCE_HINT: PerformanceMode.LATENCY
[ INFO ]                EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE
[ INFO ]                PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]                ENABLE_CPU_PINNING: True
[ INFO ]                ENABLE_CPU_RESERVATION: False
[ INFO ]                SCHEDULING_CORE_TYPE: SchedulingCoreType.ANY_CORE
[ INFO ]                MODEL_DISTRIBUTION_POLICY: set()
[ INFO ]                ENABLE_HYPER_THREADING: True
[ INFO ]                DEVICE_ID:
[ INFO ]                CPU_DENORMALS_OPTIMIZATION: False
[ INFO ]                LOG_LEVEL: Level.NO
[ INFO ]                CPU_SPARSE_WEIGHTS_DECOMPRESSION_RATE: 1.0
[ INFO ]                DYNAMIC_QUANTIZATION_GROUP_SIZE: 32
[ INFO ]                KV_CACHE_PRECISION: <Type: 'uint8_t'>
[ INFO ]                KEY_CACHE_PRECISION: <Type: 'uint8_t'>
[ INFO ]                VALUE_CACHE_PRECISION: <Type: 'uint8_t'>
[ INFO ]                KEY_CACHE_GROUP_SIZE: 0
[ INFO ]                VALUE_CACHE_GROUP_SIZE: 0
[ INFO ]
[ INFO ] GPU :
[ INFO ]        SUPPORTED_PROPERTIES:
[ INFO ]                AVAILABLE_DEVICES: 0
[ INFO ]                RANGE_FOR_ASYNC_INFER_REQUESTS: 1, 2, 1
[ INFO ]                RANGE_FOR_STREAMS: 1, 2
[ INFO ]                OPTIMAL_BATCH_SIZE: 1
[ INFO ]                MAX_BATCH_SIZE: 1
[ INFO ]                DEVICE_ARCHITECTURE: GPU: vendor=0x8086 arch=v12.71.4
[ INFO ]                FULL_DEVICE_NAME: Intel(R) Arc(TM) Graphics (iGPU)
[ INFO ]                DEVICE_UUID: 8680557d080000000002000000000000
[ INFO ]                DEVICE_LUID: 409a0000499a0000
[ INFO ]                DEVICE_TYPE: Type.INTEGRATED
[ INFO ]                DEVICE_GOPS: {<Type: 'float16'>: 9625.599609375, <Type: 'float32'>: 4812.7998046875, <Type: 'int8_t'>: 19251.19921875, <Type: 'uint8_t'>: 19251.19921875}
[ INFO ]                OPTIMIZATION_CAPABILITIES: FP32, BIN, FP16, INT8, EXPORT_IMPORT
[ INFO ]                GPU_DEVICE_TOTAL_MEM_SIZE: 62438608896
[ INFO ]                GPU_UARCH_VERSION: 12.71.4
[ INFO ]                GPU_EXECUTION_UNITS_COUNT: 128
[ INFO ]                GPU_MEMORY_STATISTICS: {}
[ INFO ]                PERF_COUNT: False
[ INFO ]                MODEL_PRIORITY: Priority.MEDIUM
[ INFO ]                GPU_HOST_TASK_PRIORITY: Priority.MEDIUM
[ INFO ]                GPU_QUEUE_PRIORITY: Priority.MEDIUM
[ INFO ]                GPU_QUEUE_THROTTLE: Priority.MEDIUM
[ INFO ]                GPU_ENABLE_SDPA_OPTIMIZATION: True
[ INFO ]                GPU_ENABLE_LOOP_UNROLLING: True
[ INFO ]                GPU_DISABLE_WINOGRAD_CONVOLUTION: False
[ INFO ]                CACHE_DIR:
[ INFO ]                CACHE_MODE: CacheMode.OPTIMIZE_SPEED
[ INFO ]                PERFORMANCE_HINT: PerformanceMode.LATENCY
[ INFO ]                EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE
[ INFO ]                COMPILATION_NUM_THREADS: 22
[ INFO ]                NUM_STREAMS: 1
[ INFO ]                PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ]                INFERENCE_PRECISION_HINT: <Type: 'float16'>
[ INFO ]                ENABLE_CPU_PINNING: False
[ INFO ]                ENABLE_CPU_RESERVATION: False
[ INFO ]                DEVICE_ID: 0
[ INFO ]                DYNAMIC_QUANTIZATION_GROUP_SIZE: 0
[ INFO ]                ACTIVATIONS_SCALE_FACTOR: -1.0
[ INFO ]                WEIGHTS_PATH:
[ INFO ]                CACHE_ENCRYPTION_CALLBACKS: UNSUPPORTED TYPE
[ INFO ]                KV_CACHE_PRECISION: <Type: 'dynamic'>
[ INFO ]                MODEL_PTR: UNSUPPORTED TYPE
[ INFO ]
[ INFO ] NPU :
[ INFO ]        SUPPORTED_PROPERTIES:
[ INFO ]                AVAILABLE_DEVICES: 3720
[ INFO ]                CACHE_DIR:
[ INFO ]                COMPILATION_NUM_THREADS: 22
[ INFO ]                DEVICE_ARCHITECTURE: 3720
[ INFO ]                DEVICE_GOPS: {<Type: 'bfloat16'>: 0.0, <Type: 'float16'>: 4300.7998046875, <Type: 'float32'>: 0.0, <Type: 'int8_t'>: 8601.599609375, <Type: 'uint8_t'>: 8601.599609375}
[ INFO ]                DEVICE_ID:
[ INFO ]                DEVICE_PCI_INFO: {domain: 0 bus: 0 device: 0xb function: 0}
[ INFO ]                DEVICE_TYPE: Type.INTEGRATED
[ INFO ]                DEVICE_UUID: 80d1d11eb73811eab3de0242ac130004
[ INFO ]                ENABLE_CPU_PINNING: False
[ INFO ]                EXECUTION_DEVICES: NPU
[ INFO ]                EXECUTION_MODE_HINT: ExecutionMode.PERFORMANCE
[ INFO ]                FULL_DEVICE_NAME: Intel(R) AI Boost
[ INFO ]                INFERENCE_PRECISION_HINT: <Type: 'float16'>
[ INFO ]                LOG_LEVEL: Level.ERR
[ INFO ]                MODEL_PRIORITY: Priority.MEDIUM
[ INFO ]                NPU_BYPASS_UMD_CACHING: False
[ INFO ]                NPU_COMPILATION_MODE_PARAMS:
[ INFO ]                NPU_COMPILER_DYNAMIC_QUANTIZATION: False
[ INFO ]                NPU_COMPILER_VERSION: 458772
[ INFO ]                NPU_DEFER_WEIGHTS_LOAD: False
[ INFO ]                NPU_DEVICE_ALLOC_MEM_SIZE: 0
[ INFO ]                NPU_DEVICE_TOTAL_MEM_SIZE: 66926030848
[ INFO ]                NPU_DRIVER_VERSION: 1746727061
[ INFO ]                NPU_MAX_TILES: 2
[ INFO ]                NPU_QDQ_OPTIMIZATION: False
[ INFO ]                NPU_TILES: -1
[ INFO ]                NPU_TURBO: False
[ INFO ]                NUM_STREAMS: 1
[ INFO ]                OPTIMAL_NUMBER_OF_INFER_REQUESTS: 1
[ INFO ]                OPTIMIZATION_CAPABILITIES: FP16, INT8, EXPORT_IMPORT
[ INFO ]                PERFORMANCE_HINT: PerformanceMode.LATENCY
[ INFO ]                PERFORMANCE_HINT_NUM_REQUESTS: 1
[ INFO ]                PERF_COUNT: False
[ INFO ]                RANGE_FOR_ASYNC_INFER_REQUESTS: 1, 10, 1
[ INFO ]                RANGE_FOR_STREAMS: 1, 4
[ INFO ]                WEIGHTS_PATH:
[ INFO ]                WORKLOAD_TYPE: WorkloadType.DEFAULT

...
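
The property names in the dump above can also be read one at a time, which is handier for scripting than parsing the full listing (same OpenVINO Python API as hello_query_device; the device and property names are the ones printed above):

Code Block
# read a single property for one device instead of dumping everything
python3 -c "import openvino as ov; print(ov.Core().get_property('GPU', 'FULL_DEVICE_NAME'))"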