| Framework | Backend | Device | Single | Half | Quant. | Results |
|---|---|---|---|---|---|---|
| TensorFlow Lite | CPU | Intel(R) N150 | | | | |
| ONNX | CPU | Intel(R) N150 | | | | |
| OpenVINO | CPU | Intel(R) N150 | | | | |
| Workload (TF Lite) | Accuracy | Score | Workload (ONNX) | Accuracy | Score | Workload (OpenVINO) | Accuracy | Score |
|---|---|---|---|---|---|---|---|---|
| Image Classification (SP) | | | Image Classification (SP) | | | Image Classification (SP) | | |
| Image Classification (HP) | | | Image Classification (HP) | | | Image Classification (HP) | | |
| Image Classification (Q) | | | Image Classification (Q) | | | Image Classification (Q) | | |
| Image Segmentation (SP) | | | Image Segmentation (SP) | | | Image Segmentation (SP) | | |
| Image Segmentation (HP) | | | Image Segmentation (HP) | | | Image Segmentation (HP) | | |
| Image Segmentation (Q) | | | Image Segmentation (Q) | | | Image Segmentation (Q) | | |
| Pose Estimation (SP) | | | Pose Estimation (SP) | | | Pose Estimation (SP) | | |
| Pose Estimation (HP) | | | Pose Estimation (HP) | | | Pose Estimation (HP) | | |
| Pose Estimation (Q) | | | Pose Estimation (Q) | | | Pose Estimation (Q) | | |
| Object Detection (SP) | | | Object Detection (SP) | | | Object Detection (SP) | | |
| Object Detection (HP) | | | Object Detection (HP) | | | Object Detection (HP) | | |
| Object Detection (Q) | | | Object Detection (Q) | | | Object Detection (Q) | | |
| Face Detection (SP) | | | Face Detection (SP) | | | Face Detection (SP) | | |
| Face Detection (HP) | | | Face Detection (HP) | | | Face Detection (HP) | | |
| Face Detection (Q) | | | Face Detection (Q) | | | Face Detection (Q) | | |
| Depth Estimation (SP) | | | Depth Estimation (SP) | | | Depth Estimation (SP) | | |
| Depth Estimation (HP) | | | Depth Estimation (HP) | | | Depth Estimation (HP) | | |
| Depth Estimation (Q) | | | Depth Estimation (Q) | | | Depth Estimation (Q) | | |
| Style Transfer (SP) | | | Style Transfer (SP) | | | Style Transfer (SP) | | |
| Style Transfer (HP) | | | Style Transfer (HP) | | | Style Transfer (HP) | | |
| Style Transfer (Q) | | | Style Transfer (Q) | | | Style Transfer (Q) | | |
| Image Super-Resolution (SP) | | | Image Super-Resolution (SP) | | | Image Super-Resolution (SP) | | |
| Image Super-Resolution (HP) | | | Image Super-Resolution (HP) | | | Image Super-Resolution (HP) | | |
| Image Super-Resolution (Q) | | | Image Super-Resolution (Q) | | | Image Super-Resolution (Q) | | |
| Text Classification (SP) | | | Text Classification (SP) | | | Text Classification (SP) | | |
| Text Classification (HP) | | | Text Classification (HP) | | | Text Classification (HP) | | |
| Text Classification (Q) | | | Text Classification (Q) | | | Text Classification (Q) | | |
| Machine Translation (SP) | | | Machine Translation (SP) | | | Machine Translation (SP) | | |
| Machine Translation (HP) | | | Machine Translation (HP) | | | Machine Translation (HP) | | |
| Machine Translation (Q) | | | Machine Translation (Q) | | | Machine Translation (Q) | | |
Install
```shell
# OpenVINO
wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
echo "deb https://apt.repos.intel.com/openvino/2025 ubuntu24 main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2025.list
apt update
apt-cache search openvino
apt install openvino
python3 /usr/share/openvino/samples/python/hello_query_device/hello_query_device.py

# NPU driver
git clone https://github.com/intel/linux-npu-driver.git
cd linux-npu-driver/
apt install -y build-essential git git-lfs cmake python3
git submodule update --init --recursive
cmake -B build -S .
cmake --build build --parallel $(nproc)
cmake --install build
rmmod intel_vpu
modprobe intel_vpu
cmake -B build -S .
cmake --install build/ --component fw-npu --prefix /

# Geekbench
mkdir Geekbench
cd Geekbench
wget https://cdn.geekbench.com/GeekbenchAI-1.3.0-Linux.tar.gz
tar xvf GeekbenchAI-1.3.0-Linux.tar.gz
cd GeekbenchAI-1.3.0-Linux/
```
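After reloading the module, a quick sanity check can save a confusing benchmark failure later. This is a sketch of my own, not part of the official install steps: the upstream linux-npu-driver normally exposes the NPU as `/dev/accel/accel0`; pass a different node path if your system uses another.

```shell
# check_npu (hypothetical helper): verify the intel_vpu module is loaded and
# the NPU device node exists. Accepts an alternate node path as argument 1.
check_npu() {
  dev="${1:-/dev/accel/accel0}"
  if lsmod 2>/dev/null | grep -q '^intel_vpu' && [ -e "$dev" ]; then
    echo "NPU ready"
  else
    echo "NPU not detected"
  fi
}

check_npu   # prints "NPU ready" or "NPU not detected"
```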
Check available frameworks
```
root@server4:~/Geekbench/GeekbenchAI-1.3.0-Linux# ./banff --ai-list
Geekbench AI 1.3.0 : https://www.geekbench.com/ai/

Geekbench AI requires an active internet connection and automatically uploads
benchmark results to the Geekbench Browser.

  Framework       | Backend | Device
1 TensorFlow Lite | 1 CPU   | 0 Intel Core i7-1355U
3 ONNX            | 1 CPU   | 0 Intel Core i7-1355U
4 OpenVINO        | 1 CPU   | 0 13th Gen Intel(R) Core(TM) i7-1355U
4 OpenVINO        | 2 GPU   | 1 Intel(R) Iris(R) Xe Graphics (iGPU)
```
Help
```
root@server5:/storage/apps/Geekbench/GeekbenchAI-1.3.0-Linux# ./banff --help
Geekbench AI 1.3.0 : https://www.geekbench.com/ai/

Usage: ./banff [ options ]

Options:
  -h, --help            print this message

AI Benchmark Options:
  --ai                  run the AI benchmark
  --ai-framework [ID]   use AI framework ID
  --ai-backend [ID]     use AI backend ID
  --ai-device [ID]      use AI device ID
  --ai-list             list available AI settings

If no options are given, the default action is to run the inference benchmark.
```
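The IDs from `--ai-list` plug straight into these flags. Below is a small helper of my own (`banff_cmd` is hypothetical, not part of Geekbench) that composes the invocation; it only prints the command line, so it is safe to try before running the real benchmark from the GeekbenchAI directory.

```shell
# banff_cmd (hypothetical helper): print a banff invocation for the given
# framework, backend, and device IDs; pipe the output to sh to execute it.
banff_cmd() {
  echo "./banff --ai --ai-framework $1 --ai-backend $2 --ai-device $3"
}

# OpenVINO (framework 4) on the iGPU (backend 2, device 1) from the listing above:
banff_cmd 4 2 1
```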
Verify the NPU driver module
```
root@server4:~/linux-npu-driver# modinfo intel_vpu
filename:       /lib/modules/6.8.0-57-generic/kernel/drivers/accel/ivpu/intel_vpu.ko.zst
version:        1.0.
license:        GPL and additional rights
description:    Driver for Intel NPU (Neural Processing Unit)
author:         Intel Corporation
firmware:       intel/vpu/vpu_40xx_v0.0.bin
firmware:       intel/vpu/vpu_37xx_v0.0.bin
srcversion:     853217D6461C2C5899F4F14
alias:          pci:v00008086d0000643Esv*sd*bc*sc*i*
alias:          pci:v00008086d0000AD1Dsv*sd*bc*sc*i*
alias:          pci:v00008086d00007D1Dsv*sd*bc*sc*i*
depends:
retpoline:      Y
intree:         Y
name:           intel_vpu
```
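The `firmware:` lines above name the blobs the module loads at probe time, which the `cmake --install build/ --component fw-npu --prefix /` step is supposed to place under `/lib/firmware`. A quick check (sketch; `fw_check` is my own helper, not a driver tool):

```shell
# fw_check (hypothetical helper): report firmware blobs missing from a root dir.
# Arg 1 is the firmware root, remaining args are paths relative to it.
fw_check() {
  fwdir="$1"; shift
  missing=0
  for fw in "$@"; do
    [ -e "$fwdir/$fw" ] || { echo "MISSING $fw"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "all firmware present"
}

# On a real system, feed it the list straight from modinfo:
#   fw_check /lib/firmware $(modinfo -F firmware intel_vpu)
```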