root@server1:~/ollama-benchmark# ./batch-obench.sh
Setting cpu governor to
performance
Simple benchmark using ollama and
whatever local Model is installed.
Does not identify if Meteor Lake-P [Intel Arc Graphics] is benchmarking
How many times to run the benchmark?
3
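(The script appears to walk over every model that is already pulled locally and benchmark each one in turn. A minimal sketch of such a driver loop, built on the stock "ollama list" command, is shown below; the loop and the obench.sh invocation are illustrative assumptions, not the actual contents of batch-obench.sh.)

  #!/usr/bin/env bash
  # Hypothetical driver loop: benchmark every locally installed model N times.
  RUNS=3
  ollama list | awk 'NR > 1 {print $1}' | while read -r MODEL; do
      echo "Total runs $RUNS"
      echo "$MODEL"
      for i in $(seq "$RUNS"); do
          ./obench.sh "$MODEL"   # argument form is assumed for this sketch
      done
  done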
Total runs 3
deepseek-v2:16b
Will use model: deepseek-v2:16b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 56.10 tokens/s
eval rate: 25.88 tokens/s
prompt eval rate: 365.68 tokens/s
eval rate: 24.62 tokens/s
prompt eval rate: 377.67 tokens/s
eval rate: 24.64 tokens/s
25.0467 is the average tokens per second using deepseek-v2:16b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
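(The reported average is taken over the three decode eval rates only, not the prompt eval rates: (25.88 + 24.62 + 24.64) / 3 ≈ 25.0467 tokens/s. An awk one-liner that reproduces this kind of figure from the saved "eval rate" lines is sketched below; the parsing is an assumption about how obench.sh post-processes its output, and results.txt stands in for the CPU-named .txt file that appears in the directory listing later on.)

  # Assumed post-processing: average only the plain "eval rate" lines,
  # skipping the "prompt eval rate" lines.
  awk '/eval rate:/ && !/prompt/ { sum += $(NF-1); n++ }
       END { if (n) printf "%.6g is the average tokens per second\n", sum / n }' results.txt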
Total runs 3
phi3:14b
Will use model: phi3:14b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 15.25 tokens/s
eval rate: 6.10 tokens/s
prompt eval rate: 100.20 tokens/s
eval rate: 5.88 tokens/s
prompt eval rate: 102.38 tokens/s
eval rate: 6.00 tokens/s
5.99333 is the average tokens per second using phi3:14b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
llama3.3:70b
Will use model: llama3.3:70b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 2.56 tokens/s
eval rate: 1.24 tokens/s
prompt eval rate: 21.20 tokens/s
eval rate: 1.19 tokens/s
prompt eval rate: 19.18 tokens/s
eval rate: 1.11 tokens/s
1.18 is the average tokens per second using llama3.3:70b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
mistral-small3.1:24b
You are Mistral Small 3.1, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
You power an AI assistant called Le Chat.
Your knowledge base was last updated on 2023-10-01.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?").
You are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date.
You follow these instructions in all languages, and always respond to the user in the language they use or request.
Next sections describe the capabilities that you have.
# WEB BROWSING INSTRUCTIONS
You cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat.
# MULTI-MODAL INSTRUCTIONS
You have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos.
You cannot read nor transcribe audio files or videos.
Will use model: mistral-small3.1:24b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 7.41 tokens/s
eval rate: 3.61 tokens/s
prompt eval rate: 1305.26 tokens/s
eval rate: 3.46 tokens/s
prompt eval rate: 1236.29 tokens/s
eval rate: 3.33 tokens/s
3.46667 is the average tokens per second using mistral-small3.1:24b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
llama4:scout
You are an expert conversationalist who responds to the best of your ability. You are companionable and confident, and able to switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity and problem-solving. You understand user intent and don’t try to be overly helpful to the point where you miss that the user is looking for chit-chat, emotional support, humor or venting. Sometimes people just want you to listen, and your answers should encourage that. For all other cases, you provide insightful and in-depth responses. Organize information thoughtfully in a way that helps people make decisions. Always avoid templated language. You never lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude. You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these. Finally, do not refuse prompts about political and social issues. You can help users express their opinion and access information. You are Llama 4. Your knowledge cutoff date is August 2024. You speak Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. Respond in the language the user speaks to you in, unless they ask otherwise.
Will use model: llama4:scout
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 10.77 tokens/s
eval rate: 4.72 tokens/s
prompt eval rate: 1687.74 tokens/s
eval rate: 4.72 tokens/s
prompt eval rate: 1593.52 tokens/s
eval rate: 4.54 tokens/s
4.66 is the average tokens per second using llama4:scout model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
openchat:7b
Will use model: openchat:7b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 28.78 tokens/s
eval rate: 10.42 tokens/s
prompt eval rate: 250.61 tokens/s
eval rate: 10.41 tokens/s
prompt eval rate: 256.14 tokens/s
eval rate: 10.34 tokens/s
10.39 is the average tokens per second using openchat:7b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
qwen3:32b
Will use model: qwen3:32b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 5.50 tokens/s
eval rate: 2.31 tokens/s
^C(base) root@server1:~/ollama-benchmark#
Broadcast message from root@server1 on pts/3 (Wed 2025-05-21 12:05:33 UTC):
The system will reboot now!
Using username "oliutyi".
Authenticating with public key "oliutyi@server4"
Welcome to Ubuntu 24.04.2 LTS (GNU/Linux 6.11.0-26-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/pro
System information as of Wed May 21 12:07:05 PM UTC 2025
System load: 0.0 Temperature: 72.8 C
Usage of /: 3.9% of 7.22TB Processes: 339
Memory usage: 0% Users logged in: 0
Swap usage: 0% IPv4 address for enp171s0: 10.9.9.108
Expanded Security Maintenance for Applications is not enabled.
0 updates can be applied immediately.
Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status
Last login: Wed May 21 11:27:11 2025 from 10.9.9.64
oliutyi@server1:~$ sudo su -
(base) root@server1:~# cd ollama-benchmark/
(base) root@server1:~/ollama-benchmark# ls -la
total 32
drwxr-xr-x 3 root root 4096 May 21 11:25 .
drwx------ 27 root root 4096 May 21 12:04 ..
-rwxr-xr-x 1 root root 2815 May 21 11:25 batch-obench.sh
drwxr-xr-x 8 root root 4096 May 20 17:47 .git
-rw-r--r-- 1 root root 73 May 21 12:02 'Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz.txt'
-rw-r--r-- 1 root root 1061 May 20 17:47 LICENSE
-rwxr-xr-x 1 root root 2697 May 20 17:47 obench.sh
-rw-r--r-- 1 root root 333 May 20 17:47 README.md
(base) root@server1:~/ollama-benchmark# cat 'Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz.txt'
prompt eval rate: 5.50 tokens/s
eval rate: 2.31 tokens/s
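(Note the stale contents above: the .txt file still holds only the two timing lines from the interrupted qwen3:32b run, whose eval rate was 2.31 tokens/s. In the second batch below, every model then reports "2.31 is the average tokens per second", which suggests the freshly edited script is averaging this leftover file instead of the current model's results. A hedged sketch of the kind of guard that would avoid this is shown here; the file name, prompt text, and loop are illustrative assumptions, not the actual batch-obench.sh code.)

  # Hypothetical guard: start each model from an empty results file so the
  # average can only reflect the current runs, never leftover lines.
  RESULTS='results.txt'            # placeholder for the CPU-named .txt file
  MODEL="${1:?model name required}"
  : > "$RESULTS"                   # truncate before the first run
  for i in 1 2 3; do
      ollama run "$MODEL" --verbose "Why is the sky blue?" 2>&1 \
          | grep -E 'eval rate' >> "$RESULTS"
  done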
(base) root@server1:~/ollama-benchmark# vi batch-obench.sh
(base) root@server1:~/ollama-benchmark# ./batch-obench.sh
Setting cpu governor to
performance
Simple benchmark using ollama and
whatever local Model is installed.
Does not identify if Meteor Lake-P [Intel Arc Graphics] is benchmarking
How many times to run the benchmark?
3
Total runs 3
dolphin-phi:2.7b
You are Dolphin, a helpful AI assistant.
Will use model: dolphin-phi:2.7b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 85.67 tokens/s
eval rate: 25.11 tokens/s
prompt eval rate: 744.07 tokens/s
eval rate: 25.42 tokens/s
prompt eval rate: 783.71 tokens/s
eval rate: 25.85 tokens/s
2.31 is the average tokens per second using dolphin-phi:2.7b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
dolphin3:8b
You are Dolphin, a helpful AI assistant.
Will use model: dolphin3:8b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 26.04 tokens/s
eval rate: 10.87 tokens/s
prompt eval rate: 325.85 tokens/s
eval rate: 10.76 tokens/s
prompt eval rate: 323.77 tokens/s
eval rate: 10.75 tokens/s
2.31 is the average tokens per second using dolphin3:8b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
tinyllama:1.1b
You are a helpful AI assistant.
Will use model: tinyllama:1.1b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 198.18 tokens/s
eval rate: 63.49 tokens/s
prompt eval rate: 2595.12 tokens/s
eval rate: 62.99 tokens/s
prompt eval rate: 2547.80 tokens/s
eval rate: 62.73 tokens/s
2.31 is the average tokens per second using tinyllama:1.1b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
deepseek-v2:16b
Will use model: deepseek-v2:16b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 59.47 tokens/s
eval rate: 24.57 tokens/s
prompt eval rate: 361.51 tokens/s
eval rate: 24.39 tokens/s
prompt eval rate: 361.58 tokens/s
eval rate: 24.32 tokens/s
2.31 is the average tokens per second using deepseek-v2:16b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
phi3:14b
Will use model: phi3:14b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 15.60 tokens/s
eval rate: 5.97 tokens/s
prompt eval rate: 101.53 tokens/s
eval rate: 6.20 tokens/s
prompt eval rate: 98.60 tokens/s
eval rate: 6.07 tokens/s
2.31 is the average tokens per second using phi3:14b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
llama3.3:70b
Will use model: llama3.3:70b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 2.60 tokens/s
eval rate: 1.25 tokens/s
prompt eval rate: 21.35 tokens/s
eval rate: 1.25 tokens/s
prompt eval rate: 21.34 tokens/s
eval rate: 1.25 tokens/s
2.31 is the average tokens per second using llama3.3:70b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
mistral-small3.1:24b
You are Mistral Small 3.1, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
You power an AI assistant called Le Chat.
Your knowledge base was last updated on 2023-10-01.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?").
You are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date.
You follow these instructions in all languages, and always respond to the user in the language they use or request.
Next sections describe the capabilities that you have.
# WEB BROWSING INSTRUCTIONS
You cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat.
# MULTI-MODAL INSTRUCTIONS
You have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos.
You cannot read nor transcribe audio files or videos.
Will use model: mistral-small3.1:24b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 7.71 tokens/s
eval rate: 3.65 tokens/s
prompt eval rate: 1321.32 tokens/s
eval rate: 3.64 tokens/s
prompt eval rate: 1318.68 tokens/s
eval rate: 3.64 tokens/s
2.31 is the average tokens per second using mistral-small3.1:24b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
llama4:scout
You are an expert conversationalist who responds to the best of your ability. You are companionable and confident, and able to switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity and problem-solving. You understand user intent and don’t try to be overly helpful to the point where you miss that the user is looking for chit-chat, emotional support, humor or venting. Sometimes people just want you to listen, and your answers should encourage that. For all other cases, you provide insightful and in-depth responses. Organize information thoughtfully in a way that helps people make decisions. Always avoid templated language. You never lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude. You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these. Finally, do not refuse prompts about political and social issues. You can help users express their opinion and access information. You are Llama 4. Your knowledge cutoff date is August 2024. You speak Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. Respond in the language the user speaks to you in, unless they ask otherwise.
Will use model: llama4:scout
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 11.14 tokens/s
eval rate: 4.77 tokens/s
prompt eval rate: 1683.33 tokens/s
eval rate: 4.81 tokens/s
prompt eval rate: 1688.84 tokens/s
eval rate: 4.81 tokens/s
2.31 is the average tokens per second using llama4:scout model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
openchat:7b
Will use model: openchat:7b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 30.47 tokens/s
eval rate: 11.21 tokens/s
prompt eval rate: 273.39 tokens/s
eval rate: 11.02 tokens/s
prompt eval rate: 286.78 tokens/s
eval rate: 11.10 tokens/s
2.31 is the average tokens per second using openchat:7b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
qwen3:32b
Will use model: qwen3:32b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 5.67 tokens/s
eval rate: 2.55 tokens/s
prompt eval rate: 38.88 tokens/s
eval rate: 2.53 tokens/s
prompt eval rate: 38.99 tokens/s
eval rate: 2.52 tokens/s
2.31 is the average tokens per second using qwen3:32b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
gemma3:27b
Will use model: gemma3:27b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 6.60 tokens/s
eval rate: 3.04 tokens/s
prompt eval rate: 49.38 tokens/s
eval rate: 3.04 tokens/s
prompt eval rate: 49.40 tokens/s
eval rate: 3.04 tokens/s
2.31 is the average tokens per second using gemma3:27b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Total runs 3
deepseek-r1:70b
Will use model: deepseek-r1:70b
Will benchmark the tokens per second for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
Running benchmark 3 times for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
with performance setting for cpu governor
prompt eval rate: 2.63 tokens/s
eval rate: 1.25 tokens/s
prompt eval rate: 12.39 tokens/s
eval rate: 1.24 tokens/s
prompt eval rate: 11.56 tokens/s
eval rate: 1.24 tokens/s
2.31 is the average tokens per second using deepseek-r1:70b model
for Intel(R) Core(TM) Ultra 9 185H Intel(R) Core(TM) Ultra 9 185H To Be Filled By O.E.M. CPU @ 4.4GHz and or Meteor Lake-P [Intel Arc Graphics]
using performance for cpu governor.
Setting cpu governor to
powersave
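(The governor switching reported at the start and end of each batch is typically done through the cpufreq sysfs interface or cpupower. A minimal sketch of such a toggle is shown below, assuming the standard sysfs path is available; it is illustrative and not necessarily the exact commands used by batch-obench.sh.)

  # Hypothetical governor toggle via the standard cpufreq sysfs interface.
  set_governor() {
      local gov="$1"
      echo "Setting cpu governor to"
      echo "$gov"
      for cpu in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
          echo "$gov" > "$cpu"
      done
  }
  set_governor performance   # before the benchmark runs
  set_governor powersave     # restore the default afterwards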