Compare commits

565 Commits

Author SHA1 Message Date
likelovewant
2dd3f3c67c Merge branch 'ollama:main' into main 2025-12-05 12:25:10 +08:00
Jesse Gross
9191dfaf05 llm: Enable flash attention for mistral3 by default 2025-12-04 15:19:06 -08:00
Jesse Gross
1108d8b34e ggml: Enable flash attention for vision encoders
Although the vision component of multimodal models typically already
calls the optimized nn.Attention, it is converted into non-fused
operations. That is because the backend-specific fused kernels may
have requirements, such as padding, and these are handled by the
cache, which vision encoders don't use.

This implements a fallback path in the backend, softening the
requirements into optimizations. In turn, this allows flash attention
to be used for vision encoders, saving a significant amount of VRAM
and improving performance.
2025-12-04 15:19:06 -08:00
Jesse Gross
7837a5bc7e ggml: Always set cache padding to 256
We currently use cache padding of 32 when not using flash attention
and 256 with flash attention, which is based on the historic alignment
requirements of these kernels. The restrictions have since been
loosened but there are still performance benefits, such as better
CUDA graph reuse.

Since the requirement is no longer kernel-specific, set the padding
uniformly to 256, as llama.cpp does.
2025-12-04 15:19:06 -08:00
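For illustration, a minimal Go sketch of the padding rule described above: rounding a cache length up to the next multiple of 256. The helper name is hypothetical; the real constant lives in Ollama's cache code.

```go
package main

import "fmt"

const cachePadding = 256 // uniform padding, matching llama.cpp

// pad rounds a cache length up to the next multiple of the padding.
func pad(length int) int {
	return ((length + cachePadding - 1) / cachePadding) * cachePadding
}

func main() {
	fmt.Println(pad(1000)) // 1024
	fmt.Println(pad(4096)) // 4096, already aligned
}
```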
Patrick Devine
0a844f8e96 convert: add deepseek converter (#12980)
This change adds the ability for `ollama create` to convert models that use
the DeepSeek2 architecture (specifically DeepSeekV3 and DeepSeek-R1).
2025-12-04 13:49:30 -08:00
Eloi Torrents
a03223b86f cmd/bench: support writing benchmark output to file (#13263)
* cmd/bench: support writing benchmark output to file

This allows the bench command to write benchmark
results to a user-specified output file instead of stdout when the
--output flag is provided.

---------

Co-authored-by: Patrick Devine <patrick@infrahq.com>
2025-12-04 13:22:41 -08:00
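A minimal sketch of the pattern behind the --output flag described above, selecting an io.Writer based on the flag. The flag name comes from the commit; the surrounding code is illustrative, not the actual bench implementation.

```go
package main

import (
	"flag"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	output := flag.String("output", "", "write benchmark results to this file instead of stdout")
	flag.Parse()

	// Default to stdout; swap in a file when --output is provided.
	var w io.Writer = os.Stdout
	if *output != "" {
		f, err := os.Create(*output)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		w = f
	}
	fmt.Fprintln(w, "model\tprefill_tok_s\teval_tok_s") // hypothetical result row
}
```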
Daniel Hiltgen
0cf7794b16 ggml update to b7108 (#12992)
* Revert "vulkan: temporary cary of vulkan fixes (#12971)"

This reverts commit 3a9e8e9fd4.

* ggml update to b7087

* fix argsort on metal

* update to b7108

* fix bakllava regression

This model lacks the metadata for the projector type.

* update to b7209

* fix TopK perf

* only build arm code on arm
2025-12-03 19:43:29 -08:00
Jeffrey Morgan
854d40edc5 ci: restore previous linter rules (#13322) 2025-12-03 18:55:02 -08:00
Bruce MacDonald
84a2cedf18 app: relay thinking false to server (#13319)
This fixes a bug where disabling thinking on deepseek-v3.1 did not stop the model from thinking.

When thinking is not defined it should not be sent to the server since this will cause error responses in some cases where the model does not support thinking. However if it is defined as false it should still be sent.
2025-12-03 15:06:55 -08:00
Daniel Hiltgen
3f30836734 CUDA: filter devices on secondary discovery (#13317)
We now do a deeper probe of CUDA devices to verify the library version has
the correct compute capability coverage for the device. Because ROCm also
interprets the CUDA env var to filter AMD devices, we try to avoid setting
it, since doing so leads to problems in mixed-vendor systems. However, without
setting it for this deeper probe, each CUDA library subprocess discovers all
CUDA GPUs, and on systems with lots of GPUs this can lead to hitting timeouts.
The fix is to turn on the CUDA visibility env var just for this deeper probe
use case.
2025-12-03 12:58:16 -08:00
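A sketch of the env-var scoping described in the commit above: the variable is set only in the probe subprocess's environment, never the parent's, so ROCm in the main process is unaffected. Package, function, and flag names are illustrative.

```go
// Illustrative only, not the actual discovery code.
package discover

import (
	"os"
	"os/exec"
)

// probeDevice runs a probe binary with CUDA_VISIBLE_DEVICES set only in
// the child's environment. The parent never sets the variable, so ROCm
// (which also honors it) keeps seeing AMD devices in mixed-vendor systems.
func probeDevice(binary, deviceID string) error {
	cmd := exec.Command(binary, "--probe") // hypothetical flag
	cmd.Env = append(os.Environ(), "CUDA_VISIBLE_DEVICES="+deviceID)
	return cmd.Run()
}
```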
Nathan Hook
cc9555aff0 Update user message format for temperature query (#13256) 2025-12-02 15:08:39 -08:00
hello_world
20aee96706 Add Vulkan GPU support instructions in development.md (#13265)
Added Vulkan SDK installation instructions and environment variable setup for building with Vulkan support.
2025-12-02 13:37:32 -08:00
Daniel Hiltgen
18b5958d46 test: avoid ministral tools test on low vram (#13302)
Avoid hitting test timeouts
2025-12-02 13:18:55 -08:00
Jesse Gross
5317202c38 llm: Don't always evict models on CPU-only systems
Model eviction happens when we have at least one other model
loaded and are unable to load all layers into VRAM. However, on
CPU-only systems we can never load layers into VRAM, so this
constantly triggered eviction.

Fixes #13227
2025-12-02 10:58:08 -08:00
Daniel Hiltgen
d771043e88 test: add ministral-3 (#13300) 2025-12-02 09:52:16 -08:00
Daniel Hiltgen
f8f1071818 CUDA: verify CC is supported by target library (#13298) 2025-12-02 09:28:41 -08:00
Patrick Devine
d3e0a0dee4 model: ministral w/ llama4 scaling (#13292)
This change:

* fixes rope scaling in the mistral converter
* updates ministral to include llama4 scaling
* includes a new ministral parser for parsing reasoning and tool calling

---------

Co-authored-by: jmorganca <jmorganca@gmail.com>
2025-12-01 23:20:14 -08:00
Daniel Hiltgen
554172759c win: warn if ggml-base detected in PATH (#13289)
If the user has somehow installed another GGML-based app which places a
ggml-base lib somewhere in their PATH, we can experience runtime problems
due to incompatibilities. This change adds a warning message if we detect
a ggml-base outside of our install location to aid in troubleshooting.
2025-12-01 15:36:47 -08:00
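A rough sketch of such a PATH scan, assuming a simple glob per PATH entry; the matching and warning format here are illustrative, not the actual implementation.

```go
// Illustrative sketch: scan PATH for a foreign ggml-base library.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	installDir := filepath.Dir(os.Args[0]) // where our bundled libs live
	for _, dir := range filepath.SplitList(os.Getenv("PATH")) {
		matches, _ := filepath.Glob(filepath.Join(dir, "ggml-base*"))
		for _, m := range matches {
			// Anything outside our own install location may be incompatible.
			if !strings.EqualFold(filepath.Dir(m), installDir) {
				fmt.Printf("warning: found ggml-base at %s outside the install location\n", m)
			}
		}
	}
}
```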
Bruce MacDonald
5b6a8e6001 api/client: handle non-json streaming errors (#13007)
While processing the response stream during a chat or generation, if an error occurs it is parsed and returned to the user. The issue with the existing code is that it assumed the response would be valid JSON, which is not a safe assumption and caused cryptic error messages to be displayed due to parsing failures:
`invalid character 'i' looking for beginning of value`

This change updates the stream function to return the raw error string if it can't be parsed as JSON. This should help with debugging issues by making sure the actual error reaches the user.
2025-12-01 15:10:16 -08:00
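A minimal sketch of the fallback described above, assuming an error payload of the form {"error": "..."}; type and function names are illustrative.

```go
// Illustrative sketch of parsing a streaming error payload.
package api

import "encoding/json"

type statusError struct {
	Error string `json:"error"`
}

// parseStreamError returns the structured message when the body is JSON,
// and the raw text otherwise, so a plain-text error (for example from a
// proxy) reaches the user instead of a cryptic parse failure.
func parseStreamError(body []byte) string {
	var se statusError
	if err := json.Unmarshal(body, &se); err == nil && se.Error != "" {
		return se.Error
	}
	return string(body)
}
```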
Daniel Hiltgen
467bbc0dd5 jetpack: require exact match or skip cuda_jetpack* (#13288)
The cuda_jetpack libs will enumerate discrete GPUs on SBSA systems,
which leads to runtime failures from missing kernels. This fix
requires an exact match to enable jetpacks instead of relying on
enumeration to filter out supported libraries.
2025-12-01 12:48:16 -08:00
Jeffrey Morgan
6d9f9323c5 .gitattributes: add app/webview to linguist-vendored (#13274) 2025-11-29 23:46:10 -05:00
Ondrej Kokes
0c2489605d docs: fix output formatting in faq.mdx (#13231)
There were a few Markdown typos in one FAQ answer. It now renders as a proper ASCII table.
2025-11-28 19:19:21 -05:00
EntropyYue
8b1b89a984 docs: remove deprecated parameters (#13237) 2025-11-26 11:03:09 +09:00
likelovewant
58a46a6e73 Merge branch 'ollama:main' into main 2025-11-22 17:42:25 +08:00
Eva H
47e272c35a app/cmd: update ollama help to navigate to ollama doc instead of github page (#13174) 2025-11-20 16:30:35 -05:00
Jeffrey Morgan
417a81fda3 app: open app instead of always navigating to / on connect (#13164) 2025-11-20 12:59:17 -08:00
Daniel Hiltgen
dba62ff3a5 discovery: fix cuda overlap case (#13176)
Recent refactoring introduced a regression in filtering CUDA overlap to favor the newest supported version.
2025-11-20 12:15:37 -08:00
Grace
d70e935526 Parser for Cogito v2 (#13145) 2025-11-19 17:21:07 -08:00
Michael Yang
5c1063df7f deepseek2: upgrade to run v3+ models (#13166)
the check for MLA omits v3 and r1, which should not return unsupported.
instead, check the tokenizer for compatibility
2025-11-19 17:05:39 -08:00
Jesse Gross
cb485b2019 kvcache: Run tests both with and without PermutedV
The causal cache can store data differently depending on what is
best for the backend. We should run tests both ways.
2025-11-19 16:45:30 -08:00
nicole pardal
b2af50960f nomic-embed: nomic-embed-text defaulted to ollama runner (#13144) 2025-11-19 13:03:44 -08:00
Michael Yang
eac5b8bfbd chore: mark vulkan shaders as vendored files 2025-11-19 12:01:23 -08:00
Patrick Devine
604e43b28d models: enable deepseek2 (deepseek v3.1 w/ MLA) on the new engine (#13151) 2025-11-18 22:03:50 -08:00
Jesse Gross
53985b3c4d kvcache: Use SetRows to store cache data
We currently copy data into the KV cache in contiguous buffers using
ggml_cpy(). ggml_set_rows() was introduced to allow scatter operations
so that contiguous buffers are no longer required. The primary direct
benefit of this is that we no longer need to perform defragmentation.

However, GGML recently removed an optimization for ggml_cpy() and
we picked it up in 544b673 "ggml update to b6840 (#12791)". This
caused a roughly 40% drop in token generation performance on CUDA
due to CUDA graphs no longer being used. By switching to
ggml_set_rows(), the original optimization is no longer necessary
and CUDA performance is restored.

Fixes #13112
2025-11-18 20:42:28 -08:00
Jesse Gross
b6e02cbbd2 ggml: Automatically make tensors contiguous on reshape
GGML requires tensors to be contiguous for reshape, and if
this is not the case, it will fail an assertion. Making a tensor
contiguous is an expensive operation, so it's best to do it lazily
when it is actually required rather than ahead of time when it may
not be needed.
2025-11-18 20:42:28 -08:00
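An illustrative sketch of the lazy-contiguous pattern, with hypothetical minimal Tensor/Context interfaces; this is not the exact ml package API.

```go
// Hypothetical minimal interfaces, for illustration only.
package ml

type Context interface{}

type Tensor interface {
	IsContiguous() bool
	Contiguous(ctx Context) Tensor
	Reshape(ctx Context, shape ...int) Tensor
}

// reshape pays the cost of making the tensor contiguous only when the
// reshape actually requires it, instead of eagerly up front.
func reshape(ctx Context, t Tensor, shape ...int) Tensor {
	if !t.IsContiguous() {
		t = t.Contiguous(ctx)
	}
	return t.Reshape(ctx, shape...)
}
```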
Grace
91935631ac Renderer for Cogito v2 (#13139) 2025-11-18 19:06:34 -08:00
nicole pardal
8de30b568a nomic-embed-text model implementation (#13071) 2025-11-18 18:28:10 -08:00
Daniel Hiltgen
485da9fd35 win: exit instead of abort (#13138)
Calling abort on Windows triggers the C++ runtime to attempt a debugger
attach, which causes the crashed runners to hang instead of exiting, leading
to a timeout instead of a fast failure during discovery.
2025-11-18 16:33:33 -08:00
Michael Yang
0796d79d19 cuda: skip large batches
cuda panics on batches larger than 1024, so skip those and fall back to
cpu
2025-11-18 16:11:37 -08:00
Michael Yang
92981ae3f2 deepseekocr 2025-11-18 16:11:37 -08:00
Lhiam Andrei Lingco
8ed1adf3db docs: fix typo in vscode.mdx (#13116) 2025-11-18 13:18:42 -08:00
Michael Yang
440a3823a6 fix(tokenizer): add special tokens to empty inputs (#13091) 2025-11-18 11:16:56 -08:00
Michael Yang
718961de68 migrate to golangci-lint v2 (#13109)
* migrate to golangci-lint v2
* copyloopvar
2025-11-18 11:00:26 -08:00
SamareshSingh
330f62a7fa docs: add Void Editor to community integrations (#13124)
Void is an open source AI code editor and Cursor alternative that supports
Ollama. It's built on VS Code and allows users to connect directly to Ollama
for private LLM usage without going through a middleman backend.

Key features:
- Open source Cursor alternative
- Direct Ollama integration
- VS Code fork with full compatibility
- Agent mode and MCP support
- Works with any open source model

Fixes #12919

Signed-off-by: Samaresh Kumar Singh <ssam3003@gmail.com>
2025-11-17 19:20:36 -08:00
Grace
584e2d646f Add deepseek v3.1 (#13063)
* Add mla for flash attention
* Revert to using chunks
2025-11-17 18:03:21 -08:00
Eva H
1fd4cb87b2 app/cmd: restrict ollama:// URL scheme to supported paths (#13120) 2025-11-17 20:10:45 -05:00
Cerussite
4aba2e8b72 discover: Support cgroups cores and memory limitations (#10292)
* Add support for cgroups cores and memory limitations

* fix compile error and add logs

* remove cpu info log
2025-11-17 16:13:03 -08:00
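For reference, a small sketch of reading a cgroup v2 CPU limit on Linux. /sys/fs/cgroup/cpu.max is the standard "quota period" file (memory.max works similarly); the parsing here is simplified for illustration.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// cpuLimit returns the effective CPU count from cpu.max ("quota period"),
// or 0 if unlimited or unreadable.
func cpuLimit() float64 {
	data, err := os.ReadFile("/sys/fs/cgroup/cpu.max")
	if err != nil {
		return 0
	}
	fields := strings.Fields(string(data))
	if len(fields) != 2 || fields[0] == "max" {
		return 0 // "max" means no quota
	}
	quota, _ := strconv.ParseFloat(fields[0], 64)
	period, _ := strconv.ParseFloat(fields[1], 64)
	if period == 0 {
		return 0
	}
	return quota / period
}

func main() {
	if n := cpuLimit(); n > 0 {
		fmt.Printf("cgroup limits this process to %.2f CPUs\n", n)
	}
}
```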
Daniel Hiltgen
2f36d769aa bring back sysfs based VRAM information for AMD (#12871)
* build: optimize dockerfile context for iterating

This moves the copy of the source into the layer AFTER
doing software installs so we don't have to go through
the RPM install for cuda, etc. every time you touch a
source file.

* amd: implement linux sysfs based VRAM lookup

This adds a C++ implementation of sysfs DRM VRAM discovery
for more accurate free VRAM data on linux for AMD GPUs.
2025-11-17 15:40:58 -08:00
Daniel Hiltgen
399eacf486 ci: fix missing vulkan binaries in linux bundles (#13123) 2025-11-17 15:39:59 -08:00
Eva H
231cc878cb app/ui: fix to point ollama client to ui backend in dev mode (#13079) 2025-11-17 12:58:35 -05:00
Jeffrey Morgan
aa676b313f docs: link to ollama.com instead of hardcoding list of cloud models (#13110) 2025-11-16 20:56:09 -08:00
omahs
dd0ed0ef17 docs: fix typos in repository documentation (#10683) 2025-11-15 20:22:29 -08:00
Joel Bryan Juliano
d5649821ae readme: add Kdeps to community integrations (#11877)
Kdeps is an AI framework for building Dockerized full-stack AI
applications declaratively and uses Ollama LLM models on the
backend
2025-11-15 19:19:03 -08:00
pierwill
4cea757e70 server: clean up manifest documentation (#12995)
Co-authored-by: pierwill <pierwill@users.noreply.github.com>
2025-11-15 19:13:15 -08:00
Vignesh Skanda
a751bc159c llama: test case typo and readability improvements (#13078) 2025-11-15 18:54:27 -08:00
Laurențiu Nicola
5d31242fbf discover: fix typos in runner.go (#13096) 2025-11-15 18:52:54 -08:00
Patrick Devine
d7fd72193f tests: basic benchmarking test framework (#12964)
This change adds a basic benchmarking test framework for Ollama which can
be used to determine the prefill, eval, load duration, and total duration
for running a given model or models.
2025-11-15 18:17:40 -08:00
Daniel Hiltgen
72ff5b9d8c log: warn if user overrides detected (#13088)
Many recent GPU discovery failures can be traced to incorrect override settings.
This extra logging should help quickly spot these and guide users to try unsetting them first.
2025-11-14 14:36:28 -08:00
Parth Sareen
ce29f695b4 docs: add logprobs to openapi (#13090) 2025-11-14 14:14:58 -08:00
likelovewant
1ce7433505 Merge branch 'ollama:main' into main 2025-11-14 16:46:05 +08:00
Michael Yang
12b174b10e fix tensor merge (#13053) 2025-11-13 15:32:34 -08:00
Michael Yang
333203d871 chore: update models to use slice/chunk/chunksections (#12934)
* use slice/chunks

* bert

* llama4

* gemma3n

* gptoss

* mistral3

* qwen3vl

* qwen25vl

* deepseek2

* remove unused ops
2025-11-13 15:20:12 -08:00
Parth Sareen
c114987523 logprob: add bytes to logprobs (#13068) 2025-11-13 13:49:25 -08:00
Michael Yang
b48083f33f ml: add slice operation (#12870)
* slice

* chunk, chunksections
2025-11-13 13:28:21 -08:00
nicole pardal
482bec824f embeddings: added cli command to embedding docs (#12993) 2025-11-13 13:24:13 -08:00
Kowyo
684a9a8c5a docs: fix typo (VSCode -> VS Code) (#13072) 2025-11-12 20:49:33 -08:00
Jeffrey Morgan
54a76d3773 app: remove source code for previous JavaScript-based macOS app (#13067)
The code in this directory has been replaced with the
new Go version in the 'app' directory.
2025-11-12 20:37:43 -08:00
Radhi
8a75d8b015 readme: add AI UI to community integrations (#13035) 2025-11-12 17:08:50 -08:00
Jeffrey Morgan
f206357412 readme: fix incorrect header in community integrations (#13065) 2025-11-12 17:00:16 -08:00
Daniel Hiltgen
8224cd9063 ci: fix win vulkan (#13062) 2025-11-12 10:32:24 -08:00
Daniel Hiltgen
6286d9a3a5 Enable Vulkan with a temporary opt-in setting (#12931)
* docs: vulkan information

* Revert "CI: Set up temporary opt-out Vulkan support (#12614)"

This reverts commit 8b6e5baee7.

* vulkan: temporary opt-in for Vulkan support

Revert this once we're ready to enable by default.

* win: add vulkan CI build
2025-11-12 08:40:38 -08:00
Daniel Hiltgen
3a9e8e9fd4 vulkan: temporary carry of vulkan fixes (#12971)
This should be reverted once we update ggml past b6897
2025-11-12 08:31:40 -08:00
Jeffrey Morgan
cb1cb06478 docs: rename api-reference.md back to api.md since redirect stopped working (#13056) 2025-11-11 15:53:06 -08:00
Jeffrey Morgan
2d5e066c8c docs: fix openapi.yaml warnings, rename api.md to api-reference.md (#12904) 2025-11-11 15:39:35 -08:00
Bruce MacDonald
15968714bd docs/openapi: document that delete and copy responses are empty (#13055)
Some route endpoints return an empty response with a 200 OK. These should be documented in the OpenAPI doc. Note that the previous deletion response was not correct.
2025-11-11 15:07:21 -08:00
Jesse Gross
8bf38552de llm: Prefer dedicated GPUs over iGPUs when allocating memory
We currently assign model layers to GPUs according to free VRAM,
which assumes that GPU performance is roughly equal. This does not
work well for mixed dGPU and iGPU systems because iGPUs typically
use system memory, which is large but slow.
This instead assigns layers to dGPUs first and then iGPUs.

In the future, this could be generalized to a more fine-grained
notion of GPU performance, but the dGPU vs. iGPU difference is the
most extreme.
2025-11-11 13:11:08 -08:00
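A sketch of the ordering policy described above, assuming a DeviceInfo type with an Integrated flag (illustrative, not the actual scheduler types): dGPUs sort first, and free VRAM only breaks ties within a class.

```go
package main

import (
	"fmt"
	"sort"
)

type DeviceInfo struct {
	ID         string
	Integrated bool
	FreeMemory uint64
}

// orderForOffload puts discrete GPUs before integrated ones; within each
// class, devices with more free VRAM come first.
func orderForOffload(devs []DeviceInfo) {
	sort.SliceStable(devs, func(i, j int) bool {
		if devs[i].Integrated != devs[j].Integrated {
			return !devs[i].Integrated
		}
		return devs[i].FreeMemory > devs[j].FreeMemory
	})
}

func main() {
	devs := []DeviceInfo{
		{ID: "igpu0", Integrated: true, FreeMemory: 32 << 30},
		{ID: "dgpu0", Integrated: false, FreeMemory: 8 << 30},
	}
	orderForOffload(devs)
	fmt.Println(devs[0].ID) // dgpu0, despite having less free memory
}
```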
Jesse Gross
b13fbad0fe llm: Separate llamaServer and ollamaServer code paths
Originally, llamaServer represented old memory estimates, which
could be used with either the old or new engine. ollamaServer was
used only for the new estimates and new engine. Since these
implementations did not map directly to engine, there was engine-
specific code in common code paths.

Now that new estimates are always used for the new engine, there is
a direct mapping between server type and engine. This separates out
most of the engine-specific code into the correct implementation
to make things easier to understand.
2025-11-11 13:11:08 -08:00
Jesse Gross
f560bd077f llm: Use Ollama engine memory layouts for both old and new engines
Currently for both the old and new engines, there is code to
calculate how much memory is required for a model and lay out
the layers onto GPUs. This reuses the new engine's layout code
for the old engine as well, bringing them closer together. The
old engine continues to use its current method of estimating
required memory.

This reduces maintenance effort and improves consistency, as new
features only need to be implemented in one place. The newer code
is also more accurate, especially with multiple GPUs.
2025-11-11 13:11:08 -08:00
Jesse Gross
4372d0bfef llamarunner: Respect device ordering for offloaded layers
We used to control the way that llama.cpp saw devices using
CUDA_VISIBLE_DEVICES or similar. This would ensure that the layers
offloaded to a device were actually the ones intended. This is
particularly important because we might reorder devices based on
free memory or performance.

When we started explicitly scheduling layers, this logic went
away but the llamarunner didn't have any way to set the correct
order of devices. This meant that the correct number of layers
would be assigned to a device but not necessarily the layers
that were expected. This change sets up the devices correctly
based on the offload information.
2025-11-11 13:11:08 -08:00
Eva H
31361c4d3c app/ui: do not send thinking to prevent errors with cloud provider 2025-11-11 16:09:24 -05:00
Baptiste Jamin
59241c5bee server: add logprobs and top_logprobs support to Ollama's API (#12899)
Adds logprobs support to Ollama's API, including support for Ollama's
OpenAI-compatible API. By specifying the new 'logprobs' boolean parameter
in the API, Ollama will return the log probabilities for each token generated.
'top_logprobs', an integer value of up to 20, can also be specified; when it
is, the API will also return that number of the most likely tokens at each
token position.

Co-authored-by: Baptiste Jamin <baptiste@crisp.chat>
2025-11-11 08:49:50 -08:00
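A minimal sketch of requesting log probabilities over the REST API, using the parameter names from the commit ('logprobs' and 'top_logprobs'); the model name and the response handling are only examples.

```go
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":        "llama3.2", // example model
		"prompt":       "Why is the sky blue?",
		"stream":       false,
		"logprobs":     true, // return log probabilities per generated token
		"top_logprobs": 5,    // also return the 5 most likely alternatives
	})
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // dump the raw response for inspection
}
```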
Eva Ho
2a9b61f099 address comment 2025-11-11 08:58:55 -05:00
Sheikh
6df4208836 docs: fix metal gpu section header (#13045) 2025-11-10 21:51:22 -08:00
Eva Ho
9d615cdaa0 fix test 2025-11-10 20:13:50 -05:00
Eva Ho
6a818b8a09 clean up 2025-11-10 19:08:42 -05:00
Eva Ho
2aaf29acb5 app/ui: do not send thinking to prevent errors with cloud provider 2025-11-10 19:05:00 -05:00
Eva H
a42f826acb app/ui: using streamdown AI elements for markdown rendering 2025-11-10 12:05:59 -05:00
Bruce MacDonald
e10a3533a5 app/docs: remove out of date storybook instructions (#13006) 2025-11-08 13:28:18 -08:00
likelovewant
5580fe292c remove unnecessary files 2025-11-08 17:41:16 +08:00
likelovewant
bbadbed1e8 fix merge conflict 2025-11-08 17:36:08 +08:00
Patrick Devine
91ec3ddbeb bugfix: don't include both consolidated.safetensors and model-*.safetensors (#13010) 2025-11-07 22:41:57 -08:00
Parth Sareen
755ac3b069 docs: update n8n URL for Ollama (#12994) 2025-11-07 20:07:26 -08:00
Daniel Hiltgen
60b8973559 doc: re-add login autostart faq and GPU updates (#12975)
* doc: re-add login autostart faq

This appears to have been accidentally dropped during the doc migration.

* docs: GPU updates lost on the doc update

* review comments: improve windows login disable instructions
2025-11-07 11:21:44 -08:00
Tomoya Fujita
d2ef679d42 docs: fix 404 link to modelfile documentation (#12996) 2025-11-07 10:06:46 -08:00
Thomas Stocker
d4e0da0890 Remove unnecessary macOS 13 and lower patches (#12656)
* Remove unnecessary macOS 13 patch

* Remove unnecessary macOS Version Guard patch

* rename patches

* remove again macOS 13 patch

* rename files
2025-11-06 15:52:56 -08:00
Jeffrey Morgan
565b802a6b openai: fix tool call ID mapping (#12988) 2025-11-06 15:26:25 -08:00
Saifeddine ALOUI
6c79e6c09a readme: add security tools section and Ollama fortress to community integrations (#12981) 2025-11-06 15:21:13 -08:00
breatn
780762f9d2 server: fix duplicate 'is' typo in comment (#12985) 2025-11-06 14:44:44 -08:00
Jeffrey Morgan
30fcc71983 api: add omitempty to required tool function parameter type (#12989) 2025-11-06 14:08:55 -08:00
Eva Ho
3501a4bdf9 address comment 2025-11-06 16:49:22 -05:00
Eva H
73a0cafc1e Merge pull request #12973 from macarronesc/main
feat: add support for WebP images in Ollama's app
2025-11-06 16:31:46 -05:00
Eva Ho
e309c80474 address comments 2025-11-06 13:49:59 -05:00
Daniel Hiltgen
544b6739dd ggml update to b6840 (#12791) 2025-11-06 10:19:22 -08:00
Daniel Alejandro Coll Tejeda
a4a53692f8 refactor: remove GIF support from image validation tests and logging 2025-11-06 09:09:51 +00:00
7394112478
c4ba257c64 readme: remove 404 link (#11351) 2025-11-05 23:36:59 -08:00
mags0ft
342e58ce4f readme: add hle-eval-ollama to list of terminal community integrations (#11371) 2025-11-05 23:04:30 -08:00
Saifeddine ALOUI
47b2585cfd readme: add lollms and lollms WebUI to community integrations (#11981) 2025-11-05 22:48:43 -08:00
Vincent Koc
4111db013f app: fix macOS file picker to support Uniform Type Identifiers (#12965) 2025-11-05 21:37:17 -08:00
Eva Ho
536c987c39 address comment 2025-11-05 20:19:34 -05:00
Eva Ho
a534d4e9e1 fixing thinking not scrolling issue 2025-11-05 16:06:55 -05:00
Eva Ho
74586aa9df address comments 2025-11-05 16:06:55 -05:00
Eva Ho
8c74f5ddfd ui: using streamdown AI elements for markdown rendering 2025-11-05 16:06:55 -05:00
Daniel Hiltgen
80d34260ea ci: re-enable signing (#12974) 2025-11-05 12:33:01 -08:00
Daniel Alejandro Coll Tejeda
bddfa2100f feat: add support for WebP images in Ollama's app 2025-11-05 21:23:20 +01:00
nicole pardal
1ca608bcd1 embeddings: added embedding command for cli (#12795)
Co-authored-by: A-Akhil <akhilrahul70@gmail.com>

This PR introduces a new ollama embed command that allows users to generate embeddings directly from the command line.

Added ollama embed MODEL [TEXT...] command for generating text embeddings
Supports both direct text arguments and stdin piping for scripted workflows

Outputs embeddings as JSON arrays (one per line)
2025-11-05 11:58:03 -08:00
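For reference, the equivalent REST call behind such a CLI command posts to the existing /api/embed endpoint. A minimal sketch (model name is an example):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model": "nomic-embed-text", // example embedding model
		"input": "The sky is blue because of Rayleigh scattering",
	})
	resp, err := http.Post("http://localhost:11434/api/embed", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out struct {
		Embeddings [][]float32 `json:"embeddings"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	if len(out.Embeddings) > 0 {
		fmt.Println("dimension:", len(out.Embeddings[0]))
	}
}
```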
Daniel Hiltgen
6aa7283076 mac: fix stale VRAM data (#12972)
The scheduler updates free VRAM based on currently loaded models. This was
mutating the persisted list of GPUs, and when coupled with the non-refreshing
logic for Metal, that led to stale low VRAM reporting after unload. The fix is
to make sure the GPU discovery always returns a copy so the scheduler's GPU list
is in fact ephemeral and doesn't leak any temporary adjustments back into the
persistent list.
2025-11-05 11:55:17 -08:00
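A minimal sketch of the ephemeral-copy fix described above, with illustrative types: discovery hands out a fresh slice so scheduler-side adjustments cannot mutate the persisted list.

```go
// Illustrative types only, not the actual discovery package.
package discover

type GpuInfo struct {
	ID         string
	FreeMemory uint64
}

var devices []GpuInfo // persistent list, refreshed by discovery

// GetGPUInfo returns a copy so callers can adjust FreeMemory freely
// without leaking changes back into the persistent list.
func GetGPUInfo() []GpuInfo {
	out := make([]GpuInfo, len(devices))
	copy(out, devices)
	return out
}
```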
Patrick Devine
f89fc1cadd bugfix: show connection string for interactive cli usage (#12930) 2025-11-05 11:55:04 -08:00
Daniel Hiltgen
97e05d2a6b win: revert CPU discovery logic to 0.12.3 (#12969)
The behavior change in 0.12.4 is most likely the root cause of hangs some
users are seeing. This reverts to the 0.12.3 code, with some added trace
logging.
2025-11-05 10:32:38 -08:00
Youdon
8bbc7395db readme: Add handy-ollama to community integrations (#8601) 2025-11-05 09:56:14 -08:00
Daniel Hiltgen
408c2f99d0 log: trace logging for scheduler (#12961) 2025-11-05 08:12:15 -08:00
Grace
809b9c68fa Add Tool Call ID (#12956)
* routes/types: add tool call id

---------

Co-authored-by: ParthSareen <parth.sareen@ollama.com>
2025-11-04 16:43:33 -08:00
Daniel Hiltgen
ba8c035846 log: instrument CPU discovery timing (#12960) 2025-11-04 16:23:37 -08:00
Daniel Hiltgen
27f1fde413 discovery: only retry AMD GPUs (#12894)
* discovery: only retry AMD GPUs

CUDA and Vulkan don't crash on unsupported devices, so retry isn't necessary.
This also refactors the code to shift the Library specific logic into the ml
package.

* review comments
2025-11-04 15:33:46 -08:00
virajwad
220e133fca vulkan: Add memory detection for Intel GPU using DXGI+PDH (#12664)
* PDH free memory skeleton

* Add PDH printing

* Add LUID support for Vulkan

* wire luid from ggml-vulkan to mem-dxgi-pdh file

* Fix to ggml-impl

* Continue skeleton

* Implemented ggml_dxgi_pdh_get_device_memory

* fix comments

* Fix - change value GB to bytes

* add ifdefs to only support windows and not linux

* modify error codes

* Finished ggml_dxgi_pdh_init() function

* completed ggml_dxgi_pdh_release()

* Formatting changes, add static to functions

* fix build errors

* fix go build error

* fix luid - now should match between dxgi and vulkan

* Fix the free memory reporting (was using copy by value, change to reference)

* keep only dxgi1_2.h

* Modifications based on PR feedback

* fix merge conflicts (2) and fix desc1.description printout

* move dxgi + pdh api calls to before the vendor specific library calls

* change from 3 samples to 1 sample for PDH

* modify when old_mode is set

* add fix for building macOS

* fix release and returns for other vendors

* add patch file
2025-11-04 14:11:55 -08:00
Daniel Hiltgen
d3b4b9970a app: add code for macOS and Windows apps under 'app' (#12933)
* app: add code for macOS and Windows apps under 'app'

* app: add readme

* app: windows and linux only for now

* ci: fix ui CI validation

---------

Co-authored-by: jmorganca <jmorganca@gmail.com>
2025-11-04 11:40:17 -08:00
Daniel Hiltgen
a4770107a6 vulkan: enable flash attention (#12937)
Also adjusts the vulkan windows build pattern to match recent changes in other backends
so incremental builds are faster.
2025-11-04 10:31:22 -08:00
Jesse Gross
ef549d513c ggml: Increase maximum graph size
The initial implementation of qwen3-vl:235b exceeded the maximum graph
size based on the number of tensors. Although this was later fixed
through the use of the mrope operation, we are close to the limit in
some cases. This updates to track the current llama.cpp usage of GGML.
2025-11-03 16:05:37 -08:00
Rajath Bail
d2158ca6f4 readme: add Hillnote to community integrations (#12929) 2025-11-03 12:55:04 -08:00
Michael Yang
ce3eb0a315 chore(gptoss): cleanup dead code (#12932) 2025-11-03 11:27:15 -08:00
Ryan Coleman
60829f7ec6 readme: add Strands Agents to community integrations (#11740) 2025-11-02 16:01:28 -08:00
Attogram Project
9a50fd584c readme: add Ollama Bash Lib to community integrations (#12235) 2025-11-02 15:44:56 -08:00
Jesse Gross
392a270261 ggml: Avoid cudaMemsetAsync during memory fitting
We pass invalid pointers when we check the size of the required
compute graph before fitting. Some CUDA APIs validate these pointers,
but we can just skip them during this phase. cudaMemsetAsync is one
of these that we weren't skipping, but we never took the code path that
used it. Now that we have enabled op_offload, we can hit it in
memory-pressured situations.
2025-10-31 15:23:28 -07:00
Daniel Hiltgen
3bee3af6ed cpu: always ensure LibOllamaPath included (#12890)
In CPU-only setups the LibOllamaPath was omitted, causing
us not to load the ggml-cpu-XXX libraries during inference.
2025-10-31 14:37:29 -07:00
Daniel Hiltgen
83537993d7 logs: catch rocm errors (#12888)
This will help bubble up more crash errors
2025-10-31 09:54:25 -07:00
likelovewant
0f03e90f89 fix CMakePresets.json 2025-10-31 15:46:38 +08:00
likelovewant
58b175b6b1 fix CMakePresets.json 2025-10-31 15:45:07 +08:00
likelovewant
c9f83eaa98 Merge branch 'ollama:main' into main 2025-10-31 14:12:02 +08:00
nicole pardal
7dd4862a89 embeddings: removed redundant TestAPIEmbeddings test (#12863)
This PR removes a redundant test from TestAPIEmbeddings.
The contents of this test already exist in embed_test.go and model_arch_test.go.
Daniel Hiltgen
db973c8fc2 win: avoid ID mixups on refresh (#12869)
On Windows AMD IDs are numeric, and can reorder based on the filter environment.
By passing in the filter env on a full discovery refresh, we'll only look at the actual devices
and ignore unsupported iGPUs.  Without this, on some systems iGPU VRAM was incorrectly
being used to populate the dGPU.
2025-10-30 15:12:14 -07:00
Jesse Gross
afaf7ce8c3 ggml: Enable op_offload to improve partial offload performance
When a model is partially offloaded to system RAM, we can either
do the calculations on the CPU or we can temporarily transfer the
data to the GPU to do the calculations there. Small batches tend
to be better on the CPU, large batches on the GPU.

The llamarunner used the GPU in most cases and the ollamarunner
used the CPU. Although the ollamarunner saw an improvement in
token generation performance, there was a large performance hit
in prompt processing (3-10x).

There is an existing heuristic to dynamically switch between these
two modes, but in practice it doesn't have enough information to
make that decision accurately. This adds authoritative data to make
the check work, giving the best of both worlds.

Fixes #12037
2025-10-30 13:53:10 -07:00
Jesse Gross
26465fb85f ollamarunner: Worst case batch for token generation
We currently allocate the worst case batch for max-sized
batches, which corresponds to prompt processing. However,
there are some cases where the generated graph is different
for small and large batches. To ensure that we don't need
to allocate memory later after layout has taken place, we
should run the worst case batch both ways and take the larger
amount of memory.

This does not noticeably affect loading speed as the most expensive
part of this logic is from image processing and that does not
occur during token generation.
2025-10-30 13:53:10 -07:00
Daniel Hiltgen
88236bc05f win: use copy for subprocess logs (#12864)
Windows gets confused when we try to hand the stderr file descriptor to the subprocess children. This ensures the log output
always shows up.
2025-10-30 13:22:00 -07:00
Patrick Devine
76eb7d0fff testing: test more models with tool calling (#12867) 2025-10-30 13:19:21 -07:00
Michael Yang
f67a6df110 interleaved mrope (#12807)
* ml(ggml): mrope
* interleave mrope
2025-10-30 11:29:00 -07:00
Michael Yang
75e75d9afe qwen3vl: enable flash attention by default (#12862) 2025-10-30 10:51:37 -07:00
Michael Yang
ed78e127d0 fix(cmd): unload model before removal (#12832)
this change fixes two bugs with `ollama rm`:

1. before a model is removed, it will first be stopped. this only
   happens for the first argument and skipped for all other models
2. models are unloaded indiscriminately. this errors for cloud models
   and should be omitted
2025-10-30 10:41:49 -07:00
Michael Yang
d432ade714 fix: qwen2.5vl, qwen3vl composite image (#12841)
this change fixes images with an alpha channel by overlaying the image
onto a white background
2025-10-30 10:33:19 -07:00
Michael Yang
06b3422d5f tests: add tests and docs for commonly used ops (#12844)
* mulmat
* permute
2025-10-30 10:32:45 -07:00
Athiban Sharon
cbe1cf06c4 Update README.md (#12822)
Fixed broken docs links
2025-10-30 13:14:39 -04:00
likelovewant
51e1480751 Merge branch 'ollama:main' into main 2025-10-30 10:15:05 +08:00
Grace
0a2d92081b Removing whitespace between Thinking and Content in Qwen3VL (#12838)
Eats extra whitespace at the end/beginning of content
2025-10-29 15:14:28 -07:00
Daniel Hiltgen
c88647104d int: harden server lifecycle (#12835)
this should reduce zombies during integration runs
2025-10-29 11:50:56 -07:00
Patrick Devine
05aff4a4f1 tests: fix embeddinggemma integration test (#12830) 2025-10-29 11:07:28 -07:00
Michael Yang
0d140bd1af fix: conv2d bias (#12834) 2025-10-29 11:03:43 -07:00
Jeffrey Morgan
93e45f0f0d docs: temporarily restore api.md and cleanup docs paths (#12818) 2025-10-28 23:25:48 -07:00
Jeffrey Morgan
a342160803 docs: fix root api documentation page (#12813) 2025-10-28 19:17:54 -07:00
Jeffrey Morgan
f6c29409dc docs: add new cloud model + fix openai redirect (#12812) 2025-10-28 19:09:07 -07:00
Michael Yang
7d25b9e194 feat(model): add qwen3vl (#12665) 2025-10-28 17:39:47 -07:00
Patrick Devine
36d64fb531 embed: add distance correlation test for library embed models (#12796) 2025-10-28 16:57:27 -07:00
Parth Sareen
d828517e78 docs: update readme and links (#12809) 2025-10-28 16:20:02 -07:00
Daniel Hiltgen
14977a9350 Fix vulkan PCI ID and ID handling (#12775)
* Fix vulkan PCI ID and ID handling

Intel GPUs may not report PCI IDs, which was leading to incorrect overlap
detection. Switch to using the existing PCI IDs; AMD GPUs claim not to
report PCI IDs, but actually do, so try anyway, as this is required for ADLX to
find the GPUs on Windows. Numeric IDs lead to scheduling problems, so this also
switches Vulkan to use UUID-based IDs. The GPU discovery patches have been
squashed into a single patch to simplify future rebases.

* review comments
2025-10-28 15:15:35 -07:00
Patrick Devine
29f63f37c8 Revert "server: Consolidate embedding truncation in runner (#12730)" (#12810)
This reverts commit 5d347f6d6f.
2025-10-28 14:49:14 -07:00
Parth Sareen
3d99d9779a docs: add docs for docs.ollama.com (#12805) 2025-10-28 13:18:48 -07:00
Parth Sareen
6d02a43a75 docs: rename to mdx to setup docs site (#12804) 2025-10-28 13:04:31 -07:00
Parth Sareen
5483497d7a Revert "docs: add reference to docs.ollama.com (#12800)" (#12803)
This reverts commit 934dd9e196.
2025-10-28 12:52:49 -07:00
Parth Sareen
934dd9e196 docs: add reference to docs.ollama.com (#12800) 2025-10-28 12:44:02 -07:00
Michael Yang
1188f408dd s/From*Slice/From*s/ (#12255) 2025-10-28 12:08:49 -07:00
nicole pardal
15c7d30d9a embedding tests: added check against exact base64 string (#12790) 2025-10-28 10:37:20 -07:00
Devon Rifkin
9862317174 Merge pull request #12793 from ollama/drifkin/12792_renderer-parser-from
create: inherit FROM model's renderer/parser
2025-10-28 00:15:46 -07:00
Michael Yang
ec9eb28f4c gemma3: make embedding non-causal (#12297) 2025-10-27 19:54:08 -07:00
Devon Rifkin
1bdd816910 create: inherit FROM model's renderer/parser
On main, the `RENDERER` and `PARSER` fields from the `Modelfile` don't
get propagated to a new model created with a `req.From` parameter. This
is easily triggered via `ollama run qwen3-coder`, then running some save
command like `/save qwen3-coder-custom`.

Added a regression test for this, and the fix now opens the config for the
"from" model in order to use its renderer/parser as a default for the
new model. This fixes the CLI and also API-based creates.

Fixes: https://github.com/ollama/ollama/issues/12792
2025-10-27 15:14:19 -07:00
nicole pardal
5d347f6d6f server: Consolidate embedding truncation in runner (#12730)
Currently, checking the length of prompts for embeddings to ensure
they fit in the context window (and possible truncation) occurs in
two places - the Ollama server and runner. This can lead to
inconsistencies in both the checks and reported number of tokens
processed. Since we have to do this processing in the runner, this
consolidates all of the logic there.
2025-10-27 11:59:12 -07:00
Patrick Devine
b97eb2b858 cloud: set the proxy content-type to the same as local models (#12759) 2025-10-25 10:57:10 -07:00
Jesse Gross
ad6f6a1d29 llm: Change memory allocation backoff from exponential to incremental
If we create a memory layout that should fit based on reported free VRAM
but allocation still fails, we start applying a backoff. This reduces
free VRAM by an exponential percentage (1%, 2%, 4%...). However, the
points chosen tend to be too dense at the beginning and too sparse at
the end. Therefore, this switches to an incremental backoff (10%, 20%,
30%...).
2025-10-23 12:58:31 -07:00
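A worked comparison of the two schedules, showing why the exponential points cluster early and spread late:

```go
package main

import "fmt"

func main() {
	for i := 0; i < 5; i++ {
		exponential := 1 << i      // 1, 2, 4, 8, 16 percent: dense early, sparse late
		incremental := 10 * (i + 1) // 10, 20, 30, 40, 50 percent: evenly spaced
		fmt.Printf("attempt %d: exponential %d%%, incremental %d%%\n", i+1, exponential, incremental)
	}
}
```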
Vinh Nguyen
6723a40be6 readme: add VT Code project to terminal community integrations (#12749) 2025-10-23 12:29:50 -07:00
Daniel Hiltgen
3258a89b6e DRY out the runner lifecycle code (#12540)
* DRY out the runner lifecycle code

Now that discovery uses the runners as well, this unifies the runner spawning code
into a single place.  This also unifies GPU discovery types with the newer ml.DeviceInfo

* win: make incremental builds better

Place build artifacts in discrete directories so incremental builds don't have to start fresh

* Adjust sort order to consider iGPUs

* handle cpu inference oom scenarios

* review comments
2025-10-23 11:20:02 -07:00
Jesse Gross
1c093e97af kvcache: Remove special case for reservation mask
We currently short circuit generation of the cache mask and just
generate an empty tensor of the correct size. However, in some
cases, this can also skip a cast operation. This can result in the
worst case graph being not fully worst case.

We don't actually need the fast path for mask generation, so it's
better to just use the normal code path.
2025-10-22 17:38:04 -07:00
Jesse Gross
a8d9c2648e llamarunner: Record the time for all batches during prompt processing
Currently, we only record the time for the last batch when processing
the prompt. This results in unrealistically high numbers for the
old llama runner.

Before:
total duration:       31.273112939s
load duration:        4.97054657s
prompt eval count:    32768 token(s)
prompt eval duration: 235.137439ms
prompt eval rate:     139356.80 tokens/s
eval count:           1873 token(s)
eval duration:        18.173182374s
eval rate:            103.06 tokens/s

After:
total duration:       30.024798033s
load duration:        4.758588663s
prompt eval count:    32768 token(s)
prompt eval duration: 7.779621548s
prompt eval rate:     4212.03 tokens/s
eval count:           1769 token(s)
eval duration:        17.148014223s
eval rate:            103.16 tokens/s
2025-10-22 13:52:58 -07:00
frob
0334e67ffd tools: parse tool calls that don't conform to {"name": name, "arguments": args} (#12738) 2025-10-22 11:34:27 -07:00
nicole pardal
e0ead1adee embeddings: base64 encoding fix (#12715) 2025-10-22 11:27:44 -07:00
Patrick Devine
d515aed6c3 cloud: don't error sending empty messages (#12724) 2025-10-21 18:12:14 -07:00
likelovewant
7f551c41e7 Merge branch 'ollama:main' into main 2025-10-21 19:38:31 +08:00
Jeffrey Morgan
5fe7ba1b9b runner: always truncate embeddings requests (#12714) 2025-10-20 16:47:05 -07:00
Michael Yang
d2b63c19b3 fs(ggml): fill in arch prefix if necessary (#12646) 2025-10-20 16:42:18 -07:00
Jeffrey Morgan
94f110b35a model/parsers: remove warning for missing <think> tag for qwen3-vl (#12713) 2025-10-20 16:03:43 -07:00
Daniel Hiltgen
5d22953ba7 cuda: get driver version after props (#12707)
Users on Windows without GPUs are reporting errors relating to
cudaDriverGetVersion with the device set to -1. This ensures we only query the
driver version once we're enumerating actual devices.
2025-10-20 10:57:27 -07:00
Daniel Hiltgen
d245dffed8 rocm: give it more time to bootstrap (#12681)
Some users are hitting timeouts. We'd like to make this faster, but for now make sure we don't time out too aggressively.
2025-10-20 09:43:05 -07:00
likelovewant
cb13784a11 merge update 2025-10-18 23:03:13 +08:00
Daniel Hiltgen
bc1a818fdc contiguous input per layer (#12686)
Co-authored-by: Michael Yang <git@mxy.ng>
2025-10-17 18:39:18 -07:00
Daniel Hiltgen
ba2253dc30 win: more verbose load failures (#12683)
When loading the dynamic libraries, if something goes wrong, report some
details. Unfortunately this won't explain which dependencies are missing,
but this breadcrumb in the logs should help us diagnose GPU discovery
failures.
2025-10-17 17:13:16 -07:00
Daniel Hiltgen
68e04c7ff8 test: harden scheduler tests (#12662)
* test: harden scheduler tests

This removes reschedDelay which was stale code, and adds
a new configurable timeout for the waitForVRAMRecovery so
tests can now set the timeout to be very short to avoid the
scheduler getting stuck and hitting a test timeout.

* test: tune tests for partial loads

Give stress tests more time when the model is split between CPU/GPU
2025-10-17 08:56:44 -07:00
Daniel Hiltgen
270679932f cuda: tidy up CC settings (#12668)
8.7 is Jetpack only, so no need on x86 builds
10.3 covers [G]B300
2025-10-16 16:39:30 -07:00
Jeffrey Morgan
65fb3ff49d renderers: add global flag for setting [img] tags (#12669)
Adds a temporary global flag to renderers that causes renderers to always
render images as [img]. In a follow up change, we will consider making this
the default, and this flag could eventually be removed
2025-10-16 16:37:32 -07:00
Grace
e2a0b24435 Grace/qwen3 thinking (#12647)
* changing initial status to take into consideration prefill

* Add separate strings for content and thinking builder

* thinking tests

* remove white space from string before closing think tag
2025-10-16 15:29:41 -07:00
Daniel Hiltgen
1813ff85a0 cuda: bring back CC 5.2 (#12666)
Forward compat on the newer driver doesn't seem to be working.
This should get 5.2 working on newer drivers again.
2025-10-16 13:07:41 -07:00
Daniel Hiltgen
b531777a66 test: add a few missing embedding models (#12661) 2025-10-16 09:36:25 -07:00
Daniel Hiltgen
fe3ec8dbf0 Revert "Workaround broken NVIDIA iGPU free VRAM data (#12490)" (#12642)
The workaround has been moved into the underlying C++ code.

This reverts commit e4340667e3.
2025-10-16 09:09:48 -07:00
Thomas Stocker
c744134287 vulkan: Get FilterID from Backend for Vulkan (#12655)
* vulkan: Get FilterID from Backend for Vulkan

* Fixing patch
2025-10-16 09:07:35 -07:00
weedge
4be41d2d45 readme: add achatbot-go to community integrations (#12629) 2025-10-15 21:54:15 -07:00
zhetaicheleba
de670570c9 fs/ggml: fix function name in comment (#12630) 2025-10-15 21:53:38 -07:00
Devon Rifkin
201d93716e Merge pull request #12651 from ollama/drifkin/oai-conversion
openai: make tool call conversion fns public
2025-10-15 21:10:30 -07:00
Devon Rifkin
160cecc8e2 openai: make tool call conversion fns public 2025-10-15 20:54:58 -07:00
Daniel Hiltgen
8b6e5baee7 CI: Set up temporary opt-out Vulkan support (#12614)
Initially, Vulkan support in Ollama will require building from source. Once it is
more thoroughly tested and we have fixed any critical bugs, we can
bundle Vulkan into the official binary releases.
2025-10-15 14:18:01 -07:00
Daniel Hiltgen
75d17fc6c2 perf: backport cuda iGPU sched spin (#12641) 2025-10-15 11:52:14 -07:00
Santosh Bhavani
8fafc8af77 ml/backend/ggml: NVML fallback for unified memory GPUs (#12619)
* Simplify NVML fallback for unified memory GPUs

Remove device-specific checks and environment variable dependency for
NVML_ERROR_NOT_SUPPORTED fallback. When NVML doesn't support memory
queries, unconditionally use /proc/meminfo instead of checking device
names or OLLAMA_UNIFIED_MEMORY environment variable.

This provides better memory reporting by using MemAvailable which
accounts for reclaimable memory, avoiding the underreporting issue
described in NVIDIA support article a_id/5728.

Tested on NVIDIA GB10 unified memory iGPU with consistent and accurate
memory reporting across multiple model load/unload cycles.

* Add NVML fallback patch for unified memory GPUs
2025-10-15 11:40:06 -07:00
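A small sketch of the /proc/meminfo fallback described above; MemAvailable is reported in kB, and the parsing here is simplified for illustration.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

// memAvailableBytes reads MemAvailable from /proc/meminfo, which accounts
// for reclaimable memory (unlike MemFree).
func memAvailableBytes() (uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) >= 2 && fields[0] == "MemAvailable:" {
			kb, err := strconv.ParseUint(fields[1], 10, 64)
			return kb * 1024, err // values in /proc/meminfo are in kB
		}
	}
	return 0, fmt.Errorf("MemAvailable not found")
}

func main() {
	b, err := memAvailableBytes()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("available: %.1f GiB\n", float64(b)/(1<<30))
}
```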
Jesse Gross
c3c85aa06c llm: Enable flash attention by default for gemma3 2025-10-15 10:42:12 -07:00
Jeffrey Morgan
0d713051a2 envconfig: default to port 443 when connecting to ollama.com (#12617) 2025-10-14 23:38:24 -07:00
Parth Sareen
c4c5a4a01e types: send index for tool calls (#12625) 2025-10-14 19:35:15 -07:00
Jesse Gross
3dcfd5f69e llm: Perform eviction when num_gpu is set with new estimates
Currently, if you set num_gpu then this forces the model to
load with that number of layers in the current configuration.
This is done regardless of any other information, which means
that no eviction is performed even if another model is loaded.

This behavior is different from the old estimates (and still
happens for models that run on the llama engine). In those
cases, models would be evicted if needed to load at the requested
number of layers. That behavior is more useful and less surprising,
so this changes the new estimates to match.

Fixes #12580
2025-10-14 17:46:36 -07:00
Devon Rifkin
53a969d509 Merge pull request #12621 from ollama/drifkin/any-of
qwen3-coder: support anyOf when parsing tool calls
2025-10-14 15:51:24 -07:00
Devon Rifkin
08fbb60bb2 qwen3-coder: support anyOf when parsing tool calls 2025-10-14 15:33:05 -07:00
Daniel Hiltgen
850da848c5 logs: fix bogus "0 MiB free" log line (#12590)
On the llama runner, after the recent GGML bump, a new log line reported an
incorrect 0 MiB free following our patch to remove memory from the props. This
adjusts the llama.cpp code to fetch the actual free memory of the active device.
2025-10-14 11:26:28 -07:00
Thomas Stocker
2aba569a2a Vulkan based on #9650 (#11835)
* implement the vulkan C backend

* add support in gpu.go

* add support in gen_linux.sh

* it builds

* fix segfault

* fix compilation

* fix free memory monitor

* fix total memory monitor

* update gpu.go

* fix build

* fix check_perfmon len

* remove cap_get_bound check

* fix vulkan handle releasing

* fix build on fedora 40

* fix vulkan on windows

* making amdgpu work on arm architecture with vulkan

* add x86_64 lines in VulkanGlobs and capLinuxGlobs

* add aarch64 lines in vulkanGlobs and capLinuxGlobs

* Fix variable name

* Add vulkan build patch from @jmorganca

* Sync vendored ggml to add Vulkan support

* Updated dockerfile

https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2660836871

Signed-off-by: Vadim Grinco <vadim@grinco.eu>

* Installing rocm library

Signed-off-by: Vadim Grinco <vadim@grinco.eu>

* This version works well

built based on this: https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2660836871

Signed-off-by: Vadim Grinco <vadim@grinco.eu>

* Applied 00-fix-vulkan-building.patch

Work done by McBane87 here: https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2660836871

Signed-off-by: Vadim Grinco <vadim@grinco.eu>

* Fixed the "detached head" issues

Signed-off-by: Vadim Grinco <vadim@grinco.eu>

* Merged in the right direction

Signed-off-by: Vadim Grinco <vadim@grinco.eu>

* Merging the latest stable (#2)

* Applied 00-fix-vulkan-building.patch

* Implemented vulkan backend based on the work done by whyvl, Dts0, McBane87 and others

Tested on AMD Ryzen 7 8845HS w/ Radeon 780M Graphics with ROCm disabled

```
[GIN-debug] POST   /v1/chat/completions      --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST   /v1/completions           --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST   /v1/embeddings            --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models                --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET    /v1/models/:model         --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
time=2025-03-11T13:00:40.793Z level=INFO source=gpu.go:199 msg="vulkan: load libvulkan and libcap ok"
time=2025-03-11T13:00:40.877Z level=INFO source=gpu.go:421 msg="error looking up vulkan GPU memory" error="device is a CPU"
time=2025-03-11T13:00:40.878Z level=WARN source=amd_linux.go:443 msg="amdgpu detected, but no compatible rocm library found.  Either install rocm v6, or follow manual install instructions at https://github.com/ollama/ollama/blob/main/docs/linux.md#manual-install"
time=2025-03-11T13:00:40.878Z level=WARN source=amd_linux.go:348 msg="unable to verify rocm library: no suitable rocm found, falling back to CPU"
time=2025-03-11T13:00:40.879Z level=INFO source=types.go:137 msg="inference compute" id=0 library=vulkan variant="" compute=1.3 driver=1.3 name="AMD Radeon Graphics (RADV GFX1103_R1)" total="15.6 GiB" available="15.6 GiB"
```

```
 # ollama run phi4:14b
>>> /set verbose
Set 'verbose' mode.
>>> how's it going?
Hello! I'm here to help you with any questions or tasks you have. How can I assist you today? 😊

total duration:       3.341959745s
load duration:        18.165612ms
prompt eval count:    15 token(s)
prompt eval duration: 475ms
prompt eval rate:     31.58 tokens/s
eval count:           26 token(s)
eval duration:        2.846s
eval rate:            9.14 tokens/s
>>>
```

* This is no longer needed

Signed-off-by: Vadim Grinco <vadim@grinco.eu>

* Fixes SIGSEGV: segmentation violation running gemma3 models on ollama 0.6.0 #21

Patch provided by McBane87 on https://github.com/whyvl/ollama-vulkan/issues/21

Signed-off-by: Vadim Grinco <vadim@grinco.eu>

* Applied 04-disable-mmap-vulkan.patch

From: https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2660836871

Signed-off-by: Vadim Grinco <vadim@grinco.eu>

* Pulled new upstream code for ggml-vulkan backend

Signed-off-by: Vadim Grinco <vadim@grinco.eu>

* Merged latest ollama 0.6.2 and nasrally's Flash Attention patches (#5)

* readme: add Ellama to list of community integrations (#9800)

* readme: add screenpipe to community integrations (#9786)

* Add support for ROCm gfx1151 (#9773)

* conditionally enable parallel pipelines

* sample: make mutations in transforms explicit (#9743)

* updated minP to use early exit making use of sorted tokens

* ml/backend/ggml: allocate memory with malloc when loading model (#9822)

* runner: remove cache prompt flag from ollama runner (#9826)

We do not need to bypass the prompt caching in the ollama runner yet, as
only embedding models needed to bypass the prompt caching. When embedding
models are implemented they can skip initializing this cache completely.

* ollamarunner: Check for minBatch of context space when shifting

Models can specify that a group of inputs need to be handled in a single
batch. However, context shifting didn't respect this and could trigger
a break anyways. In this case, we should instead trigger a context
shift earlier so that it occurs before the grouped batch.

Note that there are still some corner cases:
 - A long prompt that exceeds the context window can get truncated
   in the middle of an image. With the current models, this will
   result in the model not recognizing the image at all, which is
   pretty much the expected result with truncation.
 - The context window is set less than the minimum batch size. The
   only solution to this is to refuse to load the model with these
   settings. However, this can never occur with current models and
   default settings.

Since users are unlikely to run into these scenarios, fixing them is
left as a follow up.

* Applied latest patches from McBane87

See this for details: https://github.com/whyvl/ollama-vulkan/issues/7#issuecomment-2708820861

Signed-off-by: Vadim Grinco <vadim@grinco.eu>

* Add ability to enable flash attention on vulkan (#4)

* discover: add flash attention handling for vulkan
* envconfig: fix typo in config.go

As part of the process some code was refactored and I added a new field
FlashAttention to GpuInfo since the previous solution didn't allow for a
granular check via vulkan extensions. As a side effect, this now allows
for granular per-device FA support checking in other places

---------

Signed-off-by: Vadim Grinco <vadim@grinco.eu>
Co-authored-by: zeo <108888572+zeozeozeo@users.noreply.github.com>
Co-authored-by: Louis Beaumont <louis.beaumont@gmail.com>
Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
Co-authored-by: Michael Yang <mxyng@pm.me>
Co-authored-by: Parth Sareen <parth.sareen@ollama.com>
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Nikita <50599445+nasrally@users.noreply.github.com>

* Revert Readme changes

* Revert

* Revert changes in amd_linux.go

* Revert changes in amd_linux.go

* Remove flashattention setting gpu.go

* Revert whitespace changes in gpu.go

* Revert changes in transforms_test.go

* Revert changes in runner.go

* Revert changes in Makefile.sync

* Revert some unintended changes in Dockerfile

* Revert vulkan copy changes in Dockerfile

* Update Vulkan Code to de4c07f93783a1a96456a44dc16b9db538ee1618

* Fixed duplicate sync in ggml.go

* Revert changes in ggml.go

* Revert chnages in ggml.go

* enable flash attention on vulkan

* revert remove parenthesis

* fixed flash attention logic enabling

* vk_check_flash_attention 0 means supported

* Update gpu.go

* Add vulkan to Windows Build script

* Remove commented out code

* Enable Vulkan Flash attention in FlashAttentionSupported

* Fix logging

* Update Vulkan backend to e54d41befcc1575f4c898c5ff4ef43970cead75f

* Removed libcap related code

libcap is not directly related to Vulkan and should be added by its own PR. It adds additional library dependencies for building and also requires users to run setcap or run ollama as root, which is not ideal for easy use

* Fix Unit Test (Add Vulkan Library)

* Add vulkan to TestHomogeneousGPUs
Test

* vulkan: get GPU ID (ollama v0.11.5)

Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>

* disable mmap for vulkan

* Reduce Changes remove TestHomogeneousGPUs (doesn't exist on master)

* Update vulkan version to the version used in llama.cpp

* rename gpu patch to correct number

* added Vulkan API to get correct Device UUID

current UUID from pipelineCacheUUID does not match CUDA

* Fix GPU ID Patch

* Remove Code not in llama.cpp

* modified UUID code inside ggml

* Fix Patch

* Copied minimal definition from vulkan header

* Fix compile error in Mac

Metal is preferred so we're disabling Vulkan for now

* Removed unused code

Fix linter error in CI

* Fix patches apply

* fixing lint error

* Removed unneeded function call

Somehow removing this call fixed the crashing when Vulkan header was removed

* added missing NL

* Fixed missing members in Vulkan header

also added zero clear for some structs

* Fixed wrong structure ID

* Fixed Vulkan header

More aligned with official header definition now

* buildvulkanAsSeperateFunction

* Vulkan on Windows Test

* temporarily comment out gate to run windows task

* temporarily use windows-latest for build

* Commenting out other presets to build vulkan

* reenable cpu

* commenting out error action stop

* temporarily commenting out rocm

* set vulkan path

* comment out cuda for faster turnaround

* correct vulkan install

* correct vulkan silent install

* fixed install command

* revert debugging changes (vulkan builds on windows)

* revert windows-latest

* trying to build vulkan for linux

* temporarily disable cuda and rocm

* try again linux build

* fix version

* trying to fix

* trying again

* trying again

* fix version

* fixed vulkan-sdk name

* try again

* trying again

* try without version number

* try again

* add some more extra

* trying to use version 1.4.313

* revert debugging changes

* Filter out already supported gpus

* revert debug code

* Use runners for GPU discovery

This revamps how we discover GPUs in the system by leveraging the Ollama
runner. This should eliminate inconsistency between our GPU discovery and the
runner's capabilities at runtime, particularly for cases where we try to filter
out unsupported GPUs.  Now the runner does that implicitly based on the actual
device list. In some cases free VRAM reporting can be unreliable, which can
lead to scheduling mistakes, so this also includes a patch to leverage more
reliable VRAM reporting libraries if available.

Automatic workarounds have been removed as only one GPU leveraged this, which
is now documented. This GPU will soon fall off the support matrix with the next
ROCm bump.

Additional cleanup of the scheduler and discovery packages can be done in the
future once we have switched on the new memory management code, and removed
support for the llama runner.

* timing info for runner

* WIP - wire up Vulkan with the new engine based discovery

Not a complete implementation - free VRAM is better, but not accurate on
windows

* fix - trust the library paths from discovery when starting runner

* fix index bug

* fix vulkan ids to be underlying

* fix - give bootstrapping more time on slow systems

* Test if Vulkan device is supported

* vk_check_flash_attention is not needed (coopmat2, coopmat and scalar implementations exist)

* Handle GGML_VK_VISIBLE_DEVICES

* ask for supported first

* win: fix CPU query buffer handling

Try in a short loop until we get the size right.
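
A minimal sketch of that loop, with a hypothetical queryFn standing in for the actual size-probing Windows call (not the code from this commit):

```
// queryWithGrowingBuffer retries a size-probing system call until the buffer
// is large enough; the required size can change between calls, so we loop.
func queryWithGrowingBuffer(queryFn func(buf []byte) (needed uint32, ok bool)) []byte {
	size := uint32(512)
	for i := 0; i < 8; i++ { // bounded retries
		buf := make([]byte, size)
		needed, ok := queryFn(buf)
		if ok {
			return buf[:needed]
		}
		size = needed // retry with the size the API reported
	}
	return nil
}
```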

* test: harden integration tests for slow start

If the server takes a while to start up, block
tests from starting until it's online to avoid
setting large timeouts in individual test cases.
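
A sketch of the gating idea, assuming the server's /api/version endpoint as the readiness probe (illustrative, not the harness code):

```
import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitForServer blocks until the server answers, or the context expires.
func waitForServer(ctx context.Context, baseURL string) error {
	for {
		resp, err := http.Get(baseURL + "/api/version")
		if err == nil {
			resp.Body.Close()
			return nil // server is online; tests may start
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("server not ready: %w", ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```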

* gofumpt fix

* fix build

* merge fixes

* merge fixes

* fixed build

* merge fixes

* fixing build

* fixed build

* fixed formatting

* fixed build

* fix vulkan gpu id patch

* sync llama.cpp vulkan code

* update build windows script

* merge fixes

* fix format

* fixed vulkan casing

* handle igpu as gpu

* improve case

* print out unknown library

* return Vulkan for vulkan library

* Revert "rturn Vulkan for vulkan library"

This reverts commit 690461a12fd5e93295d174c97edefb2bc33285b1.

* fixed patch number

* return Library Name

* remove debug code

* return integrated in vulkan backend

* Return pci Properties

* update patch

* directly get pci properties without parsing

* workaround for filtering devices. The correct way is to have a LibraryPosition parameter in the deviceInfo

* Revert "directly get pci proeprties without parsing"

This reverts commit 8e0624851f5ed7d9f74518f574dfb422e4dd4dc2.

* Set FilteredID for Environment Filtering

* ROCm Library is named ROCm

* revert changes in patch

* Create 0028-vulkan-pci-and-memory.patch

* vulkan memory patch

* casing fix

* Add more pci properties

* Added better memory management

* Added better memory management

* fixed patch

* Fixed patch

* FilterID creation group by library

* filter out vulkan devices already supported by another gpu

* fixing deviceid compare

* Vulkan Fix FA coopmat1 invalid array indexing

* Use everywhere the same Vulkan Version 1.4.321.1

* Remove unneeded patch

* vulkan update

* sync vulkan glsl files

* only use the FilteredID (numeric device number) for vulkan

* simplify code

---------

Signed-off-by: Vadim Grinco <vadim@grinco.eu>
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: pufferffish <github@bandersnatch.anonaddy.com>
Co-authored-by: KOISHI KOMEIJI FROM TOUHOU 11 <fuck>
Co-authored-by: DSLstandard <qgeneral35@gmail.com>
Co-authored-by: pufferffish <me@windtfw.com>
Co-authored-by: yeongbba <yeongmo.lee@logpresso.com>
Co-authored-by: tomaThomas <tomathomas@mailbox.org>
Co-authored-by: Antoine Viallon <antoine@lesviallon.fr>
Co-authored-by: Vadim Grinco <vadim@grinco.eu>
Co-authored-by: zeo <108888572+zeozeozeo@users.noreply.github.com>
Co-authored-by: Louis Beaumont <louis.beaumont@gmail.com>
Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
Co-authored-by: Michael Yang <mxyng@pm.me>
Co-authored-by: Parth Sareen <parth.sareen@ollama.com>
Co-authored-by: Jeffrey Morgan <jmorganca@gmail.com>
Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Nikita <50599445+nasrally@users.noreply.github.com>
Co-authored-by: Masato Nakasaka <masato.nakasaka@intel.com>
Co-authored-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
2025-10-14 10:59:58 -07:00
Devon Rifkin
fd8aa947f3 Merge pull request #12562 from ollama/drifkin/registries
add registries for parsers/renderers
2025-10-14 02:01:53 -07:00
Devon Rifkin
ddaca643d0 add registries for parsers/renderers 2025-10-14 01:13:54 -07:00
Grace
05982a95cb Qwen3VL Cloud Parser and Renderer (#12526)
* working (other than tool calls being in the incorrect order) for tool calls and tools

* Tests work, other than image tags (tests do not go through server) and tools (not in the correct order, but contents are the same)

* testing for qwen3vl parser - toolparser is working

* made changes to JSON tool parser, wraps the ToolCallFunction with a ToolCall object

* Working parser for thinking models - assumes state of thinking, emits unambiguous content in thinking, does not emit tool calls while thinking

* changed the parser to start with collecting content

* thinking prefill

* add hasThinkingSupport parameter to parser

* qwen3-vl -> qwen3-vl-instruct for renderer/parser

* Add hasThinkingSupport=false to QwenVLParser

---------

Co-authored-by: Devon Rifkin <drifkin@drifkin.net>
2025-10-13 16:52:33 -07:00
Gabe Goodhart
4987f13d34 Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552)
* feat: Bump llama.cpp to df1b612

Branch: LlamaCPPBump-GraniteDocling

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(mtmd): Correctly encode text chunks during mtmd tokenization

There can be text chunks that appear interspersed with the image embeddings
that contain template delimiter tokens for some models. These need to be
correctly translated to text tokens.

Branch: LlamaCPPBump-GraniteDocling

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* tests: Use MtmdChunk in image_test

Branch: LlamaCPPBump-GraniteDocling

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* style: Fix unnecessary conversion linting

Branch: LlamaCPPBump-GraniteDocling

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix(ggml): Revert changes to ggml_hip.cpp

These changes were done largely by our code assistant and are likely wrong

Branch: LlamaCPPBump-GraniteDocling

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Revert changes in mem_nvml.cpp

Branch: LlamaCPPBump-GraniteDocling

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Update sync point to 1deee0

This brings in several more optimization commits and model support for
EmbeddingGemma

Branch: LlamaCPPBump-GraniteDocling

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Update patches for 1deee0

Branch: LlamaCPPBump-GraniteDocling

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: sync for bump to 1deee0

Branch: LlamaCPPBump-GraniteDocling

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Bad patch updates with errant `+`

Branch: LlamaCPPBump-GraniteDocling

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Bump llama.cpp/ggml to 7049736

Branch: LlamaCPPBump-GraniteDocling

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: format-patches after latest bump

Branch: LlamaCPPBump-GraniteDocling

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

---------

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-10-13 15:26:18 -07:00
Jeffrey Morgan
e638f2acb6 runner: fix shifting on llama runner (#12604) 2025-10-13 13:46:33 -07:00
Michael Yang
18087f2ec7 Revert "use llama runner for qwen3 (#12556)"
This reverts commit 3d32249c74.
2025-10-13 13:30:30 -07:00
Michael Yang
6c833d5f8d fix(qwen3): deepseek distill
deepseek's qwen3 distill uses a different rope scheme so support both
2025-10-13 13:30:30 -07:00
Jeffrey Morgan
6544e14735 Reapply "add truncate and shift parameters" (#12582) 2025-10-11 16:06:14 -07:00
Devon Rifkin
5db8a818a1 Merge pull request #12581 from ollama/drifkin/renderer-api-generate
routes: fix built-in renderers for `api/generate`
2025-10-11 14:10:23 -07:00
Devon Rifkin
6db8da9958 routes: fix built-in renderers for api/generate
Made it so when api/generate builds up a message array and generates the
prompt it now goes through the same function as `api/chat` for
consistency. This is where we hook the optional built-in renderers to
bypass templates, which was missing for `api/generate` before this
change.

Closes: #12578
2025-10-11 13:57:43 -07:00
frob
0c68ec8d6a discover: fix typo (#12565) 2025-10-11 12:06:02 -07:00
Daniel Hiltgen
70d9e363e1 doc: remove AMD EOL GPUs (#12567) 2025-10-10 17:16:29 -07:00
Michael Yang
1a2feb2a97 ollamarunner: fix deadlock
hardErrCh will deadlock since forwardBatch is blocked on
computeStartedCh which never gets sent. since the response to
hardErrCh is to panic, just panic instead
2025-10-10 16:49:57 -07:00
Daniel Hiltgen
aab2190420 implement nvml for linux (#12517)
* implement nvml for linux

* Improve scheduler logging when VRAM doesn't recover
2025-10-10 15:15:56 -07:00
Michael Yang
629db9dc43 comment split 2025-10-10 13:25:34 -07:00
Michael Yang
e0cd511661 fix test 2025-10-10 13:25:34 -07:00
Michael Yang
207332078f fix lint 2025-10-10 13:25:34 -07:00
Michael Yang
93085127f4 convert: slice gate_up weight 2025-10-10 13:25:34 -07:00
Michael Yang
c00fa9cc2b convert: split gate_up bias 2025-10-10 13:25:34 -07:00
yajianggroup
df411c4b02 refactor: using testing.B.Loop
Signed-off-by: yajianggroup <yajianggroup@outlook.com>
2025-10-10 13:25:29 -07:00
Jeffrey Morgan
3d32249c74 use llama runner for qwen3 (#12556) 2025-10-09 19:08:21 -07:00
Patrick Devine
d681cd7c29 thinking: allow "think": false for non-thinking models (#12555) 2025-10-09 18:46:00 -07:00
shengxinjing
47298fce39 refactor: use builtin max and min 2025-10-09 16:17:52 -07:00
shengxinjing
4a48937ef1 refactor: use builtin max and min 2025-10-09 16:17:52 -07:00
Michael Yang
967a82f52f ollamarunner: measure only active time 2025-10-09 15:44:04 -07:00
Michael Yang
bbbc73d637 llamarunner: update metrics
this change updates how metrics are collected. until now, performance
metrics, specifically initial input processing and subsequent generation
durations, were collected by taking timestamps at sequence creation, first
token generation, and generation completion. the processing duration is
taken as first token generation minus sequence creation, while the
generation duration is taken as generation completion minus first token
generation.

while this approach is an accurate end-to-end metric of processing and
generation, it's not comparable to other tools which only measure the
active, i.e. decode, duration.

this change updates the metrics to only capture decode duration so it
can be more directly compared to other tools
2025-10-09 15:44:04 -07:00
Daniel Hiltgen
15e3611d3d logs: quiet down context canceled on completion and scheduler noise (#12553)
* logs: quiet down context canceled on completion

If the client closes the connection before Completion finishes, we were
logging at error level implying the runner crashed which was misleading.

time=2025-10-08T22:59:20.566-07:00 level=ERROR source=server.go:1490 msg="post predict" error="Post \"http://127.0.0.1:57736/completion\": context canceled"

* quiet down scheduler log error on expected case

Since we don't hold the lock while performing memory load calculations, other
runners can unload in parallel, so finding no runner to unload is a valid scenario
which we shouldn't log at error level.
2025-10-09 10:37:47 -07:00
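A sketch of the demotion, assuming the handler can test the returned error with errors.Is (hypothetical helper, not the server code):

```
import (
	"context"
	"errors"
	"log/slog"
)

// logPredictErr demotes expected client cancellations so a closed connection
// no longer reads like a runner crash in the logs.
func logPredictErr(err error) {
	if errors.Is(err, context.Canceled) {
		slog.Debug("completion canceled by client", "error", err)
		return
	}
	slog.Error("post predict", "error", err)
}
```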
Parth Sareen
77060d462c routes: structured outputs for gpt-oss (#12460) 2025-10-08 19:13:38 -07:00
Patrick Devine
1b91d4dda1 openai: change the reasoning_effort field to also take none 2025-10-08 18:21:01 -07:00
Jeffrey Morgan
7d965258ce Revert "add truncate and shift parameters (#12519)" (#12545)
This reverts commit 6a62b894c7.
2025-10-08 17:57:57 -07:00
Jeffrey Morgan
6a62b894c7 add truncate and shift parameters (#12519) 2025-10-08 17:05:05 -07:00
Patrick Devine
90d429f5a8 thinking: turn on thinking mode for all reasoning models (#12533) 2025-10-08 16:50:13 -07:00
Jesse Gross
1fc35f1260 kvcache: Clean up sliding window state with independent batches
Sliding windows models (e.g. gpt-oss, gemma3) remove tokens that
are out of the cache's window each time we start a new forward pass.

The cache storage needs to handle the window size for each sequence
plus the batch size, since the batch needs to attend to the full
window size. This means that we have greater than a window size
stored while processing the batch.

When the next batch comes, we are currently only looking at the
sequences in the incoming batch to slide the window forward.
However, we also need to clean up the other sequences that might
be occupying space in the batch processing buffer to ensure each
sequence is only using its window size of storage. Failure to do
this can result in "no kv cache slot found" errors.

Fixes: #10127
2025-10-08 16:43:14 -07:00
Jesse Gross
aa45f7ce27 discover: Disable flash attention for Jetson Xavier (CC 7.2)
GGML picks the wrong kernel and these systems fail with:
Sep 28 22:25:39 xavier ollama[48999]: //ml/backend/ggml/ggml/src/ggml-cuda/fattn-wmma-f16.cu:437:
ERROR: CUDA kernel flash_attn_ext_f16 has no device code compatible with CUDA arch 720. ggml-cuda.cu
was compiled for: __CUDA_ARCH_LIST__

Fixes #12442
2025-10-08 09:56:15 -07:00
Daniel Hiltgen
4e5d862ec4 Integration test tuning (#12492)
Remove some flaky scenarios, and switch to chat for better reliability
2025-10-08 09:51:25 -07:00
Daniel Hiltgen
303be9304c docs: improve accuracy of LLM library docs (#12530) 2025-10-07 16:21:07 -07:00
Daniel Hiltgen
bd15eba4e4 Bring back escape valve for llm libraries and fix Jetpack6 crash (#12529)
* Bring back escape valve for llm libraries

If the new discovery logic picks the wrong library, this gives users the
ability to force a specific one using the same pattern as before. This
can also potentially speed up bootstrap discovery if one of the libraries
takes a long time to load and ultimately binds to no devices.  For example,
unsupported AMD iGPUs can sometimes take a while to discover and rule out.

* Bypass extra discovery on jetpack systems

On at least Jetpack6, cuda_v12 appears to expose the iGPU, but crashes later on in
cublasInit, so if we detect a Jetpack, we short-circuit and use that variant.
2025-10-07 16:06:14 -07:00
Devon Rifkin
bc71278670 Merge pull request #12509 from ollama/drifkin/oai-compat-refactor
openai: refactor to split compat layer and middleware
2025-10-06 16:22:08 -07:00
Daniel Hiltgen
918231931c win: fix build script (#12513) 2025-10-06 14:46:45 -07:00
Daniel Hiltgen
04c1849878 discovery: prevent dup OLLAMA_LIBRARY_PATH (#12514)
This variable isn't currently documented or intended as something the user can
override, but if the user happens to set OLLAMA_LIBRARY_PATH we were doubling
this in the subprocess environment which will cause problems with the new
bootstrap discovery logic.
2025-10-06 14:36:44 -07:00
Devon Rifkin
2c2f4deaa9 openai: refactor to split compat layer and middleware
This makes the core openai compat layer independent of the middleware
that adapts it to our particular gin routes
2025-10-05 14:18:56 -07:00
Daniel Hiltgen
292767afb4 CI: fix win arm build (#12502)
Resolve subtle erroraction stickiness difference between x86 and arm builder setup
2025-10-04 11:46:45 -07:00
Daniel Hiltgen
ae5e0f0889 CI: replace clang compiler for windows (#12495) 2025-10-04 09:18:42 -07:00
Jesse Gross
19e6796eac llm: Support KV cache quantization with gpt-oss
With the new version of GGML in #12245, KV cache quantization
no longer causes a fallback to CPU.
2025-10-03 16:31:58 -07:00
Grace
33801c1597 Fixed Deepseek2 adding nil tensor error 2025-10-03 14:20:06 -07:00
Daniel Hiltgen
e4340667e3 Workaround broken NVIDIA iGPU free VRAM data (#12490)
The CUDA APIs for reporting free VRAM are useless on NVIDIA iGPU
systems as they only return the kernel's actual free memory and ignore
buff/cache allocations, which on a typical system will quickly fill up
most of the free system memory.  As a result, we incorrectly think
there's very little memory available for GPU allocations.
2025-10-03 12:17:21 -07:00
Patrick Devine
2fa1e92a99 test: add template error test (#12489) 2025-10-03 12:05:34 -07:00
Daniel Hiltgen
07e36761c3 ci: place rocm windows in correct runner dir (#12487) 2025-10-03 07:28:40 -07:00
Daniel Hiltgen
c29fb007c0 CI: temporarily disable clang install (#12486)
This will likely yield builds that have problems with unicode characters
but at least we can start testing the release while we try to find an
alternate clang compiler for windows, or until mingw ships a fixed version.
2025-10-02 20:31:18 -07:00
Daniel Hiltgen
730ed6e9e1 ci: fix windows build (#12485) 2025-10-02 19:16:01 -07:00
Daniel Hiltgen
dc06601677 ci: fix windows build (#12484) 2025-10-02 18:59:26 -07:00
Patrick Devine
1ed2881ef0 templates: fix crash in improperly defined templates (#12483) 2025-10-02 17:25:55 -07:00
Jesse Gross
0bda72892c llm: Enable flash attention by default for qwen3 and qwen3moe 2025-10-02 17:04:10 -07:00
Daniel Hiltgen
55ca827267 AMD: block running on unsupported gfx900/gfx906 (#12481) 2025-10-02 16:53:05 -07:00
Daniel Hiltgen
c68f367ef6 Update GGML to b6646 (#12245)
Notable EOLs with this change:
- MacOS v12 and v13 are no longer supported (v14+ required)
- AMD gfx900 and gfx906 are no longer supported
2025-10-02 14:47:10 -07:00
Jesse Gross
fdb109469f llm: Allow overriding flash attention setting
As we automatically enable flash attention for more models, there
are likely some cases where we get it wrong. This allows setting
OLLAMA_FLASH_ATTENTION=0 to disable it, even for models that usually
have flash attention.
2025-10-02 12:07:20 -07:00
Daniel Hiltgen
05a43e078a fix panic on bootstrapDevices (#12475)
Wrong index variable was used.
2025-10-01 17:39:29 -07:00
Daniel Hiltgen
bc8909fb38 Use runners for GPU discovery (#12090)
This revamps how we discover GPUs in the system by leveraging the Ollama
runner.  This should eliminate inconsistency between our GPU discovery and the
runners' capabilities at runtime, particularly for cases where we try to filter
out unsupported GPUs.  Now the runner does that implicitly based on the actual
device list.  In some cases free VRAM reporting can be unreliable, which can
lead to scheduling mistakes, so this also includes a patch to leverage more
reliable VRAM reporting libraries if available.

Automatic workarounds have been removed as only one GPU leveraged this, which
is now documented. This GPU will soon fall off the support matrix with the next
ROCm bump.

Additional cleanup of the scheduler and discovery packages can be done in the
future once we have switched on the new memory management code, and removed
support for the llama runner.
2025-10-01 15:12:32 -07:00
Devon Rifkin
6b50f2b9cd Merge pull request #12461 from ollama/drifkin/qwen3-coder-tweaks
qwen3-coder: fix tool definition type rendering
2025-09-30 19:47:44 -07:00
Michael Yang
35ac4eb12c fix keep alive
this reference to keep alive was missed in #12041 so chat has a
different behaviour than generate
2025-09-30 17:22:28 -07:00
Jesse Gross
3d0b1734c0 ggml: Preallocate CUDA pool memory
The GGML CUDA backend allocates additional memory for intermediate
results during calculation. This memory isn't currently allocated
during worst case graph reservation and therefore not included in
scheduling. This means that as these buffers potentially grow
with context length, we could crash.

This extends the memory allocation system down a layer from the GGML
graph to the CUDA layer, preallocating the worst case memory there
as well.

Fixes #11753
2025-09-30 15:04:43 -07:00
Jesse Gross
efaee8c2d6 ggml: Backport scale kernel fixes
The GGML scale kernel uses signed 32-bit ints to represent
the number of elements in the tensor. For large images,
mistral-small3.2 overflows this, triggering CUDA errors due
to negative arguments.

Currently, this can happen when the user passes a large image
to mistral-small3.2. However, with upcoming changes to reserve
CUDA memory, it happens every time mistral-small is loaded as
we reserve using a worst case batch.

This patch is part of an upstream GGML commit and should be removed
after GGML is updated past 0a1b398 "ggml: add ops for WAN video model
(cuda && cpu) (#15669)".

Fixes #10388
2025-09-30 15:04:43 -07:00
Jesse Gross
734b57da0e ggml: Remove allocation status reporting
For each memory allocation we report the size of the (attempted)
allocation and whether it succeeded or failed. The latter status
reporting proved to be not that useful in practice as systems
such as Windows can automatically overflow from VRAM into RAM,
resulting in successful allocations even when there isn't
enough memory where we wanted it.

As a result, this information is only used for debug logging,
which isn't worthwhile enough for the amount of code. It
also isn't fully accurate, as multiple allocations may result
in partial failures.
2025-09-30 15:04:43 -07:00
Devon Rifkin
83021fcf0f qwen3-coder: fix tool definition type rendering 2025-09-30 15:03:15 -07:00
Michael Yang
0469861d9d build: call find_package to instantiate library paths 2025-09-30 13:12:46 -07:00
likelovewant
04431b50fa fix 2025-09-28 12:37:28 +08:00
羊撅撅
c47154c08d fix: correct condition for AMDGPU_TARGETS filtering logic (#12412) 2025-09-26 11:38:47 -07:00
Patrick Devine
b04e46da3e bugfix: restore the current runOptions if loading fails in the CLI (#12402)
There are two bugs when using `/load <model>` for a model that doesn't exist, namely:
  1. it will not restore the current model settings if the current model is a thinking model; and
  2. it will crash if the current model is a non-thinking model

This bug fix saves the current runOptions and then restores them if the model load
doesn't happen. It also fixes the crash happening for non-thinking models.
2025-09-25 18:30:45 -07:00
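A sketch of the save-and-restore shape described above (hypothetical helper; the actual CLI code differs):

```
// loadOrRestore snapshots the active runOptions before attempting a load and
// puts them back if the load fails, so /load of a missing model is harmless.
func loadOrRestore(opts *runOptions, load func(*runOptions) error) error {
	saved := *opts // copy the current settings, thinking flags included
	if err := load(opts); err != nil {
		*opts = saved // restore the previous model's settings
		return err
	}
	return nil
}
```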
Devon Rifkin
34efbbd3f0 Merge pull request #12417 from ollama/drifkin/qwen3-coder-unicode
parsers: fix unicode handling for qwen3-coder
2025-09-25 15:56:34 -07:00
Devon Rifkin
05ba4ca1f4 parsers: fix unicode handling for qwen3-coder
When trimming whitespace at the end of every chunk, we were iterating
backwards over the string byte-by-byte instead of rune-by-rune.

As an example of how this can cause corruption, suppose we have the
multi-byte character ✅ (`"\u2705"`), which is represented in utf-8 as
the three bytes `0xE2 0x9C 0x85`. It happens that `0x85` is NEL, which
passes `unicode.IsSpace()`. Because we were iterating byte-by-byte, this
caused us to mistakenly slice in the middle of the rune, removing `0x85`
and leaving `0xE2 0x9C`, which beyond being the incorrect place to
slice, is not even a valid utf-8 character.

`trailingWhitespaceLen()` was modified to count from the end in a
rune-aware way. Tests with various multibyte unicode characters were
also added.


Fixes: #12414
2025-09-25 15:47:46 -07:00
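A minimal sketch of a rune-aware trailingWhitespaceLen(), reconstructed from the description above rather than copied from the commit:

```
import (
	"unicode"
	"unicode/utf8"
)

// trailingWhitespaceLen counts trailing whitespace bytes rune-by-rune, so a
// multi-byte rune whose final byte looks like whitespace in isolation
// (e.g. the 0x85 NEL byte inside U+2705) is never sliced in the middle.
func trailingWhitespaceLen(s string) int {
	n := 0
	for len(s) > 0 {
		r, size := utf8.DecodeLastRuneInString(s)
		if !unicode.IsSpace(r) {
			break
		}
		n += size
		s = s[:len(s)-size]
	}
	return n
}
```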
Patrick Devine
5a56ff3cf0 cli: add device signin flow when doing ollama push (#12405) 2025-09-25 15:04:43 -07:00
Gabe Goodhart
2fba04b5fb tools: handle the case where a tool call sends "arguments" or "parameters" as a serialized json string (#12413) 2025-09-25 14:37:39 -07:00
Grace
fbd82ba5bb Grace/deepseek v3 migration (#12385)
* init deepseek model file

* temp removal of flash attention implementation

* shapes and proper, can make a pass

* query, key, value have good cosine similarity, but the max diff is a bit high

* Attention block is working! ** with eager for now, have not added the mask line

* Attention block is working! ** with eager for now, have not added the mask line

* working MoE at around 0.95 cosine sim

* added cosine similarity function

* Starting end to end structure

* Trying (and failing) to get rope to work, going to test full thing on tater

* running on tater36... just not the right outputs

* we have the right values for rope... but its still not working?

* change Extrapolation Factor to 1

* removed adding residuals twice, removed normalization from shared expert, refactored Norms (Attention, MLP) to be outside the (Attention, MLP) blocks and in the Transformer block instead, add cache setLayer

* Temporary modelfiles for cpu

* change kpass intermediate step to kv, two layer outputs [0,1] look fine

* this calls for 16 chicken nuggets

* whoops

* cleaning up code

* delete stuff we dont need

* getting rid of debug statements for llama cpp

* working with long contexts

* fix long context view error

* reverting some changes I made for files that are not a part of the PR

* Added proper tokenizer for deepseek3

* clean up model and go test

* remove Modelfile

* not passing the tests

* whoops

* how to pass the ci tests

* resolving some of the comments

* rename

* linted and renamed deepseek3 -> deepseek2

* remove name go

* addressed changes - main change was adopting qwen3 naming scheme

* I cannot with linters

* clean up logs

* clean up logs

---------

Co-authored-by: Grace Guo <graceguo@Graces-MBP.localdomain>
Co-authored-by: Grace Guo <graceguo@Graces-MacBook-Pro.local>
Co-authored-by: graceguo <graceguo@tater36.localdomain>
2025-09-24 15:19:47 -07:00
Michael Yang
2e742544bf prefer ollama engine for qwen3moe (#12374) 2025-09-24 11:21:32 -07:00
Devon Rifkin
bbb195a6ff Merge pull request #12393 from ollama/drifkin/fix-built-ins
harmony: don't sanitize built-ins
2025-09-23 23:45:31 -07:00
Devon Rifkin
fd88cd7cb0 harmony: don't sanitize built-ins
In #11910 we started sanitizing function names, but we accidentally were
modifying built-ins like `browser.open` to `browser_open`. This was
removing the special prompt rendering for built-ins, but this wasn't
immediately apparent since the models seem to be reasonably good at
remembering the built-ins even when presented with these slightly
renamed version. This fix prevents built-ins from ever being renamed.
2025-09-23 23:34:55 -07:00
Michael Yang
e1979c571a fix: leaf alt name (#12390)
a leaf node with an alternative name gets all its alternatives names
added into the same branch rather than creating branches themselves
2025-09-23 17:50:53 -07:00
Michael Yang
bf78ed6ee9 add pre:, suf: to tags (#12274) 2025-09-23 16:08:57 -07:00
Michael Yang
a40d427bce multi-regexp pretokenizer (#12325) 2025-09-23 13:21:47 -07:00
Patrick Devine
64883e3c4c auth: fix problems with the ollama keypairs (#12373)
* auth: fix problems with the ollama keypairs

This change adds several fixes including:
  - reading in the pubkey files correctly
  - fixing the push unit test to create a keypair file in a temp directory
  - not return 500 errors for normal status error
2025-09-22 23:20:20 -07:00
Devon Rifkin
41efdd4048 Merge pull request #12339 from ollama/drifkin/harmony-refactor-to-builtin
harmony: remove special casing in routes.go
2025-09-22 13:13:40 -07:00
Daniel Hiltgen
c23e6f4cae tests: add single threaded history test (#12295)
* tests: add single threaded history test

Also tidies up some existing tests to handle more model output variation

* test: add support for testing specific architectures
2025-09-22 11:23:14 -07:00
jmorganca
af060eb250 docs: update cloud.md for cloud models 2025-09-22 13:09:17 -03:00
jmorganca
ae5c33008e docs: move turbo.md to cloud.md 2025-09-22 13:09:17 -03:00
likelovewant
000a3ec8b9 Merge branch 'ollama:main' into main 2025-09-21 10:33:39 +08:00
Devon Rifkin
3677842ff1 Merge pull request #12358 from ollama/drifkin/qwen3-coder-ampersands
parsers: fix `&`s in qwen3coder parameter values
2025-09-20 12:40:33 -07:00
Devon Rifkin
242df70a75 parsers: fix &s in qwen3coder parameter values
In <https://github.com/ollama/ollama/issues/12357> we saw that the model
will output tool calls such as

```
<function=shell>
<parameter=command>
pwd && ls -la
</parameter>
</function>
```

We parse this using the approach of transforming into valid xml and then
using an xml parser. While we do transform the function and parameter
names, we weren't escaping the parameter values (which in this example
are invalid since `pwd && ls -la` contains unescaped ampersands).

This has been fixed by first transforming the tags in the same way, and
then walking the transformed string and escaping the text in between the
tags. This also fixes a case where `<` in the middle of a parameter
value would cause an xml parse failure.

Fixes: #12357
2025-09-20 12:11:38 -07:00
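One way to do the escaping step is encoding/xml's EscapeText over the text between the rewritten tags; a sketch under that assumption:

```
import (
	"encoding/xml"
	"strings"
)

// escapeParamValue escapes raw text (e.g. "pwd && ls -la") so it can sit
// between the rewritten <parameter> tags as valid xml character data.
func escapeParamValue(raw string) (string, error) {
	var b strings.Builder
	if err := xml.EscapeText(&b, []byte(raw)); err != nil {
		return "", err
	}
	return b.String(), nil // "pwd &amp;&amp; ls -la"
}
```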
Patrick Devine
dba39b2eee gemma: fix rope scaling for qat models (#12348)
* gemma: fix rope scaling for qat models

* gofumpt yourself
2025-09-19 15:04:40 -07:00
Michael Yang
9f3a37fd36 fix: model load for unsupported embedding models (#12311)
with #12181, there's now support for embeddings in ollama engine.
this is done by mutating the architecture and adding _embed when it
detects an embedding model. however this introduced a bug where if
an embedding model was run based on an existing ollama engine model
without an embedding implementation, e.g. llama4, it will pass the
initial arch support check but fail when actually loaded.

there are currently two entrypoints to creating a model. previously this
second entrypoint was necessary because calling model.New would also
load the model. since #11818, this is no longer the case, so merge them
to reduce complexity
2025-09-18 16:11:08 -07:00
Michael Yang
7460259eb3 feat: qwen3 embed (#12301)
* cleanup

* use pooling.TypeNone

* pooling test

* qwen3 embed
2025-09-18 15:50:32 -07:00
Jeffrey Morgan
22ccdd74c2 server: add unauthorized error to remote chat handler (#12338) 2025-09-18 15:40:31 -07:00
Daniel Hiltgen
0c3d0e7533 build: avoid unbounded parallel builds (#12319)
With the addition of cuda v13, on a clean setup, the level of parallelism
was causing docker desktop to become overwhelmed and compilers
were crashing.  This limits builds to 8 parallel jobs per build stage, with the ability
to override if you have many more cores available.
2025-09-18 14:57:01 -07:00
Devon Rifkin
e7f56ef3d8 harmony: remove special casing in routes.go
Now that we have a built-in parser abstraction, which was introduced in
<https://github.com/ollama/ollama/pull/12248>, we can modify our harmony
parser to match this and then get rid of nearly all of the
harmony-specific logic in routes.go. We do have a small amount of
code that turns the parser on by default if the architecture matches and
no other built-in parser was provided.

The built-in parser interface was modified in order to handle harmony's
prefill and tool name translation requirements.
2025-09-18 14:55:59 -07:00
Patrick Devine
eb0a5d4459 auth: check the permissions on the private key to see if it's readable (#12336) 2025-09-18 14:34:34 -07:00
Michael Yang
ceac416ec2 fix(integration): check truncated length (#12337) 2025-09-18 14:00:21 -07:00
Patrick Devine
2717dce6fe convert: convert bf16 vision weights to fp16 (#12324)
This change moves back to converting bf16 vision weights to fp16,
specifically if they start with the name "v." (such as v.blk.0.attn_k.weight).

This fixes a bug where converted images are failing because they are trying
to call `im2col` which doesn't have a bf16 kernel in ggml.
2025-09-17 17:43:17 -07:00
frob
9b8187b487 server: skip parsing initial <think> if provided in the prompt for /api/generate (#12289) 2025-09-17 16:39:04 -07:00
Patrick Devine
8b894933a7 engine: add remote proxy (#12307) 2025-09-17 14:40:53 -07:00
Daniel Hiltgen
9c5bf342bc fix: multi-cuda version skew (#12318)
Ensure that in a version skewed multi-cuda setup we use the lowest version for all GPUs
2025-09-17 13:05:09 -07:00
Michael Yang
564b558c92 fix(llama): other llama flavours (#12308)
* fix(llama): rope scale

* spm llama

* skip moe models

* cleanup
2025-09-17 12:12:21 -07:00
Michael Yang
a417ac97ee prefer ollama engine for qwen3 (#12310) 2025-09-17 09:48:21 -07:00
russcoss
05d53457af refactor: use the built-in max/min to simplify the code (#12280)
Signed-off-by: russcoss <russcoss@outlook.com>
2025-09-16 17:14:21 -07:00
Michael Yang
b225508c9b logutil: fix source field (#12279) 2025-09-16 16:18:07 -07:00
Devon Rifkin
fa1c987a29 Merge pull request #12248 from ollama/drifkin/qwen3-coder-parsing
add qwen3-coder tool support
2025-09-16 10:21:43 -07:00
Michael Yang
ad95d5b30b use split activations when possible (#12293)
* use ggml_*_split activations when possible

* forward qkv
2025-09-16 09:51:19 -07:00
Michael Yang
c253433d68 embed: cleanup (#12299)
* cleanup

* use pooling.TypeNone

* pooling test
2025-09-16 09:48:42 -07:00
Beshoy Girgis
a1cff89b30 fix: fix CUDA detection for older GPUs (#12300)
Prioritize GPU compute capability over driver version to ensure
Pascal GPUs (CC 6.1) use compatible CUDA v12 libraries instead of v13.
2025-09-16 07:47:06 -07:00
Daniel Hiltgen
93c64ea1b1 doc: show how to clear the cgo cache (#12298) 2025-09-15 15:45:35 -07:00
Michael Yang
3f6642f6fc model: implement bert in ollama engine (#9080)
* fix truncate

* s/SentencePieceModel/SentencePiece/

* bert

* wordpiece

* refactor pooling

* more tokenizers

* normalize embeddings
2025-09-15 15:35:59 -07:00
Michael Yang
6f7117145f batch: use tensors for outputs (#12185)
this cleans up the model interface slightly without too much impact in
other areas
2025-09-15 14:33:06 -07:00
Devon Rifkin
472feec2ff address comments 2025-09-15 11:46:25 -07:00
Devon Rifkin
47991940d4 add qwen3-coder tool support
The format qwen3-coder uses is relatively unique, both in rendering and
in parsing. To implement parsing, I wrote a custom parser in similar
style to harmony. For the rendering, I found that the logic would be
much more difficult to follow in a template, so I introduced the concept
of a built-in renderer that uses go code, rather than a template to
generate prompts.

I set us up for future built-in parsers and renderers by making it so
they can be specified in a Modelfile like so:

```
RENDERER "qwen3-coder"
PARSER "qwen3-coder"
```

These need to be provided explicitly because the architecture alone is
not enough to understand what format the model expects to receive, and
what format we expect it to output (e.g., qwen3-coder is `qwen3moe`,
which includes other qwen3-family models as well)

I haven't converted harmony to be one of these "built-ins" yet, since
some of it is in flux with the changes @ParthSareen has been making to
move harmony to the runner. It is likely that many other built-ins will
need to move to the runner as well, but I'm able to slightly defer that
decision since qwen3-coder doesn't have thinking (and therefore doesn't
need to be in the runner to make structured outputs work). I expect to
unify harmony with this approach very soon.

Whether a particular model supports tools or thinking was previously
inferred from templates, but without a template we now also use the
parser itself to declare what it supports. If we have future models that
re-use the same parsing format, but have different capabilities, we'll
want to parameterize them and give them different names to be specified
as a `PARSER`.

Misc changes:

- I worked on the renderer by diffing outputs from the reference
  implementation and ours. To make it easier to do this, I extended
  <https://github.com/ollama/ollama/pull/11875> to also support
  returning the prompt via the openai compat layer
2025-09-15 11:33:47 -07:00
likelovewant
9f3f80891d Merge branch 'ollama:main' into main 2025-09-13 10:45:51 +08:00
jmorganca
92b96d54ef Revert "runner: move harmony to runner (#12052)"
This reverts commit 1a558f98e2.
2025-09-12 20:40:14 -03:00
jmorganca
9d56e63dbf Revert "runner: simplify parser entrypoints in runner (#12233)"
This reverts commit 8d6fffaead.
2025-09-12 20:40:14 -03:00
tc-mb
053092185e Fix image cannot be seen with slice image on llama engine
Ollama's recent update to the llama.cpp engine caused all models requiring a slice schema to fail to see images. The value of numTokens isn't always the length of the sliced image embed, but rather the final length of the schema, which causes the image embed to not be correctly included during slice processing.
2025-09-12 16:25:12 -07:00
Daniel Hiltgen
44a6792873 tests: tighten up a few flaky tests (#12271)
Sometimes the context test results are pure emojis.
Thanksgiving has too much variability, so swap for a more straightforward prompt.
2025-09-12 13:59:34 -07:00
Daniel Hiltgen
e4ce68311a cuda: remove compression for better compatibility (#12259)
This retains compatibility with driver 531 and up at the cost of additional space.
2025-09-12 07:59:14 -07:00
Jesse Gross
26214125e8 ollamarunner: Suppress stack trace during memory allocation
Allocation failures can be a normal part of new memory estimates, so
we shouldn't print a stack trace in this case.
2025-09-11 14:30:31 -07:00
Daniel Hiltgen
61fb912ca4 CI: fix windows cuda build (#12246)
* ci: adjust cuda component list

v13 has a different breakdown of the components required to build ollama

* review comments
2025-09-11 12:25:26 -07:00
Jesse Gross
aba1575315 llm: Don't try to load split vision models in the Ollama engine
If a model with a split vision projector is loaded in the Ollama
engine, the projector will be ignored and the model will hallucinate
a response. Instead, fallback and try to load the model in the llama
engine.
2025-09-11 11:41:55 -07:00
Jesse Gross
eb10390de9 llm: Enable new memory estimates by default
New memory estimates (see #11090 for more information) are now
enabled automatically for all models running on the Ollama engine,
improving both stability and performance through more accurate sizing
and allocation. Models running on the llama engine will continue to
use the original style of memory estimation.
2025-09-11 11:21:53 -07:00
Michael Yang
feb18cd710 feat: add dimensions field to embed requests (#12242)
* feat: add field to truncate embeddings

* add openai embeddings for dimensions
2025-09-11 10:36:10 -07:00
fengyuchuanshen
8a7e2055d2 cmd: use slices.Contains to simplify code (#12249) 2025-09-11 09:57:31 -07:00
Jesse Gross
29ddfc2cab ggml: Disable flash attention for gemma2
Our new engine implementation of gemma2 doesn't support flash
attention, which means that it also doesn't support KV cache
quantization. Currently, it is possible to turn these two on,
which will result in a crash.
2025-09-10 16:40:45 -07:00
Jesse Gross
71cb86af3e llm: Remove unneeded warning with flash attention enabled
If flash attention is enabled without KV cache quanitization, we will
currently always get this warning:
level=WARN source=server.go:226 msg="kv cache type not supported by model" type=""
2025-09-10 16:40:45 -07:00
CarbonatedWater.org
5198956372 docs: add ollama-co2 to community integrations (#12230) 2025-09-10 16:37:10 -07:00
Daniel Hiltgen
17a023f34b Add v12 + v13 cuda support (#12000)
* Add support for upcoming NVIDIA Jetsons

The latest Jetsons with JetPack 7 are moving to an SBSA compatible model and
will not require building a JetPack specific variant.

* cuda: bring back dual versions

This adds back dual CUDA versions for our releases,
with v11 and v13 to cover a broad set of GPUs and
driver versions.

* win: break up native builds in build_windows.ps1

* v11 build working on windows and linux

* switch to cuda v12.8 not JIT

* Set CUDA compression to size

* enhance manual install linux docs
2025-09-10 12:05:18 -07:00
Parth Sareen
8d6fffaead runner: simplify parser entrypoints in runner (#12233) 2025-09-10 11:24:42 -07:00
Parth Sareen
20b53eaa72 tests: add tool calling integration test (#12232) 2025-09-09 14:01:11 -07:00
Daniel Hiltgen
6745182885 tests: reduce stress on CPU to 2 models (#12161)
* tests: reduce stress on CPU to 2 models

This should avoid flakes due to systems getting overloaded with 3 (or more) models running concurrently

* tests: allow slow systems to pass on timeout

If a slow system is still streaming a response, and the response
will pass validation, don't fail just because the system is slow.

* test: unload embedding models more quickly
2025-09-09 09:32:15 -07:00
Kashyap Tanuku
f810ec741c readme: add Clueless to community integrations (#12188) 2025-09-08 21:31:29 -07:00
Jesse Gross
e119783e66 llm: Clamp batch size to context size
The context must always be able to store the current batch, so
if the user requests a small context then we should also shrink
the batch to match. This also fixes the TestLongInputContext
test on the new engine. (The old engine already has this behavior.)
2025-09-08 20:40:11 -07:00
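The clamp itself is tiny; a sketch using Go's builtin min:

```
// effectiveBatch ensures the batch always fits in the requested context.
func effectiveBatch(numBatch, numCtx int) int {
	return min(numBatch, numCtx) // builtin min, Go 1.21+
}
```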
Parth Sareen
1a558f98e2 runner: move harmony to runner (#12052) 2025-09-08 15:07:59 -07:00
Gabe Goodhart
7b91c9ce51 Hybrid and recurrent memory estimates (#12186)
This PR updates the memory size estimate logic to better handle recurrent and hybrid-recurrent models, which are currently badly overestimated because the default logic assumes full attention for all layers.

The logic for the sizing of the recurrent layers comes from the llama.cpp implementation

        ggml_tensor * r = ggml_new_tensor_1d(ctx, type_r, hparams.n_embd_r()*mem_size);
        ggml_tensor * s = ggml_new_tensor_1d(ctx, type_s, hparams.n_embd_s()*mem_size);

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-09-08 14:53:22 -07:00
Daniel Hiltgen
950d33aa30 docs: show how to debug nvidia init failures (#12216)
This debug setting can help troubleshoot obscure initialization failures.
2025-09-08 11:39:00 -07:00
Michael Yang
9714e38dd0 fix: nil pointer dereference if cache is nil (#12215) 2025-09-08 09:53:59 -07:00
frob
4378ae4ffa parser: don't check the file type of safetensors to prevent false negatives. (#12176)
* Don't check the file type of safetensor to prevent false negatives.

---------

Co-authored-by: Patrick Devine <patrick@infrahq.com>
2025-09-05 16:27:40 -07:00
likelovewant
501cb38b8c Merge branch 'ollama:main' into main 2025-09-05 17:58:44 +08:00
Michael Yang
5994e8e8fd embedding gemma model (#12181)
* ollama: add embeddings
2025-09-04 09:09:07 -07:00
likelovewant
59e3a35203 Merge branch 'ollama:main' into main 2025-09-04 19:34:11 +08:00
Michael Yang
b3e6120736 more logutil.Trace (#12177) 2025-09-03 17:24:39 -07:00
Michael Yang
fb92b61754 logutil: add Trace and TraceContext helpers (#12110) 2025-09-02 13:09:12 -07:00
Jesse Gross
8149a3c86e llm: Avoid underflow in free memory logging
If a GPU's free memory is less than the reserved amount, we might get
an underflow. Since it is an unsigned uint64, we print this as a large
number rather than the more correct 0. This only affects logging, the
actual layout code already handles this correctly.

Bug #12138
2025-09-02 12:30:26 -07:00
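A sketch of the guarded subtraction the logging side needs:

```
// freeAfterReserve clamps at zero so an over-reserved GPU logs 0 bytes free
// instead of a huge wrapped-around uint64.
func freeAfterReserve(free, reserved uint64) uint64 {
	if reserved > free {
		return 0
	}
	return free - reserved
}
```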
Daniel Hiltgen
0cc90a8186 harden uncaught exception registration (#12120) 2025-09-02 09:43:55 -07:00
pxwanglu
e42300f25b ml: fix struct field name in comment (#12123) 2025-08-31 16:26:11 -07:00
alpha-nerd-nomyo
66e73809a1 readme: add NOMYO Router to community integrations (#12129) 2025-08-31 13:49:10 -07:00
likelovewant
c632fdbad8 Merge branch 'ollama:main' into main 2025-08-31 19:44:41 +08:00
Daniel Hiltgen
517807cdf2 perf: build graph for next batch async to keep GPU busy (#11863)
* perf: build graph for next batch in parallel to keep GPU busy

This refactors the main run loop of the ollama runner to perform the main GPU
intensive tasks (Compute+Floats) in a goroutine so we can prepare the next
batch in parallel to reduce the amount of time the GPU stalls waiting for the
next batch of work.

* tests: tune integration tests for ollama engine

This tunes the integration tests to focus more on models supported
by the new engine.
2025-08-29 14:20:28 -07:00
Daniel Hiltgen
ead4a9a1d0 Always filter devices (#12108)
* Always filter devices

Avoid crashing on unsupported AMD iGPUs

* Remove cuda device filtering

This interferes with mixed setups
2025-08-29 12:17:31 -07:00
ofrancon
4383a3ab7a readme: add Neuro SAN to community integrations (#12109) 2025-08-28 12:27:13 -07:00
Jesse Gross
9d97e6a9f1 ggml: Avoid allocating CUDA primary context on unused GPUs
The recent memory management changes caused all GPUs to be visible
to the runner, regardless of whether they are ultimately used. This
caused CUDA devices to allocate a primary context (~300 MB VRAM) on
each GPU, for each model. This is unnecessary, so we can both avoid
touching GPUs that we exclude in the early stage of allocation and
free the memory for any that we touch but don't use.

The issue will continue to exist for the old engine, since it touches
all devices during initialization.
2025-08-27 16:24:18 -07:00
Michael Yang
1081532430 fix keep alive (#12041) 2025-08-27 11:51:25 -07:00
Michael Yang
59412fbb43 convert(gptoss): mxfp4 to ggml layout to avoid jit conversion (#12018)
* convert: return bytes written

* ggml flavor mxfp4

* simplify jit conversion

* comment
2025-08-26 16:41:02 -07:00
Michael Yang
86834a2797 convert: fix tensor sorting (#12015)
there are two bugs here.

1. the check for a layer id is incorrect and should be >= 0 since layer
   0 is valid
2. if both tensors have a layer identifier, it will only compare the
   layer id, which will return 0 if the tensors are in the same layer.
   instead it should fall back to comparing the full tensor name
2025-08-26 13:57:46 -07:00
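A sketch of the corrected ordering, with a hypothetical layerID helper (the converter's real code differs):

```
import (
	"fmt"
	"sort"
)

type tensor struct{ Name string }

// layerID extracts N from names like "blk.N.attn_k.weight", or -1 if absent.
func layerID(name string) int {
	var id int
	if _, err := fmt.Sscanf(name, "blk.%d.", &id); err == nil {
		return id
	}
	return -1
}

func sortTensors(ts []tensor) {
	sort.Slice(ts, func(i, j int) bool {
		li, lj := layerID(ts[i].Name), layerID(ts[j].Name)
		if li >= 0 && lj >= 0 && li != lj { // >= 0: layer 0 is valid
			return li < lj
		}
		return ts[i].Name < ts[j].Name // same layer (or none): full name
	})
}
```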
Michael Yang
85ccf7354d gptoss: enable flash attention by default (#11996) 2025-08-26 13:34:45 -07:00
Michael Yang
30fb7e19f8 remove extra field attr (#11205) 2025-08-25 09:58:16 -07:00
Jeffrey Morgan
d3450dd52e api: implement stringer for ToolFunctionParameters (#12038) 2025-08-22 16:26:48 -07:00
Jeffrey Morgan
4bcb04ad88 tools: avoid matching braces that are part of tool content (#12039) 2025-08-22 15:22:14 -07:00
Devon Rifkin
e3d5708754 Merge pull request #12021 from ollama/drifkin/thinking-double-emit
thinking: fix double emit when no opening tag
2025-08-22 12:01:37 -07:00
Jeffrey Morgan
4be4dc8717 server: skip parsing initial <think> if provided in the prompt (#12024) 2025-08-22 12:00:16 -07:00
zoupingshi
109d4fc3b4 chore: remove redundant words in comment (#12028)
Signed-off-by: zoupingshi <hangfachang@outlook.com>
2025-08-22 11:00:27 -07:00
Devon Rifkin
2cb0a580f3 thinking: fix double emit when no opening tag
The thinking parser will automatically transition to being a
pass-through if non-whitespace is seen before an opening tag. However,
we weren't clearing the buffer after the first non-whitespace input, so
in practice the first token would be emitted twice.

Added a test that demonstrated this, and then fixed the bug.
2025-08-21 21:03:12 -07:00
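A sketch of the fixed transition (hypothetical shape; the real parser also watches for the opening tag):

```
import (
	"strings"
	"unicode"
)

type thinkingParser struct {
	buf         strings.Builder
	passthrough bool
}

// addContent buffers leading whitespace; on the first non-whitespace input
// with no opening tag, it emits the buffer once and clears it (the fix).
func (p *thinkingParser) addContent(chunk string) string {
	if p.passthrough {
		return chunk
	}
	p.buf.WriteString(chunk)
	s := p.buf.String()
	if strings.TrimLeftFunc(s, unicode.IsSpace) == "" {
		return "" // still only whitespace: keep buffering
	}
	p.passthrough = true
	p.buf.Reset() // without this, the first token was emitted twice
	return s
}
```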
Parth Sareen
7cce5aac76 harmony: move harmony parsing into a package (#12016) 2025-08-21 13:56:22 -07:00
likelovewant
131c496340 merge upstream and fix conflicts 2025-08-21 11:24:55 +08:00
Michael Yang
4ae4f47b16 gpt-oss: convert from hugging face format (#11907) 2025-08-20 15:39:18 -07:00
Jesse Gross
073fa31df5 llm: Don't always evict models in CPU-only mode
With old memory estimates, it's currently impossible to load more
than one model at a time when no GPUs are available. This is because
the check for whether we need to evict a model looks to see if all
layers of the new model can be loaded onto GPUs, which is never true
if there are no GPUs. Before the memory management changes, there
was a special code path for CPU-only systems.

This problem does not exist with new memory estimates.

Fixes #11974
2025-08-20 14:31:02 -07:00
Michael Yang
91fc3c48e3 openai: remove reasoning as an api.Options (#11993) 2025-08-20 12:21:42 -07:00
Devon Rifkin
6de62664d9 Merge pull request #11973 from ollama/drifkin/bpe
model: fix boundary in bpe
2025-08-19 22:58:33 -07:00
Devon Rifkin
463a6caad8 model: add bpe roundtripping tests 2025-08-19 22:05:48 -07:00
Devon Rifkin
fc5fb09f51 model: fix boundary in bpe
0x007e is a tilde and was getting adjusted (+0x00a2) to 0x0120 in the
encode, but then in the decode it was getting adjusted down (-0x0100) to
0x0020. The boundary for the +0x00a2 case has been adjusted to fix this

Fixes: #11966
2025-08-19 18:34:49 -07:00
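A sketch reconstructing the encode side from the description above (ranges simplified; the soft-hyphen case is omitted):

```
// encodeByte maps a raw byte into bpe's printable rune space. Tilde (0x7e) is
// printable and must map to itself; with the old `b >= 0x7e` boundary it was
// shifted to 0x0120, which the decoder then treated as part of the
// 0x0100..0x0120 (-0x0100) range and turned into 0x0020.
func encodeByte(b byte) rune {
	switch {
	case b <= 0x20: // control characters and space
		return rune(b) + 0x0100
	case b >= 0x7f && b <= 0xa0: // fixed boundary: was b >= 0x7e
		return rune(b) + 0x00a2
	default: // printable bytes map to themselves
		return rune(b)
	}
}
```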
Jesse Gross
05ccb17c6e kvcache: Use Cast instead of Copy for flash attention masks
Flash attention kernels require the mask of the KV cache be a F16
rather than an F32. We can use the GGML operation ggml_cast to do
this rather than doing it ourselves, which allows reuse of a
preallocated buffer in the graph rather than allocating a new one
for each batch. This improves token generation performance with
flash attention by 10-30% (with gpt-oss). This also makes performance
with flash attention better than without it, as expected.
2025-08-19 12:36:28 -07:00
Michael Yang
f804e8a460 disable output_all (#11959) 2025-08-18 17:45:40 -07:00
Kostis
9cfbffafc5 readme: add any-agent to community integrations (#11950) 2025-08-18 14:21:36 -07:00
Ruslan Suleymanov
470d580205 readme: add Andes to community integrations (#11952) 2025-08-18 14:20:28 -07:00
Devon Rifkin
b517bb1c19 Merge pull request #11910 from ollama/drifkin/harmony-fn-names
harmony: convert fn names to be valid ts identifiers
2025-08-18 14:17:47 -07:00
Jesse Gross
e3ade453a8 llm: Check for nil memory data before printing
We dump out our best memory estimate after we complete processing
for any reason, including errors. This is helpful for finding what
what stopped us in error conditions but in some cases we might not
have gotten even the first result yet.

Fixes #11957
2025-08-18 14:05:22 -07:00
Devon Rifkin
048bd4472a harmony: convert fn names to be valid ts identifiers
In <https://github.com/ollama/ollama/issues/11704#issuecomment-3177380197>
I noticed that hyphens in function names could possibly cause the model
to become confused. Later in that issue I found other explanations, but
at a minimum tool names with spaces in them are confusing to the model
because of the prompt format.

In this change I create a mapper that converts arbitrary tool names into
valid typescript identifiers. It's a little overly strict in that it
doesn't allow all unicode characters that might be valid in ts
identifiers, but it's still very permissive. Since mappings aren't
reversible, we must temporarily store this mapping in order to unmap it
if the model comes back with a call. We also handle the case where
multiple names collide into the same mapping and append a counter to
the end to make them unique.
2025-08-18 14:05:16 -07:00
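A sketch of such a mapper (illustrative; the real one is more permissive about unicode identifier characters):

```
import (
	"fmt"
	"regexp"
)

var invalidChars = regexp.MustCompile(`[^A-Za-z0-9_$]`)

type nameMapper struct {
	fromSafe map[string]string // sanitized -> original, for unmapping calls
}

func newNameMapper() *nameMapper {
	return &nameMapper{fromSafe: map[string]string{}}
}

// mapName sanitizes an arbitrary tool name into an identifier, appending a
// counter when two different names collide into the same sanitized form.
func (m *nameMapper) mapName(name string) string {
	base := invalidChars.ReplaceAllString(name, "_")
	safe := base
	for i := 2; ; i++ {
		prev, taken := m.fromSafe[safe]
		if !taken || prev == name {
			break
		}
		safe = fmt.Sprintf("%s_%d", base, i)
	}
	m.fromSafe[safe] = name
	return safe
}
```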
Devon Rifkin
ec8bf5e6c5 Merge pull request #11875 from ollama/drifkin/print-template
server: add debug option for printing out prompt instead of calling model
2025-08-18 14:03:14 -07:00
Kostis
709bbb0b6d readme: add any-llm to community integrations (#11956) 2025-08-18 13:13:26 -07:00
Jody Doolittle
abeec240f9 readme: add Serene Pub to community integrations (#11946) 2025-08-18 13:12:41 -07:00
Michael Yang
df335aac09 gpt-oss: disable quantized kv cache (#11929) 2025-08-15 15:01:05 -07:00
Patrick Devine
026bc29237 cli: show the default context length env setting in online help (#11928) 2025-08-15 14:59:52 -07:00
Thomas Pelster
883d031268 docs: added missing comma in 'Ollama's Javascript library'' (#11915) 2025-08-15 14:45:01 -07:00
Daniel Hiltgen
5271ff8559 handle cgo flags in docker build (#11909)
Docker build requires build-args to be defined.  This ensures the release.yaml settings will be used.
2025-08-15 14:39:35 -07:00
Daniel Hiltgen
d6f7233a1c test: improve scheduler/concurrency stress tests (#11906)
* test: improve scheduler/concurrency stress tests

The scheduler test used to use approximate memory figures and would often
overshoot or undershoot a system's capacity, leading to flaky test results.
This should improve the reliability of this scenario by leveraging
ps output to determine exactly how many models it takes to
trigger thrashing.

The concurrency test is also refined to target num_parallel + 1 and handle
timeouts better.

With these refinements, TestMultiModelConcurrency was redundant

* test: add parallel generate with history

TestGenerateWithHistory will help verify caching and context
are properly handled while making requests

* test: focus embed tests on embedding models

remove non-embedding models from the embedding tests
2025-08-15 14:37:54 -07:00
Devon Rifkin
8de1da4767 server: add debug option for printing out prompt instead of calling model 2025-08-15 13:52:50 -07:00
Daniel Hiltgen
d925b5350c Revert "cuda: leverage JIT for smaller footprint (#11635)" (#11913)
This reverts commit dc5a645434.
2025-08-14 21:19:23 -07:00
Daniel Hiltgen
6eaf194b85 fix arm linux build when HWCAP2_SVE2 undefined (#11908) 2025-08-14 16:38:53 -07:00
Jesse Gross
d5a0d8d904 llm: New memory management
This changes the memory allocation strategy from upfront estimation to
tracking actual allocations done by the engine and reacting to that. The
goal is avoid issues caused by both under-estimation (crashing) and
over-estimation (low performance due to under-utilized GPUs).

It is currently opt-in and can be enabled for models running on the
Ollama engine by setting OLLAMA_NEW_ESTIMATES=1. Behavior in other
cases is unchanged and will continue to use the existing estimates.
2025-08-14 15:24:01 -07:00
Michael Yang
ef7d26ba2c convert: skip reading into memory when possible (#11507)
if there's no transformation to the tensor and the input and output
types match, copy directly into the writer. also read from a bufio with
a 32K buffer
2025-08-14 15:03:57 -07:00
Michael Yang
1a19df1f3a update vendored llama.cpp and ggml (#11823)
* TEMPORARY: Update the llama.cpp upstream to my fork's Granite Four branch

This will be redone once my branch is merged upstream in llama.cpp

* feat: Update all patches

There are a number that are no longer needed at all:

- 0003-embeddings: Embeddings entirely overhauled on master
- 0008-ensure-KV-cache-is-fully-defragmented: KV caching entirely
    overhauled on master
- 0019-metal-add-mean-kernel-14267: Merged upstream
- 0020-CUDA-add-mean-operation-14313: Merged upstream

* feat: Sync llama.cpp and ggml

* fix: Update rsync-filter for all moved/new/removed files

* fix: Add files missing from sync

* fix: Update ggml rsync-filter for new ggml-cpu/arch subdirs

* fix: Add ggml files missing from sync

* fix: Narrow llama.cpp rsync-filter to not include mtmd main tool cpp files

* fix: Remove mtmd main cpp files

* fix: Add missing include in sampling_ext.cpp

* fix: Update llama.go to use mtmd instead of clip/llava

* fix: Add patch for mtmd_input_text

* chore: Ignore *.patched in the patch directory

* fix: Fix support for arch-specific ggml-cpu source files with new arrangement

In https://github.com/ggml-org/llama.cpp/pull/13892, all arch-specific
implementations were split out into a nested tree structure under
ggml-cpu/arch. This conflicts with standard CGO layout where all
arch-specific source files are expected to live in the same directory as
the parent go module and use suffixes based on GOOS and GOARCH. As such,
there were really two options for getting this to work:

1. Add a patch on top of the GGML sync to rearrange the files to match the
GO layout convention
2. Use CGO directives to conditionally include the nested source files in
the compilation units

This commit does (2) in order to minimize the set of changes needed on top
of the upstream file layout. To get this to work, there are two key things
needed:

1. In cpu.go, #cgo directives are added to explicitly set __${GOARCH}__ in
the preprocessor directives
2. In arch-impls.c|cpp, use an #ifdef | #elif defined | #endif chain to
explicitly include the .c|.cpp files for the given architecture from the
nested directory

* fix: Use mtmd_helper to correctly load the bitmap for the image

* fix: Apply patch for mtmd_text_input

* fix: Add missing stb to llama.cpp rsync-filter

* fix: Add sync'ed stb vendored header

* fix: Use c++17 and include vendor for go wrapper modules

* fix: Update patch 0015 for upstream implementation of uuid

* feat: Bump to the latest tip of the branch

* fix: Update patches for bump

* feat: Bump back to the central repo and point at the latest master

This includes granite 4 and a number of other model architectures!

* fix: Revert changes to ggml export GPU UUID patch

* fix: Add patch for GGML_VERSION and GGML_COMMIT constants

* feat: Sync all patched code

* build: Include cmake/common.cmake in ggml sync

* build: Add top-level include for GNUInstallDirs in CMakeLists.txt

This is used to populate CMAKE_INSTALL_BINDIR

* fix: Add a patch to avoid power throttling API on non-msvc windows builds

* fix: Sync patch changes for ggml-cpu.c

* feat: Bump llama.cpp to 4a4f42

This picks up support for Kimi K2 and PLaMO-2

* feat: Sync llama.cpp

* fix: Handle multi-chunk image encodings from mtmd

* fix: Re-number patches after merge with `main`

* feat: Bump to 41e78c in the makefile

* fix: Fix Solar and argsort/copy patches after bump

* fix: Remove Gemma3n CUDA Graphs patch

It was implemented upstream:
https://github.com/ggml-org/llama.cpp/pull/14741

* feat: Sync llama.cpp / ggml after latest bump

* build: Remove unnecessary CFLAGS definitions in cpu.go

* fix: Remove unnecessary additions in the rsync-filter

* fix: Remove unused vendored code for chat template parsing

* Revert "fix: Remove Gemma3n CUDA Graphs patch"

This reverts commit d724caced3ce21f08924d4b7801f94ce6638f6ea.

* fix: Update 0020 CUDA Graphs for gemma3n to keep both llama.cpp and ollama fixes

https://github.com/ollama/ollama/pull/11195#issuecomment-3137312394

* fix: Sync ggml-cuda.cu after keeping both style cuda graph fixes for gemma3n

* unwind mxfp4 patch

Prepare to bump ggml with their impl for mxfp4

* bump

* fix windows build error

* Convert tensors at load time

Repack the mxfp4 tensors as ggml's kernels expect them to be.

* convert mlp bf16 to f32

* buffer the conversion better

* reshape earlier

* openai swiglu

* add ids

* split qkv, gate_up

* fix nested alt tags

* fast attention

* remove debug messages

* fix lint

* remove redundant test

* remap values only if source/target are different

* add back i32->i32 copy

* refactor cpu quants

* clean up vendor

* update patch instructions

* clean up patches

* remove webgpu

* update mem

* also handle gpt-oss

* revert convert changes

---------

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Gabe Goodhart <ghart@us.ibm.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
2025-08-14 14:42:58 -07:00
Daniel Hiltgen
7ccfd97a93 doc: clarify both rocm and main bundle necessary (#11900)
Some users expect the rocm bundles to be self-sufficient, but they are designed to be additive.
2025-08-14 12:54:55 -07:00
Daniel Hiltgen
c385ca8672 test: add valid responses (#11902)
some of the new models need a few more valid responses to pass
2025-08-14 11:07:13 -07:00
Daniel Hiltgen
837379a94c discovery: fix cudart driver version (#11614)
We prefer the nvcuda library, which reports driver versions. When we
dropped cuda v11, we added a safety check for too-old drivers.  What
we missed was that the cudart fallback discovery logic didn't have the
driver version wired up. This fixes cudart discovery to expose the driver
version as well, so we no longer reject all GPUs if nvcuda didn't work.
2025-08-13 15:43:33 -07:00
Daniel Hiltgen
a24f90604f int: adjust a few models for integration tests (#11872) 2025-08-13 15:42:36 -07:00
Daniel Hiltgen
dc5a645434 cuda: leverage JIT for smaller footprint (#11635)
Prior to this change our official binaries contained both JIT PTX code and
the cubin binary code for our chosen compute capabilities. This change
switches to compiling only the PTX code and relying on JIT at runtime to
generate the cubin specific to the user's GPU. The cubins are cached
on the user's system, so users should only see a small lag on the very
first model load for a given Ollama release. This also adds the first
generation of Blackwell GPUs so they aren't reliant on the Hopper PTX.

This change reduces the ggml-cuda.dll from 1.2G to 460M
2025-08-13 15:42:16 -07:00
youzichuan
bb71654ebe chore: fix some inconsistent function name in comment
Signed-off-by: youzichuan <youzichuan6@outlook.com>
2025-08-13 09:50:27 -07:00
likelovewant
d4af9f04f9 Merge branch 'ollama:main' into main 2025-08-13 12:36:50 +08:00
Jesse Gross
a343ae53a4 ggml: Use ordinal IDs for AMD GPUs on Linux when UUID is unavailable
Some AMD GPUs do not provide UUIDs and report only "XX". In these
cases, we should use the ordinal ID as an alternate identifier.
This is the same as we always need to do on Windows for AMD.

In addition, this prints out the ID for each GPU when enumerating
them for easier debugging in the future.
2025-08-12 16:56:14 -07:00
Michael Yang
d0cf6c8281 fix(openai): handle reasoning_effort (#11868) 2025-08-12 11:02:01 -07:00
Jesse Gross
8f4ec9ab28 discover: CPU supports flash attention
We already run flash attention on CPUs in cases where we have
partial offloading, but we were disabling it when running purely on
CPU, which is unnecessary.
2025-08-11 15:00:34 -07:00
Devon Rifkin
dbfd7bd027 Merge pull request #11861 from ollama/drifkin/fix-parsing-error
server: fix error when parsing bad harmony tool calls
2025-08-11 14:59:57 -07:00
Devon Rifkin
ee04dbba51 server: fix error when parsing bad harmony tool calls
Thanks @moll for reporting!

Fixes: #11781
2025-08-11 14:09:13 -07:00
Daniel Andersen
ea7657b54a sched: Add support for grouping GPUs (#10678)
This patch modifies Ollama to allow grouping GPUs to memory-fit the requested model, instead of the former algorithm of using one GPU or distributing over all available GPUs.

Benefits:
 - Lower amount of (PCIe-)bus communication between GPUs, especially when they are not very high speed
 - Allows unallocated GPUs to drop into power-saving mode
 - Significantly reduced VRAM allocation when using more than 2 GPUs in a system
 - Due to the reduced memory allocation, you can run more models simultaneously
2025-08-11 13:59:38 -07:00
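
A minimal Go sketch of the grouping idea, with hypothetical types rather than Ollama's actual scheduler structures: prefer the smallest set of GPUs whose combined free VRAM fits the model, leaving the rest idle.

package main

import (
	"fmt"
	"sort"
)

type gpu struct {
	id   int
	free uint64 // free VRAM in bytes
}

// pickGroup returns the smallest largest-first prefix of gpus that can
// hold need bytes; the remaining GPUs are left unallocated.
func pickGroup(gpus []gpu, need uint64) []gpu {
	sort.Slice(gpus, func(i, j int) bool { return gpus[i].free > gpus[j].free })
	var total uint64
	for i, g := range gpus {
		total += g.free
		if total >= need {
			return gpus[:i+1]
		}
	}
	return nil // doesn't fit even across all GPUs
}

func main() {
	gpus := []gpu{{0, 24 << 30}, {1, 24 << 30}, {2, 12 << 30}}
	fmt.Println(pickGroup(gpus, 30<<30)) // two GPUs suffice; the third stays idle
}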
Michael Vorburger
2c776f0780 CONTRIBUTING: Explicitly note docs:... as a good example (#11755) 2025-08-09 18:12:30 -07:00
Jesse Gross
79f6376f5b ggml: No-alloc mode
Callers can set a backend buffer type to be no-alloc, meaning that
it does not allocate memory for tensors or operations. This can
be used for calculating memory requirements. Tensors and graphs
must be recreated with no-alloc set to false before loading data.

Defaults to false for newly created backend buffer types.
2025-08-08 14:57:13 -07:00
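
A hedged sketch of the two-pass flow this enables; the names below are invented for illustration and are not the real ml/backend API:

package main

import "fmt"

type bufferType struct{ noAlloc bool }

// buildGraph pretends to construct the compute graph: with noAlloc set it
// only tallies how much memory would be needed.
func buildGraph(bt *bufferType) (bytesNeeded uint64) {
	if bt.noAlloc {
		return 512 << 20 // pretend measurement
	}
	return 0 // real allocations would happen here
}

func main() {
	probe := &bufferType{noAlloc: true}
	need := buildGraph(probe) // measure without committing memory
	fmt.Printf("graph needs %d MiB\n", need>>20)

	live := &bufferType{noAlloc: false}
	buildGraph(live) // recreate tensors for real before loading data
}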
Jesse Gross
756c78cfc7 ggml: Support closing backends
In order to iteratively find the best memory allocation, we need to
be able to free backend memory so we can try again.
2025-08-08 14:57:13 -07:00
Jesse Gross
d7f4f788d1 ggml: Use GGML's typedef'ed pointer types
For many backend data structures, GGML defines a typedef of a pointer
type and returns these from functions. In most cases, CGo understands
that these are interchangeable, but some parts of Go (such as generics)
think they are two different types. We should prefer the form that
GGML uses.
2025-08-08 14:57:13 -07:00
Daniel Hiltgen
114c3f2265 tests: add integration coverage for oss-gpt (#11696)
Also wires up support to override the default "smol" model
2025-08-07 15:06:57 -07:00
Jesse Gross
f2e9c9aff5 server: Reduce gpt-oss context length for small VRAM GPUs
gpt-oss works best with a context length of at least 8k. However,
for GPUs with limited amount of VRAM, there is a significant
performance hit to this increased context. In these cases, we
switch to the Ollama default of 4k.
2025-08-07 14:23:55 -07:00
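
The fallback amounts to a simple clamp; a sketch in Go, with an assumed (illustrative, not actual) VRAM cutoff:

package main

import "fmt"

func contextFor(freeVRAM uint64) int {
	const lowVRAM = 8 << 30 // assumed cutoff for "limited VRAM"
	if freeVRAM < lowVRAM {
		return 4096 // Ollama default
	}
	return 8192 // preferred for gpt-oss
}

func main() {
	fmt.Println(contextFor(6 << 30))  // 4096 on a small-VRAM GPU
	fmt.Println(contextFor(24 << 30)) // 8192 otherwise
}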
Devon Rifkin
aa9d889522 Merge pull request #11765 from ollama/drifkin/thinking-without-content
openai: always provide reasoning
2025-08-06 19:02:23 -07:00
Devon Rifkin
735c41f9ca openai: always provide reasoning
We were failing to pass along thinking when content was nil (as opposed
to an empty string)

Also added a test for content not being passed, which was the real cause
of <https://github.com/ollama/ollama/issues/11704>, since with the way
`Content` is typed, not passing it and passing an empty string are distinct
2025-08-06 18:54:20 -07:00
Devon Rifkin
223a619468 Merge pull request #11761 from ollama/drifkin/openai-tool-names
openai: when converting role=tool messages, propagate the tool name
2025-08-06 17:53:25 -07:00
Devon Rifkin
759dd78dd6 openai: when converting role=tool messages, propagate the tool name
Added support for converting both `name` and `tool_call_id` fields,
which different clients might provide. `name` is a legacy field from the
OpenAI completions API. For `tool_call_id` we inspect previous messages
and look for a matching tool call ID and grab its name

Issue: https://github.com/ollama/ollama/issues/11704
2025-08-06 17:00:24 -07:00
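
A rough Go sketch of that lookup, with simplified message shapes rather than the actual openai package types:

package main

import "fmt"

type toolCall struct{ ID, Name string }

type message struct {
	Role      string
	ToolCalls []toolCall
}

// resolveToolName scans earlier messages for the tool call matching id
// and returns its function name, or "" if none matches.
func resolveToolName(history []message, id string) string {
	for _, m := range history {
		for _, tc := range m.ToolCalls {
			if tc.ID == id {
				return tc.Name
			}
		}
	}
	return ""
}

func main() {
	history := []message{
		{Role: "assistant", ToolCalls: []toolCall{{ID: "call_1", Name: "get_weather"}}},
	}
	fmt.Println(resolveToolName(history, "call_1")) // get_weather
}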
Patrick Devine
44bc36d063 docs: update the faq (#11760) 2025-08-06 16:55:57 -07:00
Devon Rifkin
8f14e1f5f6 Merge pull request #11759 from ollama/drifkin/oai-tool-calling
openai: allow for content _and_ tool calls in the same message
2025-08-06 16:11:31 -07:00
Devon Rifkin
203c137810 openai: allow for content _and_ tool calls in the same message
Previously our OpenAI chat completions compat layer assumed that tool
calls and content would never be provided together, but this is not a
correct assumption. Content is only optional when tool calls are
present, but tool calls and content can be provided together

Fixes: https://github.com/ollama/ollama/issues/11704
2025-08-06 15:50:30 -07:00
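
In simplified form (types invented for illustration), the converter now accepts both fields on one message instead of treating them as mutually exclusive:

package main

import "fmt"

type toolCall struct{ Name, Args string }

type chatMessage struct {
	Content   string
	ToolCalls []toolCall
}

func main() {
	// Both fields populated on the same assistant message is valid.
	m := chatMessage{
		Content:   "Let me check the weather first.",
		ToolCalls: []toolCall{{Name: "get_weather", Args: `{"city":"Paris"}`}},
	}
	fmt.Printf("content=%q calls=%d\n", m.Content, len(m.ToolCalls))
}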
Daniel Hiltgen
fa8be9e35c clean up debugging (#11756) 2025-08-06 13:31:22 -07:00
Gao feng
8a75e9ee15 Update downloading to pulling in api.md (#11170)
Update api.md to make it consistent with the code.
https://github.com/ollama/ollama/blob/main/server/download.go#L447
2025-08-06 11:33:09 -07:00
likelovewant
9231379bce remove gfx900 2025-08-06 09:46:23 +08:00
likelovewant
c7ba6128b4 remove gfx900 2025-08-06 09:43:21 +08:00
likelovewant
8970233a2b add 2025-08-06 09:36:32 +08:00
likelovewant
cde948f976 fix gfx1200 2025-08-06 09:29:22 +08:00
likelovewant
7c8aba0d83 Merge branch 'ollama:main' into main 2025-08-06 09:25:22 +08:00
Parth Sareen
4742e12c23 docs: update turbo model name (#11707) 2025-08-05 17:29:08 -07:00
Devon Rifkin
2d06977ade Merge pull request #11705 from ollama/drifkin/fn-schema
tools: support anyOf types
2025-08-05 17:02:42 -07:00
Devon Rifkin
30f8a68c4c tools: support anyOf types
afaik gpt-oss is the first model that meaningfully transforms tool
function definitions in its template. We found that relatively common
definitions that include `anyOf` were not working because the template
was assuming that types were always defined via a `type` field.

anyOf allows for fully recursive types, so I exposed a
`toTypeScriptType()` function to handle this recursive logic in go and
keep the templates cleaner. The gpt-oss templates will need to be
updated to use this.

We should keep building out our function definition support to more
fully support the parts of json schema that make sense for this use
case, but in the meantime this will unblock some users (e.g., zed's
ollama integration w/ gpt-oss). Probably the most urgent is proper array
support
2025-08-05 16:46:24 -07:00
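
A hedged sketch of the recursive mapping, with a simplified schema shape; this stand-in does not reflect the real function's signature:

package main

import (
	"fmt"
	"strings"
)

type schema struct {
	Type  string
	AnyOf []schema
}

// toTypeScriptType renders a JSON-schema-like node as a TypeScript type,
// turning anyOf branches into a union and recursing arbitrarily deep.
func toTypeScriptType(s schema) string {
	if len(s.AnyOf) > 0 {
		parts := make([]string, 0, len(s.AnyOf))
		for _, sub := range s.AnyOf {
			parts = append(parts, toTypeScriptType(sub))
		}
		return strings.Join(parts, " | ")
	}
	switch s.Type {
	case "integer":
		return "number"
	case "":
		return "any"
	default:
		return s.Type // string, number, boolean, ...
	}
}

func main() {
	s := schema{AnyOf: []schema{{Type: "string"}, {Type: "integer"}}}
	fmt.Println(toTypeScriptType(s)) // string | number
}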
Daniel Hiltgen
e378e33421 win: static link msvc libs (#11612)
This should help reduce the runtime dependencies on windows.
2025-08-05 16:10:42 -07:00
Michael Yang
fcec04bf42 gptoss: fix memory calc (#11700) 2025-08-05 15:56:12 -07:00
Jeffrey Morgan
ee92ca3e1d docs: add docs for Ollama Turbo (#11687) 2025-08-05 13:09:10 -07:00
Jesse Gross
8253ad4d2b ggml: Prevent kv cache quantization on gpt-oss
KV cache quantization has a dependency on the flash attention kernel.
We currently cannot use flash attention with gpt-oss as it requires
additional operations.

The model definition does not call flash attention, so it works
regardless of the setting, but the cache will pick up the
quantization type. This updates the flash attention setting earlier
in the loading flow so that all downstream settings are also set correctly.

Fixes: #11671
2025-08-05 13:04:03 -07:00
Michael Yang
fa7776fd24 gpt-oss (#11672)
* bf16

* tests

* gpt-oss

* enable gptoss for engine

* rough estimate

* convert to mxfp4

* handle safetensors U8

* clamp glu/linear

* update tokenizer

* MXFP4 support

This implements the Open Compute Microscaling (MX) FP4 format
as a tensor type with backend implementations focusing
on mulmat and mulmatid on CPU, CUDA, and Metal.

* Unit tests for MXFP4 support

This exercises various operations and shapes on both CPU and GPU (if detected
on the system)

* cuda graph

* unit test adjustments

* cuda: optimize memory access

Read 4 bytes at a time (8 elements) when performing mul_mat_vec_mxfp4

* mac: fix crash on old macos versions

cblas_sgemm is only supported on v13.3 and up; however, bf16 is
only supported on v14+, so we were falling back to ggml-blas and
crashing on bf16 tensors. Checking for the function being null
seems to be the simplest way to conditionally avoid registering the
backend.

* server: Minimum context length for gptoss

This model requires a minimum context length of 8192 to function
effectively. Users can set higher values through all normal mechanisms
but lower values will be silently reset.

* ggml: Multiply by numParallel for gptoss sliding window

When computing the graph size estimate, the context size is already
multiplied by numParallel so estimates reflect that. However, since
sliding window models use a smaller, fixed context size, they need
to manually take numParallel into account.
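
The arithmetic is a one-liner; a sketch with hypothetical names:

package main

import "fmt"

// swaCacheTokens scales the fixed sliding window by the number of
// parallel sequences, which each need their own window.
func swaCacheTokens(window, numParallel int) int {
	return window * numParallel
}

func main() {
	fmt.Println(swaCacheTokens(4096, 4)) // 16384 tokens of KV cache
}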

* gpt-oss integration

includes harmony parser and thinking levels, etc.

* fix sync

* fix tests

* fix lint

---------

Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Devon Rifkin <drifkin@drifkin.net>
2025-08-05 12:21:16 -07:00
Jesse Gross
0d38b66502 kvcache: Log contents of cache when unable to find a slot
There is a bug when using sliding window attention where we run
out of KV cache slots. This is likely due to not correctly removing
all of the entries as they slide out of range. This adds additional
logging when this occurs to track down the source.

Bug #10127
2025-08-04 16:59:29 -07:00
likelovewant
e5e077b4b7 Merge branch 'ollama:main' into main 2025-08-03 08:22:07 +08:00
Jesse Gross
4183bb0574 kvcache: Enable SWA to retain additional entries
Models that use sliding window attention can only resume a sequence
from the cache if it falls within the saved windows. This works well
if the next message picks up where the old one left off. However, it
generally prevents a partial prefix match unless the entire conversation
falls within the sliding window.

This can be a problem with reasoning models where the traces are
supposed to be removed from future messages, forcing the entire
history to be re-evaluated.

This change allows models to specify that a larger amount of the
history be retained in memory, to allow more partial resumption.
It still respects the window that the model was trained on for
token generation.
2025-07-31 14:48:01 -07:00
Sajal Kulshreshtha
ff89ba90bc fixing broken AMD driver link (#11579) 2025-07-30 12:02:54 -07:00
Daniel Hiltgen
6dcc5dfb9c Revert "CI: switch back to x86 macos builder" (#11588)
This reverts commit 9d071e6089319b37acf62bb739e3430dcb2ac0c3.
2025-07-30 08:56:01 -07:00
Daniel Hiltgen
25911a6e6b mac: disable bf16 on unsupported OS versions (#11585)
Support for bf16 was added in macOS v14+, and attempting to enable it
on older versions causes runtime failures.
2025-07-30 08:50:54 -07:00
Daniel Hiltgen
8afa6e83f2 CI: switch back to x86 macos builder (#11572) 2025-07-29 16:41:25 -07:00
Oliver Simons
ea85e27bbd Increase performance for Gemma3n models on NVGPUs by enabling CUDA Graph execution (#11525)
* Enable CUDA Graphs for gemma3n.

Similar to
https://github.com/ggml-org/llama.cpp/pull/14741,
though ollama has a slightly different model graph
than llama.cpp which requires different workaround
checks.

* Remove residual check by reshaping differently in gemma3n model

This should make the heuristics more robust
2025-07-29 12:37:06 -07:00
Jesse Gross
c116a7523d kvcache: Don't shift empty batches
When we context shift, we delete half the context and apply RoPE
with an offset to the other half. We used to RoPE across the entire
context in a single pass with a zero offset for the deleted
section. With the change to shifting in batches, we can skip any
batches where all of the offsets would be zero. This typically
reduces the number of operations by half.
2025-07-29 12:32:22 -07:00
Yoshi
3515cc377c docs: fix typos and remove trailing whitespaces (#11554) 2025-07-28 11:19:13 -07:00
Mayan EDMS
bbf66c0b96 readme: add Mayan EDMS to community integrations (#11543) 2025-07-27 15:02:52 -07:00
Jesse Gross
764be7480f kvcache: Group shift operations into batches
Currently, when we need to do a shift on the cache, it is one
RoPE operation on the entire size of the cache (per layer). In
some cases, this can create a compute graph that is larger than
the forward pass since the forward pass is working in batches.
Since we don't consider shifting in our memory estimates, it's
possible for this to cause a crash if we run out of memory.

By limiting the size of the RoPE calls to batch size chunks, we
ensure that the shift will never exceed the size of the forward
pass, since the forward pass will also contain a RoPE of the same
size. This does not have a significant impact on performance since
RoPE is a math operation that is mostly proportional to the size
of its inputs.

In theory defrag could have the same issue since it also creates a
compute graph outside of the forward pass; however, since it only
performs copies, it does not require any working space.
2025-07-25 16:50:27 -07:00
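
A minimal sketch of the chunked shift, with a hypothetical shift callback standing in for the per-layer RoPE call:

package main

import "fmt"

// shiftInBatches applies the cache shift in batch-size chunks so the
// shift graph never exceeds the forward pass's working size.
func shiftInBatches(cacheLen, batchSize int, shift func(start, n int)) {
	for start := 0; start < cacheLen; start += batchSize {
		n := batchSize
		if start+n > cacheLen {
			n = cacheLen - start
		}
		shift(start, n) // one RoPE call per chunk instead of one giant call
	}
}

func main() {
	shiftInBatches(10, 4, func(start, n int) {
		fmt.Printf("RoPE over [%d, %d)\n", start, start+n)
	})
}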
Ruyut
b72e5adb14 CONTRIBUTING: fix typo in commit message example (#11528) 2025-07-25 14:24:06 -07:00
Patrick Devine
80b538e312 cli: catch upstream errors gracefully (#11512) 2025-07-23 22:16:55 -07:00
Jeffrey Morgan
4f8a0166cc tools: loosen tool argument parsing (#11509) 2025-07-23 21:21:29 -07:00
minxinyi
1e6eab5c33 server: use slices.Equal to simplify code (#11502) 2025-07-23 14:25:39 -07:00
Michael Yang
6c733bf0a6 s#x/exp/maps#maps# (#11506) 2025-07-23 13:23:32 -07:00
Patrick Devine
3bac5cba60 Fix GetModelInfo (#11496)
---------

Co-authored-by: Richard Lyons <frob@cloudstaff.com>
2025-07-22 13:40:47 -07:00
ycomiti
4151ef8cf7 Update linux.md (#11462) 2025-07-22 11:17:31 -07:00
likelovewant
e4ff6e6c0f Merge branch 'ollama:main' into main 2025-07-21 18:52:34 +08:00
Stefan Wärting
82da19c634 readme: add GMAI - Gradle Managed to community integrations (#11461) 2025-07-20 14:55:47 -07:00
Jeffrey Morgan
bdd9d22dfd tools: fix parsing issue when a tool name is a substring of another (#11456)
Co-authored-by: frob <rick+github@frob.com.au>
2025-07-20 14:55:14 -07:00
zmldndx
5fc38d042f readme: update argo description to support deep research (#11455) 2025-07-19 13:29:38 -07:00
likelovewant
475a11d08e Merge branch 'ollama:main' into main 2025-07-18 17:41:30 +08:00
Daniel Hiltgen
191d94289d ci: switch mac builder to arm64 (#11379)
macos-13 runners are x86, while macos-13-xlarge runners are arm64
2025-07-17 07:33:44 -07:00
frob
802ad16ce4 docs: add the no-Modelfile function of ollama create (#9077) 2025-07-16 22:16:10 -07:00
frob
5e67f4f90e openai: allow openai endpoint to accept webp images (#11412)
Co-authored-by: Richard Lyons <frob@cloudstaff.com>
2025-07-16 21:31:49 -07:00
Haiyue Wang
e840ccb523 readme: update the llama.cpp github link (#11427) 2025-07-16 21:20:28 -07:00
Michael Yang
b4fe3adc0a compile bf16 support into ggml-metal (#11430) 2025-07-16 17:32:57 -07:00
Parth Sareen
d73f8aa8c3 cmd: add default assistant role to message construction (#11431) 2025-07-16 11:18:16 -07:00
Bruce MacDonald
92c2e8a56c api: fix unreachable status err (#11423)
StatusError was unreachable: the client always checks for error messages in the response body first, and the server always includes error messages with HTTP error status codes.
2025-07-16 11:03:28 -07:00
Marcelo Fornet
2e3fd86d48 docs: fix typo in macos.md (#11425) 2025-07-16 10:50:46 -07:00
先知
4261a3b0b2 docs: update modelfile.md to reflect current default num_ctx (#11189)
As of commit 44b466eeb2, the default context length has been increased to 4096.
2025-07-11 15:15:00 -07:00
Jesse Gross
acef9b4c1b ggml: Use assigned layers when reporting loading stats
Reporting params.NumGPULayers can be misleading because it is the
requested number of layers, not the actual number that is loaded.
While they are often the same, there are cases where they might mismatch,
such as if the GPU backend is missing.
2025-07-11 14:21:50 -07:00
Jesse Gross
9a43994c45 ggml: Disable unused pipeline parallelism
We're not currently using it, even in cases where we could. Disabling
it improves generation performance by 10-30% with multiple GPUs.
2025-07-11 13:30:05 -07:00
Daniel Hiltgen
f8a6e88819 Only load supported models on new engine (#11362)
* Only load supported models on new engine

Verify the model is supported before trying to load

* int: testcase for all library models
2025-07-11 12:21:54 -07:00
Jesse Gross
35fda7b4af ggml: Report ordinal IDs for AMD GPUs on Windows
We don't get valid UUIDs for AMD GPUs on Windows, so the best option
is to use the ordinal IDs. This brings us in line with what we currently
do on the Ollama server - the only exception is AMD GPUs on Linux, which
falls back to using ordinal IDs. The GGML implementation has no fallback
but it doesn't appear to occur for any of the GPUs that we support.

It's also possible that there are collisions between ordinal IDs for
different libraries - however the only places where we use them are
AMD on Windows and Metal on Mac, which can never occur on the same
system.
2025-07-09 10:35:31 -07:00
Daniel Hiltgen
66fb8575ce doc: add MacOS docs (#11334)
also removes stale model dir instructions for windows
2025-07-08 15:38:04 -07:00
Daniel Hiltgen
20c3266e94 Reduce default parallelism to 1 (#11330)
The current scheduler algorithm of picking the parallelism based on available
VRAM complicates the upcoming dynamic layer memory allocation algorithm.  This
changes the default to 1, with the intent going forward that parallelism is
explicit and will no longer be dynamically determined.  Removal of the dynamic
logic will come in a follow up.
2025-07-08 12:08:37 -07:00
Daniel Hiltgen
34088dbcfb API/CLI context enhancements (#11331)
* API: expose context size of loaded models

* CLI: add context UX

This adds a column in the ps output to show the model's context size.
2025-07-08 11:59:06 -07:00
likelovewant
e41dd73705 Merge branch 'ollama:main' into main 2025-07-08 17:07:24 +08:00
Parth Sareen
43107b15b9 add tool_name to api.md (#11326) 2025-07-07 16:53:13 -07:00
Parth Sareen
1f91cb0c8c template: add tool result compatibility (#11294) 2025-07-07 15:53:42 -07:00
Daniel Hiltgen
12d8ad0d38 ci: modularization (#11324)
switch a few constants to variables
2025-07-07 14:07:43 -07:00
Jesse Gross
592d21e7db Revert "ggml: Temporarily disable reporting UUIDs"
The root cause was an unclean upgrade - this code is fine.

This reverts commit 45f216a9c7.
2025-07-07 11:31:02 -07:00
Jeffrey Morgan
5a08b01f5b readme: update Ollama icon size 2025-07-05 17:20:42 -07:00
Daniel Hiltgen
4f473e224c int: add performance integration tests (#11173)
usage example:
  go test --tags=integration,perf -count 1 ./integration -v -timeout 1h -run TestModelsPerf 2>&1 | tee int.log
  cat int.log | grep MODEL_PERF_HEADER | cut -f2- -d: > perf.csv
  cat int.log | grep MODEL_PERF_DATA | cut -f2- -d: >> perf.csv
2025-07-05 16:07:09 -07:00
Daniel Hiltgen
9d60bb44cf doc: add NVIDIA blackwell to supported list (#11307) 2025-07-05 16:06:30 -07:00
Vincent RAMPAL
f371260e75 Update base image to Ubuntu 24.04 LTS (#9681) 2025-07-05 16:02:33 -07:00
Daniel Hiltgen
c9e6d7719e doc: Update link for mac install (#11288)
Favor the dmg now.
2025-07-03 09:48:45 -07:00
Daniel Hiltgen
2c4ce40334 mimic logs for layers on new engine (#11278)
This adds some extra logs to make the new engine a bit more consistent
with the llama engine.
2025-07-02 16:38:36 -07:00
XuKecheng
5d8c173529 readme: add NativeMind to community integrations (#11242) 2025-07-01 09:46:15 -07:00
Jeffrey Morgan
44b17d2bfa tools: fix parsing tool calls with empty arguments, missing required fields (#11233) 2025-06-30 08:59:03 -07:00
likelovewant
4ad87b58bb fix conflicts 2025-06-30 13:32:17 +08:00
Attogram Project
3b8b692218 readme: add ollama-bash-toolshed to community integrations (#11224) 2025-06-29 14:59:54 -07:00
Michael Yang
4129af9205 chore: cleanup comments + unused vars (#11225) 2025-06-27 11:45:33 -07:00
Jesse Gross
45f216a9c7 ggml: Temporarily disable reporting UUIDs
This is causing segfaults, so disable it. Currently UUIDs are only
used for debugging purposes, although they planned to be used in
additional ways in the future.

Bug #11211
2025-06-27 11:27:22 -07:00
Michael Yang
d0b32def60 skip quantizing per_layer_token_embd (#11207)
this tensor isn't compatible with cuda when quantized to q4_K so skip it
2025-06-26 21:49:35 -07:00
Daniel Hiltgen
11ffc36157 ci: multi-stage release process (#11001) 2025-06-26 10:32:48 -07:00
Jeffrey Morgan
ba04902670 fs/ggml: add multiplier in graph estimates (#11208) 2025-06-26 00:19:44 -07:00
Jeffrey Morgan
3944602f51 fs/ggml: add missing architecture to OllamaEngineRequired() (#11206) 2025-06-26 00:11:23 -07:00
Michael Yang
73b642e6f3 add new gemma model (#11204)
* update patches

* cherry pick metal mean kernel

* cherry pick cuda mean kernel

* gemma3n
2025-06-25 21:47:09 -07:00
Daniel Hiltgen
ad118d8b13 ci: arm sbsa fixes (#11194) 2025-06-24 21:00:15 -07:00
Daniel Hiltgen
f08534137b ci: include dependencies 2025-06-24 20:27:43 -07:00
Daniel Hiltgen
4b4a90f233 ci: pick up arm sbsa cuda libs (#11192) 2025-06-24 18:59:22 -07:00
Daniel Hiltgen
03274a6b2f ci: recombine linux amd64 binaries (#11188)
Glue the rocm and archive builds back together.
2025-06-24 18:45:01 -07:00
Devon Rifkin
cc6463ebca Merge pull request #10238 from ollama/drifkin/array-head-count-simple
ggml: fix crash for array head counts
2025-06-24 17:50:02 -07:00
Daniel Hiltgen
405d2f628f ci: rocm parallel builds on windows (#11187)
The preset CMAKE_HIP_FLAGS isn't getting used on Windows.
This passes the parallel flag in through the C/CXX flags, along
with suppression for some log spew warnings to quiet down the build.
2025-06-24 15:27:09 -07:00
Devon Rifkin
a3f7dd3e98 Merge branch 'main' into drifkin/array-head-count-simple 2025-06-24 14:20:05 -07:00
Daniel Hiltgen
c85c0ebf89 CI: switch windows to vs 2022 (#11184)
* CI: switch windows to vs 2022

* ci: fix regex match
2025-06-24 13:26:55 -07:00
Daniel Hiltgen
10a8e04a8d avoid context overflow (#11175)
For smaller context models, make sure we do not exceed the training size.
2025-06-23 15:52:50 -07:00
Daniel Hiltgen
1c6669e64c Re-remove cuda v11 (#10694)
* Re-remove cuda v11

Revert the revert - drop v11 support requiring drivers newer than Feb 23

This reverts commit c6bcdc4223.

* Simplify layout

With only one version of the GPU libraries, we can simplify things down somewhat.  (Jetsons still require special handling)

* distinct sbsa variant for linux arm64

This avoids accidentally trying to load the sbsa cuda libraries on
a jetson system which results in crashes.

* temporary prevent rocm+cuda mixed loading
2025-06-23 14:07:00 -07:00
Devon Rifkin
b2b270ad5d Merge branch 'main' into drifkin/array-head-count-simple 2025-06-23 10:37:31 -07:00
AJ
2bb69b40c7 readme: add ai-hub to community integrations (#11169) 2025-06-23 09:21:12 -07:00
Daniel Hiltgen
65bff664cb build speedups (#11142)
Enable parallel building of the GPU architectures.
2025-06-20 12:32:51 -07:00
Michael Yang
c088ac0e79 convert: utility for merging tensors (#11069) 2025-06-20 11:12:01 -07:00
Michael Yang
0a066cfd91 Reapply "feat: incremental gguf parser (#10822)" (#11114) (#11119)
* Reapply "feat: incremental gguf parser (#10822)" (#11114)

This reverts commit a6e64fbdf2.

* fix older ggufs
2025-06-20 11:11:40 -07:00
Jesse Gross
87b7af6cee ggml: Check return status for computation.
We don't check the return status after computing the graph, which
can silently lead to bad outputs if we try to keep going and future
computation succeeds. This appears to happen in certain cases on
Apple M2 devices.

Fixes #11070
2025-06-19 17:12:49 -07:00
Daniel Hiltgen
f2527b08fb int: add coverage for older models (#11137)
Verified these fail on 0.9.1 and pass on HEAD.
2025-06-19 12:10:19 -07:00
likelovewant
71a4057fcf Merge branch 'ollama:main' into main 2025-06-19 21:11:00 +08:00
likelovewant
5ab7422508 add 2025-06-19 21:05:38 +08:00
Jeffrey Morgan
8bcb3125c1 benchmark: remove unused benchmark test (#11120)
Removes a test under benchmark/ that is unused
2025-06-18 12:58:50 -07:00
Jeffrey Morgan
6baf1e31e2 Revert "Revert "ggml: Export GPU UUIDs" (#11115)" (#11117)
Reverts PR #11115. The original change was mistakenly reverted instead of #10822
2025-06-18 07:30:49 -07:00
Jeffrey Morgan
ed567ef43b Revert "ggml: Export GPU UUIDs" (#11115)
This reverts commit aaa7818000.
2025-06-18 05:45:00 -07:00
Jeffrey Morgan
a6e64fbdf2 Revert "feat: incremental gguf parser (#10822)" (#11114)
This reverts commit 6b04cad7e8.
2025-06-18 05:42:44 -07:00
曹家巧
60cfa2a203 cache: fix comment function name in cache.go (#11110) 2025-06-18 05:21:45 -07:00
Jeffrey Morgan
55bbf3b4a1 tools: return empty arguments object instead of null (#11113) 2025-06-18 05:20:43 -07:00
Jeffrey Morgan
6bda1d2479 tools: fix parsing tool calls without any parameters (#11101)
Fixes an issue where tool calls that don't expect any parameters were
not being parsed. This also fixes two additional issues: one where
2+ tool calls would not be correctly parsed, and another where tool calls
with invalid parameters would still get parsed
2025-06-17 10:51:43 -07:00
likelovewant
50f2219dd6 Merge branch 'ollama:main' into main 2025-06-18 00:20:43 +08:00
Jeffrey Morgan
9e125d884c model: treat 'user defined' tokens as special tokens (#11077) 2025-06-16 16:03:16 -07:00
Michael Yang
a6fbfc880c gguf: fix write order (#11068)
* ggml: test write gguf order
* ggml: fix write tensor order
2025-06-16 10:42:32 -07:00
NGC13009
502028968d readme: add ollama-launcher to community integrations (#11080) 2025-06-15 21:27:49 -07:00
Phil
5a8eb0e151 readme: add GPTranslate to community integrations (#11071) 2025-06-14 08:54:03 -07:00
Jeffrey Morgan
9f8a18ec05 tools: loosen tool parsing to allow for more formats (#11030) 2025-06-12 14:18:54 -07:00
Michael Yang
6b04cad7e8 feat: incremental gguf parser (#10822)
* incremental gguf parser
* gguf: update test to not rely on gguf on disc
* re-use existing create gguf
* read capabilities from gguf kv
* kv exists
* update tests
* s/doneFunc/successFunc/g
* new buffered reader

---------

Co-authored-by: Bruce MacDonald <brucewmacdonald@gmail.com>
2025-06-12 11:04:11 -07:00
Michael Yang
45f56355d5 feat: uneven splits (#11048)
The current splitDim function only operates on tensors that are split evenly, which isn't always the case, e.g. a QKV tensor. This change allows the function to be used for arbitrary splits
2025-06-11 12:10:54 -07:00
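
A standalone Go illustration of an uneven split (not the real splitDim, which operates on tensors): a fused row of q+k+v elements is cut at arbitrary offsets.

package main

import "fmt"

// splitUneven cuts row into consecutive parts of the given sizes.
func splitUneven(row []float32, sizes ...int) [][]float32 {
	parts := make([][]float32, 0, len(sizes))
	off := 0
	for _, n := range sizes {
		parts = append(parts, row[off:off+n])
		off += n
	}
	return parts
}

func main() {
	qkv := make([]float32, 10) // pretend q=4, k=3, v=3
	parts := splitUneven(qkv, 4, 3, 3)
	fmt.Println(len(parts[0]), len(parts[1]), len(parts[2])) // 4 3 3
}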
Michael Yang
0dabb4ef6a skip tokenizer.model if possible (#11050)
if tokenizer.json is already copied, skip tokenizer.model
2025-06-11 12:10:35 -07:00
Michael Yang
2e77aa1ae7 use nn.Linear in place of ml.Tensor (#11049)
while nn.Linear.Forward isn't applicable for sparse MLP, it's still
a nice container for the tensors
2025-06-11 12:10:15 -07:00
Attogram Project
deaabe292d readme: add ollama-multirun to community integrations (#11038) 2025-06-10 14:14:51 -07:00
Jeffrey Morgan
af21a5ac39 readme: update quickstart link text to Gemma 3 2025-06-10 09:34:23 -07:00
Jeffrey Morgan
f63d7f68eb readme: update quickstart example to Gemma 3 2025-06-10 09:33:54 -07:00
Daniel Hiltgen
82ad1dbc07 mac: handle "keep" named apps (#11031)
When a user elects to keep the existing app, the
new Ollama is named `Ollama 2.app`.
This fixes the app startup flow to handle this naming pattern.
2025-06-09 16:29:57 -07:00
Daniel Hiltgen
feeabdadd2 spawn desktop quickly (#11011)
Give the desktop app a hint to start fast.
2025-06-08 09:34:52 -07:00
Krzysztof Jeziorny
fc0309615e docs: update link to AMD drivers in linux.md (#10973) 2025-06-06 23:30:04 -04:00
Jeffrey Morgan
09d308d6b6 Revert "server: add model capabilities to the list endpoint (#10174)" (#11004)
This reverts commit 0943001193.
2025-06-06 23:29:14 -04:00
Daniel Hiltgen
a8ed68bd93 launch app hidden (#10962)
When starting the app in the background, start it hidden.
2025-06-06 14:06:29 -07:00
Daniel Hiltgen
2ae65ae471 win: handle more than 2048 processes (#10997)
Fix an array out of bounds crash
2025-06-06 14:06:09 -07:00
Devon Rifkin
a3b6886b7d move thinking logic into its own package (#10990)
move thinking logic into its own package
2025-06-06 12:02:20 -07:00
Hunter Wittenborn
c6a6d7294d docs: fix typo in development.md (#10998) 2025-06-06 12:07:29 -04:00
Devon Rifkin
2cf007c9d1 Merge pull request #10987 from ollama/drifkin/export-thinking-parser
export ThinkingParser
2025-06-05 12:19:14 -07:00
Devon Rifkin
0683efa637 export ThinkingParser 2025-06-05 10:22:32 -07:00
JasonHonKL
0943001193 server: add model capabilities to the list endpoint (#10174) 2025-06-04 11:39:48 -07:00
HardCodeDev
5c42800fca readme: add SimpleOllamaUnity to community integrations (#10817) 2025-05-30 19:50:16 -07:00
Parth Sareen
65f10c2823 tools: resiliency upgrade to name and arg extraction from template (#10917) 2025-05-30 15:18:09 -07:00
Jesse Gross
aaa7818000 ggml: Export GPU UUIDs
This enables matching up devices and information reported by the backend
with system management libraries such as nvml to get accurate free
memory reporting.
2025-05-29 14:01:26 -07:00
Jesse Gross
f15ffc4320 llm: Make "POST predict" error message more informative
"POST predict" basically means that the runner has crashed, which
can have many reasons. However, many people think this is a specific
error and either report only this message or group together unrelated
bugs. This replaces it with a more friendly and helpful message.
2025-05-29 09:41:19 -07:00
Devon Rifkin
20c5fd39c8 Merge branch 'main' into drifkin/array-head-count-simple 2025-05-08 11:46:52 -07:00
Devon Rifkin
d2ee599dcf load arrays with up to 1024 elements when estimating
This mirrors the old behavior before #10382
2025-04-27 13:45:13 -07:00
Devon Rifkin
6ed8898590 ggml: fix crash for array head counts
If it's an array, it uses the max value in the array

If array values for head counts becomes more popular, we can consider a
more invasive change like #10225 to calculate more accurate estimates.

Fixes: #9984
2025-04-27 11:38:06 -07:00
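
A minimal sketch of that rule, with the metadata value simplified to a plain Go value:

package main

import "fmt"

// headCount returns the scalar head count, or the max element when the
// metadata holds an array of per-layer head counts.
func headCount(v any) int {
	switch x := v.(type) {
	case int:
		return x
	case []int:
		best := 0
		for _, n := range x {
			if n > best {
				best = n
			}
		}
		return best
	}
	return 0
}

func main() {
	fmt.Println(headCount(32))               // scalar: 32
	fmt.Println(headCount([]int{8, 32, 16})) // array: uses max, 32
}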
1301 changed files with 508875 additions and 88723 deletions

.gitattributes vendored (4 changes)

@@ -15,8 +15,12 @@ ml/backend/**/*.cu linguist-vendored
ml/backend/**/*.cuh linguist-vendored
ml/backend/**/*.m linguist-vendored
ml/backend/**/*.metal linguist-vendored
ml/backend/**/*.comp linguist-vendored
ml/backend/**/*.glsl linguist-vendored
ml/backend/**/CMakeLists.txt linguist-vendored
app/webview linguist-vendored
llama/build-info.cpp linguist-generated
ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.s linguist-generated


@@ -15,49 +15,31 @@ jobs:
environment: release
outputs:
GOFLAGS: ${{ steps.goflags.outputs.GOFLAGS }}
VERSION: ${{ steps.goflags.outputs.VERSION }}
steps:
- uses: actions/checkout@v4
- name: Set environment
id: goflags
run: |
echo GOFLAGS="'-ldflags=-w -s \"-X=github.com/ollama/ollama/version.Version=${GITHUB_REF_NAME#v}\" \"-X=github.com/ollama/ollama/server.mode=release\"'" >>$GITHUB_OUTPUT
echo VERSION="${GITHUB_REF_NAME#v}" >>$GITHUB_OUTPUT
darwin-build:
runs-on: macos-13
runs-on: macos-14-xlarge
environment: release
needs: setup-environment
strategy:
matrix:
os: [darwin]
arch: [amd64, arm64]
env:
GOFLAGS: ${{ needs.setup-environment.outputs.GOFLAGS }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
- run: |
go build -o dist/ .
env:
GOOS: ${{ matrix.os }}
GOARCH: ${{ matrix.arch }}
CGO_ENABLED: 1
CGO_CPPFLAGS: '-mmacosx-version-min=11.3'
- if: matrix.arch == 'amd64'
run: |
cmake --preset CPU -DCMAKE_OSX_DEPLOYMENT_TARGET=11.3 -DCMAKE_SYSTEM_PROCESSOR=x86_64 -DCMAKE_OSX_ARCHITECTURES=x86_64
cmake --build --parallel --preset CPU
cmake --install build --component CPU --strip --parallel 8
- uses: actions/upload-artifact@v4
with:
name: build-${{ matrix.os }}-${{ matrix.arch }}
path: dist/*
darwin-sign:
runs-on: macos-13
environment: release
needs: darwin-build
VERSION: ${{ needs.setup-environment.outputs.VERSION }}
APPLE_IDENTITY: ${{ secrets.APPLE_IDENTITY }}
APPLE_PASSWORD: ${{ secrets.APPLE_PASSWORD }}
APPLE_TEAM_ID: ${{ vars.APPLE_TEAM_ID }}
APPLE_ID: ${{ vars.APPLE_ID }}
MACOS_SIGNING_KEY: ${{ secrets.MACOS_SIGNING_KEY }}
MACOS_SIGNING_KEY_PASSWORD: ${{ secrets.MACOS_SIGNING_KEY_PASSWORD }}
CGO_CFLAGS: '-mmacosx-version-min=14.0 -O3'
CGO_CXXFLAGS: '-mmacosx-version-min=14.0 -O3'
CGO_LDFLAGS: '-mmacosx-version-min=14.0 -O3'
steps:
- uses: actions/checkout@v4
- run: |
@@ -68,33 +50,21 @@ jobs:
security import certificate.p12 -k build.keychain -P $MACOS_SIGNING_KEY_PASSWORD -T /usr/bin/codesign
security set-key-partition-list -S apple-tool:,apple:,codesign: -s -k password build.keychain
security set-keychain-settings -lut 3600 build.keychain
env:
MACOS_SIGNING_KEY: ${{ secrets.MACOS_SIGNING_KEY }}
MACOS_SIGNING_KEY_PASSWORD: ${{ secrets.MACOS_SIGNING_KEY_PASSWORD }}
- uses: actions/download-artifact@v4
- uses: actions/setup-go@v5
with:
name: build-darwin-amd64
path: dist/darwin-amd64
- uses: actions/download-artifact@v4
with:
name: build-darwin-arm64
path: dist/darwin-arm64
go-version-file: go.mod
- run: |
export VERSION=${GITHUB_REF_NAME#v}
./scripts/build_darwin.sh sign macapp
env:
APPLE_IDENTITY: ${{ secrets.APPLE_IDENTITY }}
APPLE_PASSWORD: ${{ secrets.APPLE_PASSWORD }}
APPLE_TEAM_ID: ${{ vars.APPLE_TEAM_ID }}
APPLE_ID: ${{ vars.APPLE_ID }}
SDKROOT: /Applications/Xcode_14.1.0.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk
DEVELOPER_DIR: /Applications/Xcode_14.1.0.app/Contents/Developer
./scripts/build_darwin.sh
- name: Log build results
run: |
ls -l dist/
- uses: actions/upload-artifact@v4
with:
name: dist-darwin
name: bundles-darwin
path: |
dist/Ollama-darwin.zip
dist/ollama-darwin.tgz
dist/*.tgz
dist/*.zip
dist/*.dmg
windows-depends:
strategy:
@@ -103,21 +73,44 @@ jobs:
arch: [amd64]
preset: ['CPU']
include:
- os: windows
arch: amd64
preset: 'CUDA 11'
install: https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda_11.3.1_465.89_win10.exe
cuda-version: '11.3'
- os: windows
arch: amd64
preset: 'CUDA 12'
install: https://developer.download.nvidia.com/compute/cuda/12.8.0/local_installers/cuda_12.8.0_571.96_windows.exe
cuda-components:
- '"cudart"'
- '"nvcc"'
- '"cublas"'
- '"cublas_dev"'
cuda-version: '12.8'
flags: ''
- os: windows
arch: amd64
preset: 'CUDA 13'
install: https://developer.download.nvidia.com/compute/cuda/13.0.0/local_installers/cuda_13.0.0_windows.exe
cuda-components:
- '"cudart"'
- '"nvcc"'
- '"cublas"'
- '"cublas_dev"'
- '"crt"'
- '"nvvm"'
- '"nvptxcompiler"'
cuda-version: '13.0'
flags: ''
- os: windows
arch: amd64
preset: 'ROCm 6'
install: https://download.amd.com/developer/eula/rocm-hub/AMD-Software-PRO-Edition-24.Q4-WinSvr2022-For-HIP.exe
rocm-version: '6.2'
flags: '-DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_C_FLAGS="-parallel-jobs=4 -Wno-ignored-attributes -Wno-deprecated-pragma" -DCMAKE_CXX_FLAGS="-parallel-jobs=4 -Wno-ignored-attributes -Wno-deprecated-pragma"'
runner_dir: 'rocm'
- os: windows
arch: amd64
preset: Vulkan
install: https://sdk.lunarg.com/sdk/download/1.4.321.1/windows/vulkansdk-windows-X64-1.4.321.1.exe
flags: ''
runner_dir: 'vulkan'
runs-on: ${{ matrix.arch == 'arm64' && format('{0}-{1}', matrix.os, matrix.arch) || matrix.os }}
environment: release
env:
@@ -127,13 +120,14 @@ jobs:
run: |
choco install -y --no-progress ccache ninja
ccache -o cache_dir=${{ github.workspace }}\.ccache
- if: startsWith(matrix.preset, 'CUDA ') || startsWith(matrix.preset, 'ROCm ')
- if: startsWith(matrix.preset, 'CUDA ') || startsWith(matrix.preset, 'ROCm ') || startsWith(matrix.preset, 'Vulkan')
id: cache-install
uses: actions/cache/restore@v4
with:
path: |
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA
C:\Program Files\AMD\ROCm
C:\VulkanSDK
key: ${{ matrix.install }}
- if: startsWith(matrix.preset, 'CUDA ')
name: Install CUDA ${{ matrix.cuda-version }}
@@ -141,7 +135,7 @@ jobs:
$ErrorActionPreference = "Stop"
if ("${{ steps.cache-install.outputs.cache-hit }}" -ne 'true') {
Invoke-WebRequest -Uri "${{ matrix.install }}" -OutFile "install.exe"
$subpackages = @("cudart", "nvcc", "cublas", "cublas_dev") | Foreach-Object {"${_}_${{ matrix.cuda-version }}"}
$subpackages = @(${{ join(matrix.cuda-components, ', ') }}) | Foreach-Object {"${_}_${{ matrix.cuda-version }}"}
Start-Process -FilePath .\install.exe -ArgumentList (@("-s") + $subpackages) -NoNewWindow -Wait
}
@@ -160,6 +154,21 @@ jobs:
echo "$hipPath\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
echo "CC=$hipPath\bin\clang.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
echo "CXX=$hipPath\bin\clang++.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
echo "HIPCXX=$hipPath\bin\clang++.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
echo "HIP_PLATFORM=amd" | Out-File -FilePath $env:GITHUB_ENV -Append
echo "CMAKE_PREFIX_PATH=$hipPath" | Out-File -FilePath $env:GITHUB_ENV -Append
- if: matrix.preset == 'Vulkan'
name: Install Vulkan ${{ matrix.rocm-version }}
run: |
$ErrorActionPreference = "Stop"
if ("${{ steps.cache-install.outputs.cache-hit }}" -ne 'true') {
Invoke-WebRequest -Uri "${{ matrix.install }}" -OutFile "install.exe"
Start-Process -FilePath .\install.exe -ArgumentList "-c","--am","--al","in" -NoNewWindow -Wait
}
$vulkanPath = (Resolve-Path "C:\VulkanSDK\*").path
echo "$vulkanPath\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
echo "VULKAN_SDK=$vulkanPath" >> $env:GITHUB_ENV
- if: matrix.preset == 'CPU'
run: |
echo "CC=clang.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
@@ -170,6 +179,7 @@ jobs:
path: |
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA
C:\Program Files\AMD\ROCm
C:\VulkanSDK
key: ${{ matrix.install }}
- uses: actions/checkout@v4
- uses: actions/cache@v4
@@ -178,13 +188,17 @@ jobs:
key: ccache-${{ matrix.os }}-${{ matrix.arch }}-${{ matrix.preset }}
- name: Build target "${{ matrix.preset }}"
run: |
Import-Module 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Microsoft.VisualStudio.DevShell.dll'
Enter-VsDevShell -VsInstallPath 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise' -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
cmake --preset "${{ matrix.preset }}"
cmake --build --parallel --preset "${{ matrix.preset }}"
cmake --install build --component "${{ startsWith(matrix.preset, 'CUDA ') && 'CUDA' || startsWith(matrix.preset, 'ROCm ') && 'HIP' || 'CPU' }}" --strip --parallel 8
Import-Module 'C:\Program Files\Microsoft Visual Studio\2022\Enterprise\Common7\Tools\Microsoft.VisualStudio.DevShell.dll'
Enter-VsDevShell -VsInstallPath 'C:\Program Files\Microsoft Visual Studio\2022\Enterprise' -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
cmake --preset "${{ matrix.preset }}" ${{ matrix.flags }} --install-prefix "$((pwd).Path)\dist\${{ matrix.os }}-${{ matrix.arch }}"
cmake --build --parallel ([Environment]::ProcessorCount) --preset "${{ matrix.preset }}"
cmake --install build --component "${{ startsWith(matrix.preset, 'CUDA ') && 'CUDA' || startsWith(matrix.preset, 'ROCm ') && 'HIP' || startsWith(matrix.preset, 'Vulkan') && 'Vulkan' || 'CPU' }}" --strip
Remove-Item -Path dist\lib\ollama\rocm\rocblas\library\*gfx906* -ErrorAction SilentlyContinue
env:
CMAKE_GENERATOR: Ninja
- name: Log build results
run: |
gci -path .\dist -Recurse -File | ForEach-Object { get-filehash -path $_.FullName -Algorithm SHA256 } | format-list
- uses: actions/upload-artifact@v4
with:
name: depends-${{ matrix.os }}-${{ matrix.arch }}-${{ matrix.preset }}
@@ -195,19 +209,20 @@ jobs:
matrix:
os: [windows]
arch: [amd64, arm64]
include:
- os: windows
arch: amd64
llvmarch: x86_64
- os: windows
arch: arm64
llvmarch: aarch64
runs-on: ${{ matrix.arch == 'arm64' && format('{0}-{1}', matrix.os, matrix.arch) || matrix.os }}
environment: release
needs: [setup-environment]
env:
GOFLAGS: ${{ needs.setup-environment.outputs.GOFLAGS }}
VERSION: ${{ needs.setup-environment.outputs.VERSION }}
steps:
- name: Install AMD64 system dependencies
if: matrix.arch == 'amd64'
run: |
$ErrorActionPreference = "Stop"
Start-Process "C:\msys64\usr\bin\pacman.exe" -ArgumentList @("-S", "--noconfirm", "mingw-w64-clang-x86_64-gcc-compat", "mingw-w64-clang-x86_64-clang") -NoNewWindow -Wait
echo "C:\msys64\usr\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
echo "C:\msys64\clang64\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
- name: Install ARM64 system dependencies
if: matrix.arch == 'arm64'
run: |
@@ -217,38 +232,57 @@ jobs:
iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
echo "C:\ProgramData\chocolatey\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
Invoke-WebRequest -Uri https://aka.ms/vs/17/release/vc_redist.arm64.exe -OutFile "${{ runner.temp }}\vc_redist.arm64.exe"
Start-Process -FilePath "${{ runner.temp }}\vc_redist.arm64.exe" -ArgumentList @("/install", "/quiet", "/norestart") -NoNewWindow -Wait
choco install -y --no-progress git gzip
echo "C:\Program Files\Git\cmd" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
Invoke-WebRequest -Uri "https://github.com/mstorsjo/llvm-mingw/releases/download/20240619/llvm-mingw-20240619-ucrt-aarch64.zip" -OutFile "${{ runner.temp }}\llvm-mingw-ucrt-aarch64.zip"
Expand-Archive -Path ${{ runner.temp }}\llvm-mingw-ucrt-aarch64.zip -DestinationPath "C:\Program Files\"
$installPath=(Resolve-Path -Path "C:\Program Files\llvm-mingw-*-ucrt-aarch64").path
echo $installPath\bin | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
- name: Install clang and gcc-compat
run: |
$ErrorActionPreference = "Stop"
Set-ExecutionPolicy Bypass -Scope Process -Force
Invoke-WebRequest -Uri "https://github.com/mstorsjo/llvm-mingw/releases/download/20240619/llvm-mingw-20240619-ucrt-${{ matrix.llvmarch }}.zip" -OutFile "${{ runner.temp }}\llvm-mingw-ucrt.zip"
Expand-Archive -Path ${{ runner.temp }}\llvm-mingw-ucrt.zip -DestinationPath "C:\Program Files\"
$installPath=(Resolve-Path -Path "C:\Program Files\llvm-mingw-*-ucrt*").path
echo "$installPath\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
- run: |
go build -o dist/${{ matrix.os }}-${{ matrix.arch }}/ .
- if: matrix.arch == 'arm64'
- name: Verify gcc is actually clang
run: |
Invoke-WebRequest -Uri "https://aka.ms/vs/17/release/vc_redist.arm64.exe" -OutFile "dist\windows-arm64\vc_redist.arm64.exe"
$ErrorActionPreference='Continue'
$version=& gcc -v 2>&1
$version=$version -join "`n"
echo "gcc is $version"
if ($version -notmatch 'clang') {
echo "ERROR: GCC must be clang for proper utf16 handling"
exit 1
}
$ErrorActionPreference='Stop'
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: "20"
- run: |
$env:VERSION='${{ github.ref_name }}' -Replace "v(.*)", '$1'
& .\scripts\build_windows.ps1 buildApp
env:
VCToolsRedistDir: stub
./scripts/build_windows ollama app
- name: Log build results
run: |
gci -path .\dist -Recurse -File | ForEach-Object { get-filehash -path $_.FullName -Algorithm SHA256 } | format-list
- uses: actions/upload-artifact@v4
with:
name: build-${{ matrix.os }}-${{ matrix.arch }}
path: |
dist\${{ matrix.os }}-${{ matrix.arch }}\*.exe
dist\${{ matrix.os }}-${{ matrix.arch }}-app.exe
dist\*
windows-sign:
runs-on: windows-2022
windows-app:
runs-on: windows
environment: release
needs: [windows-depends, windows-build]
needs: [windows-build, windows-depends]
env:
GOFLAGS: ${{ needs.setup-environment.outputs.GOFLAGS }}
VERSION: ${{ needs.setup-environment.outputs.VERSION }}
KEY_CONTAINER: ${{ vars.KEY_CONTAINER }}
steps:
- uses: actions/checkout@v4
- uses: google-github-actions/auth@v2
@@ -265,26 +299,33 @@ jobs:
& "${{ runner.temp }}\plugin\*\kmscng.msi" /quiet
echo "${{ vars.OLLAMA_CERT }}" >ollama_inc.crt
- uses: actions/setup-go@v5
with:
go-version-file: go.mod
- uses: actions/download-artifact@v4
with:
pattern: build-windows-*
path: dist\
pattern: depends-windows*
path: dist
merge-multiple: true
- uses: actions/download-artifact@v4
with:
pattern: depends-windows-amd64-*
path: dist\windows-amd64\
pattern: build-windows*
path: dist
merge-multiple: true
- name: Log dist contents after download
run: |
gci -path .\dist -recurse
- run: |
& .\scripts\build_windows.ps1 gatherDependencies sign buildInstaller distZip
env:
KEY_CONTAINER: ${{ vars.KEY_CONTAINER }}
./scripts/build_windows.ps1 deps sign installer zip
- name: Log contents after build
run: |
gci -path .\dist -Recurse -File | ForEach-Object { get-filehash -path $_.FullName -Algorithm SHA256 } | format-list
- uses: actions/upload-artifact@v4
with:
name: dist-windows
name: bundles-windows
path: |
dist\OllamaSetup.exe
dist\ollama-windows-*.zip
dist/*.zip
dist/OllamaSetup.exe
linux-build:
strategy:
@@ -317,28 +358,34 @@ jobs:
CGO_CFLAGS=${{ env.CGO_CFLAGS }}
CGO_CXXFLAGS=${{ env.CGO_CXXFLAGS }}
outputs: type=local,dest=dist/${{ matrix.os }}-${{ matrix.arch }}
cache-from: type=registry,ref=ollama/ollama:latest
cache-from: type=registry,ref=${{ vars.DOCKER_REPO }}:latest
cache-to: type=inline
- run: |
for COMPONENT in bin/* lib/ollama/*; do
case "$COMPONENT" in
bin/ollama) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}.tar.in ;;
lib/ollama/*.so) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}.tar.in ;;
lib/ollama/cuda_v11) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}.tar.in ;;
lib/ollama/cuda_v12) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}.tar.in ;;
lib/ollama/cuda_jetpack5) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}-jetpack5.tar.in ;;
lib/ollama/cuda_jetpack6) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}-jetpack6.tar.in ;;
lib/ollama/rocm) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}-rocm.tar.in ;;
bin/ollama) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}.tar.in ;;
lib/ollama/*.so*) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}.tar.in ;;
lib/ollama/cuda_v*) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}.tar.in ;;
lib/ollama/vulkan*) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}.tar.in ;;
lib/ollama/cuda_jetpack5) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}-jetpack5.tar.in ;;
lib/ollama/cuda_jetpack6) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}-jetpack6.tar.in ;;
lib/ollama/rocm) echo $COMPONENT >>ollama-${{ matrix.os }}-${{ matrix.arch }}-rocm.tar.in ;;
esac
done
working-directory: dist/${{ matrix.os }}-${{ matrix.arch }}
- run: |
echo "Manifests"
for ARCHIVE in dist/${{ matrix.os }}-${{ matrix.arch }}/*.tar.in ; do
echo $ARCHIVE
cat $ARCHIVE
done
- run: |
for ARCHIVE in dist/${{ matrix.os }}-${{ matrix.arch }}/*.tar.in; do
tar c -C dist/${{ matrix.os }}-${{ matrix.arch }} -T $ARCHIVE --owner 0 --group 0 | pigz -9vc >$(basename ${ARCHIVE//.*/}.tgz);
done
- uses: actions/upload-artifact@v4
with:
name: dist-${{ matrix.os }}-${{ matrix.arch }}-${{ matrix.target }}
name: bundles-${{ matrix.os }}-${{ matrix.arch }}-${{ matrix.target }}
path: |
*.tgz
@@ -385,8 +432,8 @@ jobs:
context: .
platforms: ${{ matrix.os }}/${{ matrix.arch }}
build-args: ${{ matrix.build-args }}
outputs: type=image,name=ollama/ollama,push-by-digest=true,name-canonical=true,push=true
cache-from: type=registry,ref=ollama/ollama:latest
outputs: type=image,name=${{ vars.DOCKER_REPO }},push-by-digest=true,name-canonical=true,push=true
cache-from: type=registry,ref=${{ vars.DOCKER_REPO }}:latest
cache-to: type=inline
- run: |
mkdir -p ${{ matrix.os }}-${{ matrix.arch }}
@@ -418,7 +465,7 @@ jobs:
latest=false
suffix=${{ matrix.suffix }}
images: |
ollama/ollama
${{ vars.DOCKER_REPO }}
tags: |
type=ref,enable=true,priority=600,prefix=pr-,event=pr
type=semver,pattern={{version}}
@@ -428,31 +475,15 @@ jobs:
path: ${{ runner.temp }}
merge-multiple: true
- run: |
docker buildx imagetools create $(echo '${{ steps.metadata.outputs.json }}' | jq -cr '.tags | map("-t", .) | join(" ")') $(cat *-${{ matrix.suffix }}.txt | xargs printf 'ollama/ollama@%s ')
docker buildx imagetools inspect ollama/ollama:${{ steps.metadata.outputs.version }}
docker buildx imagetools create $(echo '${{ steps.metadata.outputs.json }}' | jq -cr '.tags | map("-t", .) | join(" ")') $(cat *-${{ matrix.suffix }}.txt | xargs printf '${{ vars.DOCKER_REPO }}@%s ')
docker buildx imagetools inspect ${{ vars.DOCKER_REPO }}:${{ steps.metadata.outputs.version }}
working-directory: ${{ runner.temp }}
# Trigger downstream release process
trigger:
# Final release process
release:
runs-on: ubuntu-latest
environment: release
needs: [darwin-build, windows-build, windows-depends]
steps:
- name: Trigger downstream release process
run: |
curl -L \
-X POST \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ secrets.RELEASE_TOKEN }}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
https://api.github.com/repos/ollama/${{ vars.RELEASE_REPO }}/dispatches \
-d "{\"event_type\": \"trigger-workflow\", \"client_payload\": {\"run_id\": \"${GITHUB_RUN_ID}\", \"version\": \"${GITHUB_REF_NAME#v}\"}}"
# Aggregate all the assets and ship a release
release:
needs: [darwin-sign, windows-sign, linux-build]
runs-on: linux
environment: release
needs: [darwin-build, windows-app, linux-build]
permissions:
contents: write
env:
@@ -461,23 +492,18 @@ jobs:
- uses: actions/checkout@v4
- uses: actions/download-artifact@v4
with:
name: dist-darwin
path: dist
- uses: actions/download-artifact@v4
with:
name: dist-windows
path: dist
- uses: actions/download-artifact@v4
with:
pattern: dist-linux-*
pattern: bundles-*
path: dist
merge-multiple: true
- run: find . -type f -not -name 'sha256sum.txt' | xargs sha256sum | tee sha256sum.txt
- name: Log dist contents
run: |
ls -l dist/
- name: Generate checksum file
run: find . -type f -not -name 'sha256sum.txt' | xargs sha256sum | tee sha256sum.txt
working-directory: dist
- name: Create or update Release
- name: Create or update Release for tag
run: |
RELEASE_VERSION="$(echo ${GITHUB_REF_NAME} | cut -f1 -d-)"
echo "Looking for existing release for ${RELEASE_VERSION}"
OLD_TAG=$(gh release ls --json name,tagName | jq -r ".[] | select(.name == \"${RELEASE_VERSION}\") | .tagName")
if [ -n "$OLD_TAG" ]; then
@@ -491,5 +517,17 @@ jobs:
--generate-notes \
--prerelease
fi
echo "Uploading artifacts for tag ${GITHUB_REF_NAME}"
gh release upload ${GITHUB_REF_NAME} dist/* --clobber
- name: Upload release artifacts
run: |
pids=()
for payload in dist/*.txt dist/*.zip dist/*.tgz dist/*.exe dist/*.dmg ; do
echo "Uploading $payload"
gh release upload ${GITHUB_REF_NAME} $payload --clobber &
pids[$!]=$!
sleep 1
done
echo "Waiting for uploads to complete"
for pid in "${pids[*]}"; do
wait $pid
done
echo "done"


@@ -36,7 +36,7 @@ jobs:
| xargs python3 -c "import sys; from pathlib import Path; print(any(Path(x).match(glob) for x in sys.argv[1:] for glob in '$*'.split(' ')))"
}
echo changed=$(changed 'llama/llama.cpp/**' 'ml/backend/ggml/ggml/**') | tee -a $GITHUB_OUTPUT
echo changed=$(changed 'llama/llama.cpp/**/*' 'ml/backend/ggml/ggml/**/*') | tee -a $GITHUB_OUTPUT
linux:
needs: [changes]
@@ -46,12 +46,18 @@ jobs:
include:
- preset: CPU
- preset: CUDA
container: nvidia/cuda:11.8.0-devel-ubuntu22.04
container: nvidia/cuda:13.0.0-devel-ubuntu22.04
flags: '-DCMAKE_CUDA_ARCHITECTURES=87'
- preset: ROCm
container: rocm/dev-ubuntu-22.04:6.1.2
extra-packages: rocm-libs
flags: '-DAMDGPU_TARGETS=gfx1010 -DCMAKE_PREFIX_PATH=/opt/rocm'
- preset: Vulkan
container: ubuntu:22.04
extra-packages: >
mesa-vulkan-drivers vulkan-tools
libvulkan1 libvulkan-dev
vulkan-sdk cmake ccache g++ make
runs-on: linux
container: ${{ matrix.container }}
steps:
@@ -59,7 +65,19 @@ jobs:
- run: |
[ -n "${{ matrix.container }}" ] || sudo=sudo
$sudo apt-get update
# Add LunarG Vulkan SDK apt repo for Ubuntu 22.04
if [ "${{ matrix.preset }}" = "Vulkan" ]; then
$sudo apt-get install -y --no-install-recommends wget gnupg ca-certificates software-properties-common
wget -qO - https://packages.lunarg.com/lunarg-signing-key-pub.asc | $sudo gpg --dearmor -o /usr/share/keyrings/lunarg-archive-keyring.gpg
# Use signed-by to bind the repo to the installed keyring to avoid NO_PUBKEY
echo "deb [signed-by=/usr/share/keyrings/lunarg-archive-keyring.gpg] https://packages.lunarg.com/vulkan/1.4.313 jammy main" | $sudo tee /etc/apt/sources.list.d/lunarg-vulkan-1.4.313-jammy.list > /dev/null
$sudo apt-get update
fi
$sudo apt-get install -y cmake ccache ${{ matrix.extra-packages }}
# Export VULKAN_SDK if provided by LunarG package (defensive)
if [ -d "/usr/lib/x86_64-linux-gnu/vulkan" ] && [ "${{ matrix.preset }}" = "Vulkan" ]; then
echo "VULKAN_SDK=/usr" >> $GITHUB_ENV
fi
env:
DEBIAN_FRONTEND: noninteractive
- uses: actions/cache@v4
@@ -78,23 +96,35 @@ jobs:
include:
- preset: CPU
- preset: CUDA
install: https://developer.download.nvidia.com/compute/cuda/11.3.1/local_installers/cuda_11.3.1_465.89_win10.exe
install: https://developer.download.nvidia.com/compute/cuda/13.0.0/local_installers/cuda_13.0.0_windows.exe
flags: '-DCMAKE_CUDA_ARCHITECTURES=80'
cuda-components:
- '"cudart"'
- '"nvcc"'
- '"cublas"'
- '"cublas_dev"'
- '"crt"'
- '"nvvm"'
- '"nvptxcompiler"'
cuda-version: '13.0'
- preset: ROCm
install: https://download.amd.com/developer/eula/rocm-hub/AMD-Software-PRO-Edition-24.Q4-WinSvr2022-For-HIP.exe
flags: '-DAMDGPU_TARGETS=gfx1010'
flags: '-DAMDGPU_TARGETS=gfx1010 -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DCMAKE_C_FLAGS="-parallel-jobs=4 -Wno-ignored-attributes -Wno-deprecated-pragma" -DCMAKE_CXX_FLAGS="-parallel-jobs=4 -Wno-ignored-attributes -Wno-deprecated-pragma"'
- preset: Vulkan
install: https://sdk.lunarg.com/sdk/download/1.4.321.1/windows/vulkansdk-windows-X64-1.4.321.1.exe
runs-on: windows
steps:
- run: |
choco install -y --no-progress ccache ninja
ccache -o cache_dir=${{ github.workspace }}\.ccache
- if: matrix.preset == 'CUDA' || matrix.preset == 'ROCm'
- if: matrix.preset == 'CUDA' || matrix.preset == 'ROCm' || matrix.preset == 'Vulkan'
id: cache-install
uses: actions/cache/restore@v4
with:
path: |
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA
C:\Program Files\AMD\ROCm
C:\VulkanSDK
key: ${{ matrix.install }}
- if: matrix.preset == 'CUDA'
name: Install CUDA ${{ matrix.cuda-version }}
@@ -102,7 +132,8 @@ jobs:
$ErrorActionPreference = "Stop"
if ("${{ steps.cache-install.outputs.cache-hit }}" -ne 'true') {
Invoke-WebRequest -Uri "${{ matrix.install }}" -OutFile "install.exe"
Start-Process -FilePath .\install.exe -ArgumentList (@("-s", "cudart_11.3", "nvcc_11.3", "cublas_11.3", "cublas_dev_11.3")) -NoNewWindow -Wait
$subpackages = @(${{ join(matrix.cuda-components, ', ') }}) | Foreach-Object {"${_}_${{ matrix.cuda-version }}"}
Start-Process -FilePath .\install.exe -ArgumentList (@("-s") + $subpackages) -NoNewWindow -Wait
}
$cudaPath = (Resolve-Path "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\*").path
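The `join` expression above expands each selected CUDA component into a versioned installer subpackage. A rough Go equivalent of that expansion, for illustration only (the real step runs in PowerShell):

```go
package main

import "fmt"

func main() {
	components := []string{"cudart", "nvcc", "cublas", "cublas_dev", "crt", "nvvm", "nvptxcompiler"}
	version := "13.0"

	// Mirror of: @("-s") + ($components | ForEach-Object {"${_}_${version}"})
	args := []string{"-s"}
	for _, c := range components {
		args = append(args, c+"_"+version) // e.g. "cudart_13.0"
	}
	fmt.Println(args) // [-s cudart_13.0 nvcc_13.0 cublas_13.0 ...]
}
```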
@@ -120,12 +151,28 @@ jobs:
echo "$hipPath\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
echo "CC=$hipPath\bin\clang.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
echo "CXX=$hipPath\bin\clang++.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
echo "HIPCXX=$hipPath\bin\clang++.exe" | Out-File -FilePath $env:GITHUB_ENV -Append
echo "HIP_PLATFORM=amd" | Out-File -FilePath $env:GITHUB_ENV -Append
echo "CMAKE_PREFIX_PATH=$hipPath" | Out-File -FilePath $env:GITHUB_ENV -Append
- if: matrix.preset == 'Vulkan'
name: Install Vulkan
run: |
$ErrorActionPreference = "Stop"
if ("${{ steps.cache-install.outputs.cache-hit }}" -ne 'true') {
Invoke-WebRequest -Uri "${{ matrix.install }}" -OutFile "install.exe"
Start-Process -FilePath .\install.exe -ArgumentList "-c","--am","--al","in" -NoNewWindow -Wait
}
$vulkanPath = (Resolve-Path "C:\VulkanSDK\*").path
echo "$vulkanPath\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
echo "VULKAN_SDK=$vulkanPath" >> $env:GITHUB_ENV
- if: ${{ !cancelled() && steps.cache-install.outputs.cache-hit != 'true' }}
uses: actions/cache/save@v4
with:
path: |
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA
C:\Program Files\AMD\ROCm
C:\VulkanSDK
key: ${{ matrix.install }}
- uses: actions/checkout@v4
- uses: actions/cache@v4
@@ -133,8 +180,8 @@ jobs:
path: ${{ github.workspace }}\.ccache
key: ccache-${{ runner.os }}-${{ runner.arch }}-${{ matrix.preset }}
- run: |
Import-Module 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\Microsoft.VisualStudio.DevShell.dll'
Enter-VsDevShell -VsInstallPath 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise' -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
Import-Module 'C:\Program Files\Microsoft Visual Studio\2022\Enterprise\Common7\Tools\Microsoft.VisualStudio.DevShell.dll'
Enter-VsDevShell -VsInstallPath 'C:\Program Files\Microsoft Visual Studio\2022\Enterprise' -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
cmake --preset "${{ matrix.preset }}" ${{ matrix.flags }}
cmake --build --parallel --preset "${{ matrix.preset }}"
env:
@@ -154,82 +201,34 @@ jobs:
runs-on: ${{ matrix.os }}
env:
CGO_ENABLED: '1'
GOEXPERIMENT: 'synctest'
steps:
- name: checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # 4.2.2
- name: cache restore
uses: actions/cache/restore@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
- uses: actions/checkout@v4
- uses: actions/setup-go@v5
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
# contains zips that can be unpacked in parallel faster than they can be
# fetched and extracted by tar
path: |
~/.cache/go-build
~/go/pkg/mod/cache
~\AppData\Local\go-build
# NOTE: The -3- here should be incremented when the scheme of data to be
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-3-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
restore-keys: |
${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-3-${{ hashFiles('**/go.sum') }}
${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-3-
- name: Setup Go
uses: actions/setup-go@v5
go-version-file: 'go.mod'
- uses: actions/setup-node@v4
with:
# The caching strategy of setup-go is less than ideal, and wastes
# time by not saving artifacts due to small failures like the linter
# complaining, etc. This means subsequent runs have to rebuild their world
# again until all checks pass. For instance, if you misspell a word,
# you're punished until you fix it. This is more hostile than
# helpful.
cache: false
go-version-file: go.mod
# It is tempting to run this in a platform independent way, but the past
# shows this codebase will see introductions of platform specific code
# generation, and so we need to check this per platform to ensure we
# don't abuse go generate on specific platforms.
- name: check that 'go generate' is clean
if: always()
node-version: '20'
- name: Install UI dependencies
working-directory: ./app/ui/app
run: npm ci
- name: Install tscriptify
run: |
go generate ./...
git diff --name-only --exit-code || (echo "Please run 'go generate ./...'." && exit 1)
go install github.com/tkrajina/typescriptify-golang-structs/tscriptify@latest
- name: Run UI tests
if: ${{ startsWith(matrix.os, 'ubuntu') }}
working-directory: ./app/ui/app
run: npm test
- name: Run go generate
run: go generate ./...
- name: go test
if: always()
run: go test -count=1 -benchtime=1x ./...
# TODO(bmizerany): replace this heavy tool with just the
# tools/checks/binaries we want and then make them all run in parallel
# across jobs, not on a single tiny vm on Github Actions.
- uses: golangci/golangci-lint-action@v6
- uses: golangci/golangci-lint-action@v9
with:
args: --timeout 10m0s -v
- name: cache save
# Always save the cache, even if the job fails. The artifacts produced
# during the building of test binaries are not all for naught. They can
# be used to speed up subsequent runs.
if: always()
uses: actions/cache/save@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
with:
# Note: unlike the other setups, this is only grabbing the mod download
# cache, rather than the whole mod directory, as the download cache
# contains zips that can be unpacked in parallel faster than they can be
# fetched and extracted by tar
path: |
~/.cache/go-build
~/go/pkg/mod/cache
~\AppData\Local\go-build
# NOTE: The -3- here should be incremented when the scheme of data to be
# cached changes (e.g. path above changes).
key: ${{ github.job }}-${{ runner.os }}-${{ matrix.goarch }}-${{ matrix.buildflags }}-go-3-${{ hashFiles('**/go.sum') }}-${{ github.run_id }}
only-new-issues: true
patches:
runs-on: ubuntu-latest
@@ -238,4 +237,4 @@ jobs:
- name: Verify patches apply cleanly and do not change files
run: |
make -f Makefile.sync clean checkout apply-patches sync
git diff --compact-summary --exit-code
git diff --compact-summary --exit-code

View File

@@ -1,59 +0,0 @@
name: Upload Release Assets
on:
release:
types: [created]
jobs:
upload-release-assets:
runs-on: windows
steps:
- name: Checkout code
uses: actions/checkout@v2
# This step checks out the code from the repository to the runner.
# Assuming you already have compilation and packaging steps that generate the following files:
# dist/ollama-windows-amd64.7z
# dist/OllamaSetup.exe
- name: Get the version
id: get_version
run: |
echo ::set-output name=VERSION::${GITHUB_REF#refs/tags/}
# This step extracts the version number from the tag name.
- name: Create Release
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref_name }}
release_name: Release ${{ github.ref_name }}
body: |
Description of the release goes here.
# draft: false (Uncomment if you want to create a non-draft release)
# prerelease: false (Uncomment if you want to create a non-prerelease version)
# This step creates a new release on GitHub.
- name: Upload ollama-windows-amd64.7z Release Asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: dist/ollama-windows-amd64.7z
asset_name: ollama-windows-amd64.7z
asset_content_type: application/x-7z-compressed
# This step uploads the .7z file as a release asset.
- name: Upload OllamaSetup.exe Release Asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: dist/OllamaSetup.exe
asset_name: OllamaSetup.exe
asset_content_type: application/vnd.microsoft.portable-executable
# This step uploads the .exe file as a release asset.

1
.gitignore vendored
View File

@@ -8,6 +8,7 @@
dist
build
.cache
.gocache
*.exe
.idea
test_data

View File

@@ -1,5 +1,4 @@
run:
timeout: 5m
version: "2"
linters:
enable:
- asasalint
@@ -7,35 +6,46 @@ linters:
- bodyclose
- containedctx
- gocheckcompilerdirectives
- gofmt
- gofumpt
- gosimple
- govet
- ineffassign
- intrange
- makezero
- misspell
- nilerr
- nolintlint
- nosprintfhostport
- staticcheck
- unconvert
- usetesting
- wastedassign
- whitespace
disable:
- usestdlibvars
- errcheck
linters-settings:
staticcheck:
checks:
- all
- -SA1019 # omit Deprecated check
- usestdlibvars
settings:
govet:
disable:
- unusedresult
staticcheck:
checks:
- all
- -QF* # disable quick fix suggestions
- -SA1019
- -ST1000 # package comment format
- -ST1003 # underscores in package names
- -ST1005 # error strings should not be capitalized
- -ST1012 # error var naming (ErrFoo)
- -ST1016 # receiver name consistency
- -ST1020 # comment on exported function format
- -ST1021 # comment on exported type format
- -ST1022 # comment on exported var format
- -ST1023 # omit type from declaration
severity:
default-severity: error
default: error
rules:
- linters:
- gofmt
- goimports
- intrange
severity: info
formatters:
enable:
- gofmt
- gofumpt

View File

@@ -3,6 +3,7 @@ cmake_minimum_required(VERSION 3.21)
project(Ollama C CXX)
include(CheckLanguage)
include(GNUInstallDirs)
find_package(Threads REQUIRED)
@@ -37,7 +38,7 @@ if (CMAKE_OSX_ARCHITECTURES MATCHES "x86_64")
endif()
set(OLLAMA_BUILD_DIR ${CMAKE_BINARY_DIR}/lib/ollama)
set(OLLAMA_INSTALL_DIR ${CMAKE_INSTALL_PREFIX}/lib/ollama)
set(OLLAMA_INSTALL_DIR ${CMAKE_INSTALL_PREFIX}/lib/ollama/${OLLAMA_RUNNER_DIR})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${OLLAMA_BUILD_DIR})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY_DEBUG ${OLLAMA_BUILD_DIR})
@@ -51,7 +52,7 @@ include_directories(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/include
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-cpu)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-cpu/amx)
add_compile_definitions(NDEBUG)
add_compile_definitions(NDEBUG GGML_VERSION=0x0 GGML_COMMIT=0x0)
set(GGML_CPU ON)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src)
@@ -78,14 +79,13 @@ if(CMAKE_CUDA_COMPILER)
find_package(CUDAToolkit)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-cuda)
set(OLLAMA_CUDA_INSTALL_DIR ${OLLAMA_INSTALL_DIR}/cuda_v${CUDAToolkit_VERSION_MAJOR})
install(TARGETS ggml-cuda
RUNTIME_DEPENDENCIES
DIRECTORIES ${CUDAToolkit_BIN_DIR} ${CUDAToolkit_LIBRARY_DIR}
DIRECTORIES ${CUDAToolkit_BIN_DIR} ${CUDAToolkit_BIN_DIR}/x64 ${CUDAToolkit_LIBRARY_DIR}
PRE_INCLUDE_REGEXES cublas cublasLt cudart
PRE_EXCLUDE_REGEXES ".*"
RUNTIME DESTINATION ${OLLAMA_CUDA_INSTALL_DIR} COMPONENT CUDA
LIBRARY DESTINATION ${OLLAMA_CUDA_INSTALL_DIR} COMPONENT CUDA
RUNTIME DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT CUDA
LIBRARY DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT CUDA
)
endif()
@@ -99,14 +99,17 @@ check_language(HIP)
if(CMAKE_HIP_COMPILER)
set(HIP_PLATFORM "amd")
find_package(hip REQUIRED)
if(NOT AMDGPU_TARGETS)
list(FILTER AMDGPU_TARGETS INCLUDE REGEX "^gfx(803|900(:xnack-)|902|906(:xnack-)|90c(:xnack-)|1010(:xnack-)|1011(:xnack-)|1012(:xnack-)|103[0-6]|110[0-3]|115[01]|1201)$")
elseif(WIN32 AND WINDOWS_AMDGPU_TARGETS_EXCLUDE_REGEX)
find_package(hip REQUIRED)
list(FILTER AMDGPU_TARGETS INCLUDE REGEX "^gfx(803|90[012]|906(:xnack-)|90c(:xnack-)|1010(:xnack-)|1011(:xnack-)|1012(:xnack-)|103[0-6]|110[0-3]|115[0123]|120[01])$")
endif()
if(WIN32 AND WINDOWS_AMDGPU_TARGETS_EXCLUDE_REGEX)
list(FILTER AMDGPU_TARGETS EXCLUDE REGEX ${WINDOWS_AMDGPU_TARGETS_EXCLUDE_REGEX})
endif()
if(AMDGPU_TARGETS)
find_package(hip REQUIRED)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-hip)
if (WIN32)
@@ -115,22 +118,37 @@ if(CMAKE_HIP_COMPILER)
target_compile_definitions(ggml-hip PRIVATE GGML_HIP_NO_VMM)
set(OLLAMA_HIP_INSTALL_DIR ${OLLAMA_INSTALL_DIR}/rocm)
install(TARGETS ggml-hip
RUNTIME_DEPENDENCIES
RUNTIME_DEPENDENCY_SET rocm
RUNTIME DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT HIP
LIBRARY DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT HIP
)
install(RUNTIME_DEPENDENCY_SET rocm
DIRECTORIES ${HIP_BIN_INSTALL_DIR} ${HIP_LIB_INSTALL_DIR}
PRE_INCLUDE_REGEXES hipblas rocblas amdhip64 rocsolver amd_comgr hsa-runtime64 rocsparse tinfo rocprofiler-register drm drm_amdgpu numa elf
PRE_EXCLUDE_REGEXES ".*"
POST_EXCLUDE_REGEXES "system32"
RUNTIME DESTINATION ${OLLAMA_HIP_INSTALL_DIR} COMPONENT HIP
LIBRARY DESTINATION ${OLLAMA_HIP_INSTALL_DIR} COMPONENT HIP
RUNTIME DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT HIP
LIBRARY DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT HIP
)
foreach(HIP_LIB_BIN_INSTALL_DIR IN ITEMS ${HIP_BIN_INSTALL_DIR} ${HIP_LIB_INSTALL_DIR})
if(EXISTS ${HIP_LIB_BIN_INSTALL_DIR}/rocblas)
install(DIRECTORY ${HIP_LIB_BIN_INSTALL_DIR}/rocblas DESTINATION ${OLLAMA_HIP_INSTALL_DIR} COMPONENT HIP)
install(DIRECTORY ${HIP_LIB_BIN_INSTALL_DIR}/rocblas DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT HIP)
break()
endif()
endforeach()
endif()
endif()
find_package(Vulkan)
if(Vulkan_FOUND)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/ml/backend/ggml/ggml/src/ggml-vulkan)
install(TARGETS ggml-vulkan
RUNTIME_DEPENDENCIES
PRE_INCLUDE_REGEXES vulkan
PRE_EXCLUDE_REGEXES ".*"
RUNTIME DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT Vulkan
LIBRARY DESTINATION ${OLLAMA_INSTALL_DIR} COMPONENT Vulkan
)
endif()

View File

@@ -6,7 +6,8 @@
"binaryDir": "${sourceDir}/build",
"installDir": "${sourceDir}/dist",
"cacheVariables": {
"CMAKE_BUILD_TYPE": "Release"
"CMAKE_BUILD_TYPE": "Release",
"CMAKE_MSVC_RUNTIME_LIBRARY": "MultiThreaded"
}
},
{
@@ -21,30 +22,43 @@
"name": "CUDA 11",
"inherits": [ "CUDA" ],
"cacheVariables": {
"CMAKE_CUDA_ARCHITECTURES": "50;52;53;60;61;70;75;80;86",
"CMAKE_CUDA_FLAGS": "-Wno-deprecated-gpu-targets"
"CMAKE_CUDA_ARCHITECTURES": "50-virtual;60-virtual;61-virtual;70-virtual;75-virtual;80-virtual;86-virtual;87-virtual;89-virtual;90-virtual",
"CMAKE_CUDA_FLAGS": "-Wno-deprecated-gpu-targets -t 2",
"OLLAMA_RUNNER_DIR": "cuda_v11"
}
},
{
"name": "CUDA 12",
"inherits": [ "CUDA" ],
"cacheVariables": {
"CMAKE_CUDA_ARCHITECTURES": "50;60;61;70;75;80;86;87;89;90;90a;120",
"CMAKE_CUDA_FLAGS": "-Wno-deprecated-gpu-targets"
"CMAKE_CUDA_ARCHITECTURES": "50;52;60;61;70;75;80;86;89;90;90a;120",
"CMAKE_CUDA_FLAGS": "-Wno-deprecated-gpu-targets -t 2",
"OLLAMA_RUNNER_DIR": "cuda_v12"
}
},
{
"name": "CUDA 13",
"inherits": [ "CUDA" ],
"cacheVariables": {
"CMAKE_CUDA_ARCHITECTURES": "75-virtual;80-virtual;86-virtual;87-virtual;89-virtual;90-virtual;90a-virtual;100-virtual;103-virtual;110-virtual;120-virtual;121-virtual",
"CMAKE_CUDA_FLAGS": "-t 2",
"OLLAMA_RUNNER_DIR": "cuda_v13"
}
},
{
"name": "JetPack 5",
"inherits": [ "CUDA" ],
"cacheVariables": {
"CMAKE_CUDA_ARCHITECTURES": "72;87"
"CMAKE_CUDA_ARCHITECTURES": "72;87",
"OLLAMA_RUNNER_DIR": "cuda_jetpack5"
}
},
{
"name": "JetPack 6",
"inherits": [ "CUDA" ],
"cacheVariables": {
"CMAKE_CUDA_ARCHITECTURES": "87"
"CMAKE_CUDA_ARCHITECTURES": "87",
"OLLAMA_RUNNER_DIR": "cuda_jetpack6"
}
},
{
@@ -58,7 +72,16 @@
"name": "ROCm 6",
"inherits": [ "ROCm" ],
"cacheVariables": {
"AMDGPU_TARGETS": "gfx803;gfx902;gfx1030;gfx1031;gfx1032;gfx1034;gfx1035;gfx1036;gfx1100;gfx1101;gfx1102;gfx1103;gfx1150;gfx1201;gfx900:xnack-;gfx906:xnack-;gfx90c:xnack-;gfx1010:xnack-;gfx1011:xnack-;gfx1012:xnack-;"
"CMAKE_HIP_FLAGS": "-parallel-jobs=4",
"AMDGPU_TARGETS": "gfx1030;gfx1031;gfx1032;gfx1034;gfx1035;gfx1036;gfx1100;gfx1101;gfx1102;gfx1103;gfx1150;gfx1151;gfx1152;gfx1153;gfx1200;gfx1201;gfx900:xnack-;gfx906:xnack-;gfx90c:xnack-;gfx1010:xnack-;gfx1011:xnack-;gfx1012:xnack-",
"OLLAMA_RUNNER_DIR": "rocm"
}
},
{
"name": "Vulkan",
"inherits": [ "Default" ],
"cacheVariables": {
"OLLAMA_RUNNER_DIR": "vulkan"
}
}
],
@@ -88,6 +111,11 @@
"inherits": [ "CUDA" ],
"configurePreset": "CUDA 12"
},
{
"name": "CUDA 13",
"inherits": [ "CUDA" ],
"configurePreset": "CUDA 13"
},
{
"name": "JetPack 5",
"inherits": [ "CUDA" ],
@@ -107,6 +135,11 @@
"name": "ROCm 6",
"inherits": [ "ROCm" ],
"configurePreset": "ROCm 6"
},
{
"name": "Vulkan",
"targets": [ "ggml-vulkan" ],
"configurePreset": "Vulkan"
}
]
}

View File

@@ -16,7 +16,7 @@ See the [development documentation](./docs/development.md) for instructions on h
* New features: new features (e.g. API fields, environment variables) add surface area to Ollama and make it harder to maintain in the long run as they cannot be removed without potentially breaking users in the future.
* Refactoring: large code improvements are important, but can be harder or take longer to review and merge.
* Documentation: small updates to fill in or correct missing documentation is helpful, however large documentation additions can be hard to maintain over time.
* Documentation: small updates to fill in or correct missing documentation are helpful, however large documentation additions can be hard to maintain over time.
### Issues that may not be accepted
@@ -43,7 +43,7 @@ Tips for proposals:
* Explain how the change will be tested.
Additionally, for bonus points: Provide draft documentation you would expect to
see if the change were accepted.
see if the changes were accepted.
## Pull requests
@@ -65,7 +65,7 @@ continuation of the sentence:
Examples:
llm/backend/mlx: support the llama architecture
CONTRIBUTING: provide clairity on good commit messages, and bad
CONTRIBUTING: provide clarity on good commit messages, and bad
Bad Examples:

View File

@@ -1,20 +1,33 @@
# vim: filetype=dockerfile
ARG FLAVOR=${TARGETARCH}
ARG PARALLEL=8
ARG ROCMVERSION=6.3.3
ARG JETPACK5VERSION=r35.4.1
ARG JETPACK6VERSION=r36.4.0
ARG CMAKEVERSION=3.31.2
ARG VULKANVERSION=1.4.321.1
# CUDA v11 requires gcc v10. v10.3 has regressions, so the rockylinux 8.5 AppStream has the latest compatible version
# We require gcc v10 minimum. v10.3 has regressions, so the rockylinux 8.5 AppStream has the latest compatible version
FROM --platform=linux/amd64 rocm/dev-almalinux-8:${ROCMVERSION}-complete AS base-amd64
RUN yum install -y yum-utils \
&& yum-config-manager --add-repo https://dl.rockylinux.org/vault/rocky/8.5/AppStream/\$basearch/os/ \
&& rpm --import https://dl.rockylinux.org/pub/rocky/RPM-GPG-KEY-Rocky-8 \
&& dnf install -y yum-utils ccache gcc-toolset-10-gcc-10.2.1-8.2.el8 gcc-toolset-10-gcc-c++-10.2.1-8.2.el8 gcc-toolset-10-binutils-2.35-11.el8 \
&& dnf install -y ccache \
&& yum-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo
ENV PATH=/opt/rh/gcc-toolset-10/root/usr/bin:$PATH
ARG VULKANVERSION
RUN wget https://sdk.lunarg.com/sdk/download/${VULKANVERSION}/linux/vulkansdk-linux-x86_64-${VULKANVERSION}.tar.xz -O /tmp/vulkansdk-linux-x86_64-${VULKANVERSION}.tar.xz \
&& tar xvf /tmp/vulkansdk-linux-x86_64-${VULKANVERSION}.tar.xz \
&& dnf -y install ninja-build \
&& ln -s /usr/bin/python3 /usr/bin/python \
&& /${VULKANVERSION}/vulkansdk -j 8 vulkan-headers \
&& /${VULKANVERSION}/vulkansdk -j 8 shaderc
RUN cp -r /${VULKANVERSION}/x86_64/include/* /usr/local/include/ \
&& cp -r /${VULKANVERSION}/x86_64/lib/* /usr/local/lib
ENV PATH=/${VULKANVERSION}/x86_64/bin:$PATH
FROM --platform=linux/arm64 almalinux:8 AS base-arm64
# install epel-release for ccache
@@ -26,42 +39,67 @@ ENV CC=clang CXX=clang++
FROM base-${TARGETARCH} AS base
ARG CMAKEVERSION
RUN curl -fsSL https://github.com/Kitware/CMake/releases/download/v${CMAKEVERSION}/cmake-${CMAKEVERSION}-linux-$(uname -m).tar.gz | tar xz -C /usr/local --strip-components 1
COPY CMakeLists.txt CMakePresets.json .
COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
ENV LDFLAGS=-s
FROM base AS cpu
RUN dnf install -y gcc-toolset-11-gcc gcc-toolset-11-gcc-c++
ENV PATH=/opt/rh/gcc-toolset-11/root/usr/bin:$PATH
ARG PARALLEL
COPY CMakeLists.txt CMakePresets.json .
COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
RUN --mount=type=cache,target=/root/.ccache \
cmake --preset 'CPU' \
&& cmake --build --parallel --preset 'CPU' \
&& cmake --install build --component CPU --strip --parallel 8
&& cmake --build --parallel ${PARALLEL} --preset 'CPU' \
&& cmake --install build --component CPU --strip --parallel ${PARALLEL}
FROM base AS cuda-11
ARG CUDA11VERSION=11.3
ARG CUDA11VERSION=11.8
RUN dnf install -y cuda-toolkit-${CUDA11VERSION//./-}
ENV PATH=/usr/local/cuda-11/bin:$PATH
ARG PARALLEL
COPY CMakeLists.txt CMakePresets.json .
COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
RUN --mount=type=cache,target=/root/.ccache \
cmake --preset 'CUDA 11' \
&& cmake --build --parallel --preset 'CUDA 11' \
&& cmake --install build --component CUDA --strip --parallel 8
&& cmake --build --parallel ${PARALLEL} --preset 'CUDA 11' \
&& cmake --install build --component CUDA --strip --parallel ${PARALLEL}
FROM base AS cuda-12
ARG CUDA12VERSION=12.8
RUN dnf install -y cuda-toolkit-${CUDA12VERSION//./-}
ENV PATH=/usr/local/cuda-12/bin:$PATH
ARG PARALLEL
COPY CMakeLists.txt CMakePresets.json .
COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
RUN --mount=type=cache,target=/root/.ccache \
cmake --preset 'CUDA 12' \
&& cmake --build --parallel --preset 'CUDA 12' \
&& cmake --install build --component CUDA --strip --parallel 8
&& cmake --build --parallel ${PARALLEL} --preset 'CUDA 12' \
&& cmake --install build --component CUDA --strip --parallel ${PARALLEL}
FROM base AS cuda-13
ARG CUDA13VERSION=13.0
RUN dnf install -y cuda-toolkit-${CUDA13VERSION//./-}
ENV PATH=/usr/local/cuda-13/bin:$PATH
ARG PARALLEL
COPY CMakeLists.txt CMakePresets.json .
COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
RUN --mount=type=cache,target=/root/.ccache \
cmake --preset 'CUDA 13' \
&& cmake --build --parallel ${PARALLEL} --preset 'CUDA 13' \
&& cmake --install build --component CUDA --strip --parallel ${PARALLEL}
FROM base AS rocm-6
ENV PATH=/opt/rocm/hcc/bin:/opt/rocm/hip/bin:/opt/rocm/bin:/opt/rocm/hcc/bin:$PATH
ARG PARALLEL
COPY CMakeLists.txt CMakePresets.json .
COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
RUN --mount=type=cache,target=/root/.ccache \
cmake --preset 'ROCm 6' \
&& cmake --build --parallel --preset 'ROCm 6' \
&& cmake --install build --component HIP --strip --parallel 8
&& cmake --build --parallel ${PARALLEL} --preset 'ROCm 6' \
&& cmake --install build --component HIP --strip --parallel ${PARALLEL}
RUN rm -f dist/lib/ollama/rocm/rocblas/library/*gfx90[06]*
FROM --platform=linux/arm64 nvcr.io/nvidia/l4t-jetpack:${JETPACK5VERSION} AS jetpack-5
ARG CMAKEVERSION
@@ -69,10 +107,11 @@ RUN apt-get update && apt-get install -y curl ccache \
&& curl -fsSL https://github.com/Kitware/CMake/releases/download/v${CMAKEVERSION}/cmake-${CMAKEVERSION}-linux-$(uname -m).tar.gz | tar xz -C /usr/local --strip-components 1
COPY CMakeLists.txt CMakePresets.json .
COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
ARG PARALLEL
RUN --mount=type=cache,target=/root/.ccache \
cmake --preset 'JetPack 5' \
&& cmake --build --parallel --preset 'JetPack 5' \
&& cmake --install build --component CUDA --strip --parallel 8
&& cmake --build --parallel ${PARALLEL} --preset 'JetPack 5' \
&& cmake --install build --component CUDA --strip --parallel ${PARALLEL}
FROM --platform=linux/arm64 nvcr.io/nvidia/l4t-jetpack:${JETPACK6VERSION} AS jetpack-6
ARG CMAKEVERSION
@@ -80,10 +119,20 @@ RUN apt-get update && apt-get install -y curl ccache \
&& curl -fsSL https://github.com/Kitware/CMake/releases/download/v${CMAKEVERSION}/cmake-${CMAKEVERSION}-linux-$(uname -m).tar.gz | tar xz -C /usr/local --strip-components 1
COPY CMakeLists.txt CMakePresets.json .
COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
ARG PARALLEL
RUN --mount=type=cache,target=/root/.ccache \
cmake --preset 'JetPack 6' \
&& cmake --build --parallel --preset 'JetPack 6' \
&& cmake --install build --component CUDA --strip --parallel 8
&& cmake --build --parallel ${PARALLEL} --preset 'JetPack 6' \
&& cmake --install build --component CUDA --strip --parallel ${PARALLEL}
FROM base AS vulkan
COPY CMakeLists.txt CMakePresets.json .
COPY ml/backend/ggml/ggml ml/backend/ggml/ggml
RUN --mount=type=cache,target=/root/.ccache \
cmake --preset 'Vulkan' \
&& cmake --build --parallel --preset 'Vulkan' \
&& cmake --install build --component Vulkan --strip --parallel 8
FROM base AS build
WORKDIR /go/src/github.com/ollama/ollama
@@ -94,29 +143,35 @@ RUN go mod download
COPY . .
ARG GOFLAGS="'-ldflags=-w -s'"
ENV CGO_ENABLED=1
ARG CGO_CFLAGS
ARG CGO_CXXFLAGS
RUN --mount=type=cache,target=/root/.cache/go-build \
go build -trimpath -buildmode=pie -o /bin/ollama .
FROM --platform=linux/amd64 scratch AS amd64
COPY --from=cuda-11 dist/lib/ollama/cuda_v11 /lib/ollama/cuda_v11
COPY --from=cuda-12 dist/lib/ollama/cuda_v12 /lib/ollama/cuda_v12
# COPY --from=cuda-11 dist/lib/ollama/ /lib/ollama/
COPY --from=cuda-12 dist/lib/ollama /lib/ollama/
COPY --from=cuda-13 dist/lib/ollama /lib/ollama/
COPY --from=vulkan dist/lib/ollama /lib/ollama/
FROM --platform=linux/arm64 scratch AS arm64
COPY --from=cuda-11 dist/lib/ollama/cuda_v11 /lib/ollama/cuda_v11
COPY --from=cuda-12 dist/lib/ollama/cuda_v12 /lib/ollama/cuda_v12
COPY --from=jetpack-5 dist/lib/ollama/cuda_v11 /lib/ollama/cuda_jetpack5
COPY --from=jetpack-6 dist/lib/ollama/cuda_v12 /lib/ollama/cuda_jetpack6
# COPY --from=cuda-11 dist/lib/ollama/ /lib/ollama/
COPY --from=cuda-12 dist/lib/ollama /lib/ollama/
COPY --from=cuda-13 dist/lib/ollama/ /lib/ollama/
COPY --from=jetpack-5 dist/lib/ollama/ /lib/ollama/
COPY --from=jetpack-6 dist/lib/ollama/ /lib/ollama/
FROM scratch AS rocm
COPY --from=rocm-6 dist/lib/ollama/rocm /lib/ollama/rocm
COPY --from=rocm-6 dist/lib/ollama /lib/ollama
FROM ${FLAVOR} AS archive
ARG VULKANVERSION
COPY --from=cpu dist/lib/ollama /lib/ollama
COPY --from=build /bin/ollama /bin/ollama
FROM ubuntu:20.04
FROM ubuntu:24.04
RUN apt-get update \
&& apt-get install -y ca-certificates \
&& apt-get install -y ca-certificates libvulkan1 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
COPY --from=archive /bin /usr/bin

View File

@@ -1,6 +1,6 @@
UPSTREAM=https://github.com/ggerganov/llama.cpp.git
UPSTREAM=https://github.com/ggml-org/llama.cpp.git
WORKDIR=llama/vendor
FETCH_HEAD=de4c07f93783a1a96456a44dc16b9db538ee1618
FETCH_HEAD=7f8ef50cce40e3e7e4526a3696cb45658190e69a
.PHONY: help
help:
@@ -12,7 +12,7 @@ help:
@echo " clean Clean local repository"
@echo
@echo "Example:"
@echo " make -f $(lastword $(MAKEFILE_LIST)) clean sync"
@echo " make -f $(lastword $(MAKEFILE_LIST)) clean apply-patches sync"
.PHONY: sync
sync: llama/build-info.cpp ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.metal
@@ -24,12 +24,12 @@ ml/backend/ggml/ggml/src/ggml-metal/ggml-metal-embed.metal: ml/backend/ggml/ggml
go generate ./$(@D)
.PHONY: llama/llama.cpp
llama/llama.cpp: llama/vendor/
rsync -arvzc -f "merge $@/.rsync-filter" $< $@
llama/llama.cpp: llama/vendor
rsync -arvzc --delete -f "include LICENSE" -f "merge $@/.rsync-filter" $(addprefix $<,/LICENSE /) $@
.PHONY: ml/backend/ggml/ggml
ml/backend/ggml/ggml: llama/vendor/ggml/
rsync -arvzc -f "merge $@/.rsync-filter" $< $@
ml/backend/ggml/ggml: llama/vendor
rsync -arvzc --delete -f "include LICENSE" -f "merge $@/.rsync-filter" $(addprefix $<,/LICENSE /ggml/) $@
PATCHES=$(wildcard llama/patches/*.patch)
PATCHED=$(join $(dir $(PATCHES)), $(addsuffix ed, $(addprefix ., $(notdir $(PATCHES)))))
@@ -39,7 +39,15 @@ PATCHED=$(join $(dir $(PATCHES)), $(addsuffix ed, $(addprefix ., $(notdir $(PATC
apply-patches: $(PATCHED)
llama/patches/.%.patched: llama/patches/%.patch
@if git -c user.name=nobody -c 'user.email=<>' -C $(WORKDIR) am -3 $(realpath $<); then touch $@; else git -C $(WORKDIR) am --abort; exit 1; fi
@if git -c user.name=nobody -c 'user.email=<>' -C $(WORKDIR) am -3 $(realpath $<); then \
touch $@; \
else \
echo "Patch failed. Resolve any conflicts then continue."; \
echo "1. Run 'git -C $(WORKDIR) am --continue'"; \
echo "2. Run 'make -f $(lastword $(MAKEFILE_LIST)) format-patches'"; \
echo "3. Run 'make -f $(lastword $(MAKEFILE_LIST)) clean apply-patches'"; \
exit 1; \
fi
.PHONY: checkout
checkout: $(WORKDIR)
@@ -60,4 +68,5 @@ format-patches: llama/patches
.PHONY: clean
clean: checkout
@git -C $(WORKDIR) am --abort || true
$(RM) llama/patches/.*.patched

View File

@@ -1,6 +1,6 @@
<div align="center">
  <a href="https://ollama.com">
<img alt="ollama" height="200px" src="https://github.com/ollama/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
<img alt="ollama" width="240" src="https://github.com/ollama/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
</a>
</div>
@@ -10,7 +10,7 @@ Get up and running with large language models.
### macOS
[Download](https://ollama.com/download/Ollama-darwin.zip)
[Download](https://ollama.com/download/Ollama.dmg)
### Windows
@@ -42,7 +42,7 @@ support lists. Explore it through a self-build as guided on the wiki.
curl -fsSL https://ollama.com/install.sh | sh
```
[Manual install instructions](https://github.com/ollama/ollama/blob/main/docs/linux.md)
[Manual install instructions](https://docs.ollama.com/linux#manual-install)
[Configuring Environment Variables Tip For Unsupported GPUs](https://github.com/likelovewant/ollama-for-amd/wiki#troubleshooting-amd-gpu-support-in-linux)
@@ -62,10 +62,10 @@ The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `olla
## Quickstart
To run and chat with [Llama 3.2](https://ollama.com/library/llama3.2):
To run and chat with [Gemma 3](https://ollama.com/library/gemma3):
```shell
ollama run llama3.2
ollama run gemma3
```
## Model library
@@ -132,7 +132,7 @@ Ollama supports importing GGUF models in the Modelfile:
### Import from Safetensors
See the [guide](docs/import.md) on importing models for more information.
See the [guide](https://docs.ollama.com/import) on importing models for more information.
### Customize a prompt
@@ -165,7 +165,7 @@ ollama run mario
Hello! It's your friend Mario.
```
For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.
For more information on working with a Modelfile, see the [Modelfile](https://docs.ollama.com/modelfile) documentation.
## CLI Reference
@@ -248,6 +248,18 @@ ollama ps
ollama stop llama3.2
```
### Generate embeddings from the CLI
```shell
ollama run embeddinggemma "Your text to embed"
```
You can also pipe text for scripted workflows:
```shell
echo "Your text to embed" | ollama run embeddinggemma
```
### Start Ollama
`ollama serve` is used when you want to start ollama without running the desktop application.
@@ -304,10 +316,12 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [SwiftChat (macOS with ReactNative)](https://github.com/aws-samples/swift-chat)
- [Enchanted (macOS native)](https://github.com/AugustDev/enchanted)
- [Hollama](https://github.com/fmaclen/hollama)
- [Lollms-Webui](https://github.com/ParisNeo/lollms-webui)
- [Lollms WebUI (Single user)](https://github.com/ParisNeo/lollms-webui)
- [Lollms (Multi users)](https://github.com/ParisNeo/lollms)
- [LibreChat](https://github.com/danny-avila/LibreChat)
- [Bionic GPT](https://github.com/bionic-gpt/bionic-gpt)
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [AI-UI](https://github.com/bajahaw/ai-ui)
- [Saddle](https://github.com/jikkuatwork/saddle)
- [TagSpaces](https://www.tagspaces.org) (A platform for file-based apps, [utilizing Ollama](https://docs.tagspaces.org/ai/) for the generation of tags and descriptions)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
@@ -374,7 +388,8 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [PartCAD](https://github.com/openvmp/partcad/) (CAD model generation with OpenSCAD and CadQuery)
- [Ollama4j Web UI](https://github.com/ollama4j/ollama4j-web-ui) - Java-based Web UI for Ollama built with Vaadin, Spring Boot, and Ollama4j
- [PyOllaMx](https://github.com/kspviswa/pyOllaMx) - macOS application capable of chatting with both Ollama and Apple MLX models.
- [Cline](https://github.com/cline/cline) - Formerly known as Claude Dev is a VSCode extension for multi-file/whole-repo coding
- [Cline](https://github.com/cline/cline) - Formerly known as Claude Dev is a VS Code extension for multi-file/whole-repo coding
- [Void](https://github.com/voideditor/void) (Open source AI code editor and Cursor alternative)
- [Cherry Studio](https://github.com/kangfenmao/cherry-studio) (Desktop client with Ollama support)
- [ConfiChat](https://github.com/1runeberg/confichat) (Lightweight, standalone, multi-platform, and privacy-focused LLM chat interface with optional encryption)
- [Archyve](https://github.com/nickthecook/archyve) (RAG-enabling document library)
@@ -382,7 +397,7 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [Tkinter-based client](https://github.com/chyok/ollama-gui) (Python tkinter-based Client for Ollama)
- [LLMChat](https://github.com/trendy-design/llmchat) (Privacy focused, 100% local, intuitive all-in-one chat interface)
- [Local Multimodal AI Chat](https://github.com/Leon-Sander/Local-Multimodal-AI-Chat) (Ollama-based LLM Chat with support for multiple features, including PDF RAG, voice chat, image-based interactions, and integration with OpenAI.)
- [ARGO](https://github.com/xark-argo/argo) (Locally download and run Ollama and Huggingface models with RAG on Mac/Windows/Linux)
- [ARGO](https://github.com/xark-argo/argo) (Locally download and run Ollama and Huggingface models with RAG and deep research on Mac/Windows/Linux)
- [OrionChat](https://github.com/EliasPereirah/OrionChat) - OrionChat is a web interface for chatting with different AI providers
- [G1](https://github.com/bklieger-groq/g1) (Prototype of using prompting strategies to improve the LLM's reasoning through o1-like reasoning chains.)
- [Web management](https://github.com/lemonit-eric-mao/ollama-web-management) (Web management page)
@@ -406,7 +421,7 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [aidful-ollama-model-delete](https://github.com/AidfulAI/aidful-ollama-model-delete) (User interface for simplified model cleanup)
- [Perplexica](https://github.com/ItzCrazyKns/Perplexica) (An AI-powered search engine & an open-source alternative to Perplexity AI)
- [Ollama Chat WebUI for Docker ](https://github.com/oslook/ollama-webui) (Support for local docker deployment, lightweight ollama webui)
- [AI Toolkit for Visual Studio Code](https://aka.ms/ai-tooklit/ollama-docs) (Microsoft-official VSCode extension to chat, test, evaluate models with Ollama support, and use them in your AI applications.)
- [AI Toolkit for Visual Studio Code](https://aka.ms/ai-tooklit/ollama-docs) (Microsoft-official VS Code extension to chat, test, evaluate models with Ollama support, and use them in your AI applications.)
- [MinimalNextOllamaChat](https://github.com/anilkay/MinimalNextOllamaChat) (Minimal Web UI for Chat and Model Control)
- [Chipper](https://github.com/TilmanGriesel/chipper) AI interface for tinkerers (Ollama, Haystack RAG, Python)
- [ChibiChat](https://github.com/CosmicEventHorizon/ChibiChat) (Kotlin-based Android app to chat with Ollama and Koboldcpp API endpoints)
@@ -429,6 +444,16 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [Lumina](https://github.com/cushydigit/lumina.git) (A lightweight, minimal React.js frontend for interacting with Ollama servers)
- [Tiny Notepad](https://pypi.org/project/tiny-notepad) (A lightweight, notepad-like interface to chat with ollama available on PyPI)
- [macLlama (macOS native)](https://github.com/hellotunamayo/macLlama) (A native macOS GUI application for interacting with Ollama models, featuring a chat interface.)
- [GPTranslate](https://github.com/philberndt/GPTranslate) (A fast and lightweight, AI powered desktop translation application written with Rust and Tauri. Features real-time translation with OpenAI/Azure/Ollama.)
- [ollama launcher](https://github.com/NGC13009/ollama-launcher) (A launcher for Ollama, aiming to provide users with convenient functions such as ollama server launching, management, or configuration.)
- [ai-hub](https://github.com/Aj-Seven/ai-hub) (AI Hub supports multiple models via API keys and Chat support via Ollama API.)
- [Mayan EDMS](https://gitlab.com/mayan-edms/mayan-edms) (Open source document management system to organize, tag, search, and automate your files with powerful Ollama driven workflows.)
- [Serene Pub](https://github.com/doolijb/serene-pub) (Beginner friendly, open source AI Roleplaying App for Windows, Mac OS and Linux. Search, download and use models with Ollama all inside the app.)
- [Andes](https://github.com/aqerd/andes) (A Visual Studio Code extension that provides a local UI interface for Ollama models)
- [KDeps](https://github.com/kdeps/kdeps) (Kdeps is an offline-first AI framework for building Dockerized full-stack AI applications declaratively using Apple PKL and integrates APIs with Ollama on the backend.)
- [Clueless](https://github.com/KashyapTan/clueless) (Open Source & Local Cluely: A desktop application LLM assistant to help you talk to anything on your screen using locally served Ollama models. Also undetectable to screenshare)
- [ollama-co2](https://github.com/carbonatedWaterOrg/ollama-co2) (FastAPI web interface for monitoring and managing local and remote Ollama servers with real-time model monitoring and concurrent downloads)
- [Hillnote](https://hillnote.com) (A Markdown-first workspace designed to supercharge your AI workflow. Create documents ready to integrate with Claude, ChatGPT, Gemini, Cursor, and more - all while keeping your work on your device.)
### Cloud
@@ -436,6 +461,10 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [Fly.io](https://fly.io/docs/python/do-more/add-ollama/)
- [Koyeb](https://www.koyeb.com/deploy/ollama)
### Tutorial
- [handy-ollama](https://github.com/datawhalechina/handy-ollama) (Chinese Tutorial for Ollama by [Datawhale ](https://github.com/datawhalechina) - China's Largest Open Source AI Learning Community)
### Terminal
- [oterm](https://github.com/ggozad/oterm)
@@ -473,6 +502,10 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [orca-cli](https://github.com/molbal/orca-cli) Ollama Registry CLI Application - Browse, pull, and download models from Ollama Registry in your terminal.
- [GGUF-to-Ollama](https://github.com/jonathanhecl/gguf-to-ollama) - Importing GGUF to Ollama made easy (multiplatform)
- [AWS-Strands-With-Ollama](https://github.com/rapidarchitect/ollama_strands) - AWS Strands Agents with Ollama Examples
- [ollama-multirun](https://github.com/attogram/ollama-multirun) - A bash shell script to run a single prompt against any or all of your locally installed ollama models, saving the output and performance statistics as easily navigable web pages. ([Demo](https://attogram.github.io/ai_test_zone/))
- [ollama-bash-toolshed](https://github.com/attogram/ollama-bash-toolshed) - Bash scripts to chat with tool using models. Add new tools to your shed with ease. Runs on Ollama.
- [hle-eval-ollama](https://github.com/mags0ft/hle-eval-ollama) - Runs benchmarks like "Humanity's Last Exam" (HLE) on your favorite local Ollama models and evaluates the quality of their responses
- [VT Code](https://github.com/vinhnx/vtcode) - VT Code is a Rust-based terminal coding agent with semantic code intelligence via Tree-sitter. Ollama integration for running local/cloud models with configurable endpoints.
### Apple Vision Pro
@@ -503,6 +536,7 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [Firebase Genkit](https://firebase.google.com/docs/genkit/plugins/ollama)
- [crewAI](https://github.com/crewAIInc/crewAI)
- [Yacana](https://remembersoftwares.github.io/yacana/) (User-friendly multi-agent framework for brainstorming and executing predetermined flows with built-in tool integration)
- [Strands Agents](https://github.com/strands-agents/sdk-python) (A model-driven approach to building AI agents in just a few lines of code)
- [Spring AI](https://github.com/spring-projects/spring-ai) with [reference](https://docs.spring.io/spring-ai/reference/api/chat/ollama-chat.html) and [example](https://github.com/tzolov/ollama-tools)
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
- [LangChain4j](https://github.com/langchain4j/langchain4j) with [example](https://github.com/langchain4j/langchain4j-examples/tree/main/ollama-examples/src/main/java)
@@ -553,6 +587,11 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [Nichey](https://github.com/goodreasonai/nichey) is a Python package for generating custom wikis for your research topic
- [Ollama for D](https://github.com/kassane/ollama-d)
- [OllamaPlusPlus](https://github.com/HardCodeDev777/OllamaPlusPlus) (Very simple C++ library for Ollama)
- [any-llm](https://github.com/mozilla-ai/any-llm) (A single interface to use different llm providers by [mozilla.ai](https://www.mozilla.ai/))
- [any-agent](https://github.com/mozilla-ai/any-agent) (A single interface to use and evaluate different agent frameworks by [mozilla.ai](https://www.mozilla.ai/))
- [Neuro SAN](https://github.com/cognizant-ai-lab/neuro-san-studio) (Data-driven multi-agent orchestration framework) with [example](https://github.com/cognizant-ai-lab/neuro-san-studio/blob/main/docs/user_guide.md#ollama)
- [achatbot-go](https://github.com/ai-bot-pro/achatbot-go) a multimodal (text/audio/image) chatbot.
- [Ollama Bash Lib](https://github.com/attogram/ollama-bash-lib) - A Bash Library for Ollama. Run LLM prompts straight from your shell, and more
### Mobile
@@ -598,27 +637,33 @@ See the [API documentation](./docs/api.md) for all endpoints.
- [Terraform AWS Ollama & Open WebUI](https://github.com/xuyangbocn/terraform-aws-self-host-llm) (A Terraform module to deploy on AWS a ready-to-use Ollama service, together with its front-end Open WebUI service.)
- [node-red-contrib-ollama](https://github.com/jakubburkiewicz/node-red-contrib-ollama)
- [Local AI Helper](https://github.com/ivostoykov/localAI) (Chrome and Firefox extensions that enable interactions with the active tab and customisable API endpoints. Includes secure storage for user prompts.)
- [vnc-lm](https://github.com/jake83741/vnc-lm) (Discord bot for messaging with LLMs through Ollama and LiteLLM. Seamlessly move between local and flagship models.)
- [LSP-AI](https://github.com/SilasMarvin/lsp-ai) (Open-source language server for AI-powered functionality)
- [QodeAssist](https://github.com/Palm1r/QodeAssist) (AI-powered coding assistant plugin for Qt Creator)
- [Obsidian Quiz Generator plugin](https://github.com/ECuiDev/obsidian-quiz-generator)
- [AI Summmary Helper plugin](https://github.com/philffm/ai-summary-helper)
- [AI Summary Helper plugin](https://github.com/philffm/ai-summary-helper)
- [TextCraft](https://github.com/suncloudsmoon/TextCraft) (Copilot in Word alternative using Ollama)
- [Alfred Ollama](https://github.com/zeitlings/alfred-ollama) (Alfred Workflow)
- [TextLLaMA](https://github.com/adarshM84/TextLLaMA) A Chrome Extension that helps you write emails, correct grammar, and translate into any language
- [Simple-Discord-AI](https://github.com/zyphixor/simple-discord-ai)
- [LLM Telegram Bot](https://github.com/innightwolfsleep/llm_telegram_bot) (telegram bot, primary for RP. Oobabooga-like buttons, [A1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) API integration e.t.c)
- [mcp-llm](https://github.com/sammcj/mcp-llm) (MCP Server to allow LLMs to call other LLMs)
- [UnityCodeLama](https://github.com/HardCodeDev777/UnityCodeLama) (Unity Edtior tool to analyze scripts via Ollama)
- [SimpleOllamaUnity](https://github.com/HardCodeDev777/SimpleOllamaUnity) (Unity Engine extension for communicating with Ollama in a few lines of code. Also works at runtime)
- [UnityCodeLama](https://github.com/HardCodeDev777/UnityCodeLama) (Unity Editor tool to analyze scripts via Ollama)
- [NativeMind](https://github.com/NativeMindBrowser/NativeMindExtension) (Private, on-device AI Assistant, no cloud dependencies)
- [GMAI - Gradle Managed AI](https://gmai.premex.se/) (Gradle plugin for automated Ollama lifecycle management during build phases)
- [NOMYO Router](https://github.com/nomyo-ai/nomyo-router) (A transparent Ollama proxy with model deployment aware routing which auto-manages multiple Ollama instances in a given network)
### Supported backends
- [llama.cpp](https://github.com/ggerganov/llama.cpp) project founded by Georgi Gerganov.
- [llama.cpp](https://github.com/ggml-org/llama.cpp) project founded by Georgi Gerganov.
### Observability
- [Opik](https://www.comet.com/docs/opik/cookbook/ollama) is an open-source platform to debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards. Opik supports native intergration to Ollama.
- [Opik](https://www.comet.com/docs/opik/cookbook/ollama) is an open-source platform to debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards. Opik supports native integration to Ollama.
- [Lunary](https://lunary.ai/docs/integrations/ollama) is the leading open-source LLM observability platform. It provides a variety of enterprise-grade features such as real-time analytics, prompt templates management, PII masking, and comprehensive agent tracing.
- [OpenLIT](https://github.com/openlit/openlit) is an OpenTelemetry-native tool for monitoring Ollama Applications & GPUs using traces and metrics.
- [HoneyHive](https://docs.honeyhive.ai/integrations/ollama) is an AI observability and evaluation platform for AI agents. Use HoneyHive to evaluate agent performance, interrogate failures, and monitor quality in production.
- [Langfuse](https://langfuse.com/docs/integrations/ollama) is an open source LLM observability platform that enables teams to collaboratively monitor, evaluate and debug AI applications.
- [MLflow Tracing](https://mlflow.org/docs/latest/llms/tracing/index.html#automatic-tracing) is an open source LLM observability tool with a convenient API to log and visualize traces, making it easy to debug and evaluate GenAI applications.
### Security
- [Ollama Fortress](https://github.com/ParisNeo/ollama_proxy_server)

View File

@@ -14,7 +14,7 @@ Please include the following details in your report:
## Security best practices
While the maintainer team does their best to secure Ollama, users are encouraged to implement their own security best practices, such as:
While the maintainer team does its best to secure Ollama, users are encouraged to implement their own security best practices, such as:
- Regularly updating to the latest version of Ollama
- Securing access to hosted instances of Ollama

View File

@@ -45,6 +45,12 @@ func checkError(resp *http.Response, body []byte) error {
return nil
}
if resp.StatusCode == http.StatusUnauthorized {
authError := AuthorizationError{StatusCode: resp.StatusCode}
json.Unmarshal(body, &authError)
return authError
}
apiError := StatusError{StatusCode: resp.StatusCode}
err := json.Unmarshal(body, &apiError)
@@ -214,19 +220,29 @@ func (c *Client) stream(ctx context.Context, method, path string, data any, fn f
scanner.Buffer(scanBuf, maxBufferSize)
for scanner.Scan() {
var errorResponse struct {
Error string `json:"error,omitempty"`
Error string `json:"error,omitempty"`
SigninURL string `json:"signin_url,omitempty"`
}
bts := scanner.Bytes()
if err := json.Unmarshal(bts, &errorResponse); err != nil {
return fmt.Errorf("unmarshal: %w", err)
if response.StatusCode >= http.StatusBadRequest {
return StatusError{
StatusCode: response.StatusCode,
Status: response.Status,
ErrorMessage: string(bts),
}
}
return errors.New(string(bts))
}
if errorResponse.Error != "" {
return errors.New(errorResponse.Error)
}
if response.StatusCode >= http.StatusBadRequest {
if response.StatusCode == http.StatusUnauthorized {
return AuthorizationError{
StatusCode: response.StatusCode,
Status: response.Status,
SigninURL: errorResponse.SigninURL,
}
} else if response.StatusCode >= http.StatusBadRequest {
return StatusError{
StatusCode: response.StatusCode,
Status: response.Status,
@@ -234,6 +250,10 @@ func (c *Client) stream(ctx context.Context, method, path string, data any, fn f
}
}
if errorResponse.Error != "" {
return errors.New(errorResponse.Error)
}
if err := fn(bts); err != nil {
return err
}
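With this change, a non-JSON body on a failing status code (for example an HTML error page from a proxy) surfaces as a `StatusError` carrying the raw body instead of an `unmarshal:` failure. A minimal sketch of how a caller might observe that, assuming a standard `api.Client`:

```go
package main

import (
	"context"
	"errors"
	"fmt"

	"github.com/ollama/ollama/api"
)

func chatWithRawErrors(ctx context.Context, client *api.Client, req *api.ChatRequest) {
	err := client.Chat(ctx, req, func(api.ChatResponse) error { return nil })
	var statusErr api.StatusError
	if errors.As(err, &statusErr) {
		// ErrorMessage carries the raw body when it wasn't valid JSON,
		// for example an HTML error page returned by a proxy.
		fmt.Println(statusErr.StatusCode, statusErr.ErrorMessage)
	}
}
```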
@@ -428,3 +448,21 @@ func (c *Client) Version(ctx context.Context) (string, error) {
return version.Version, nil
}
// Signout will signout a client for a local ollama server.
func (c *Client) Signout(ctx context.Context) error {
return c.do(ctx, http.MethodPost, "/api/signout", nil, nil)
}
// Disconnect will disconnect an ollama instance from ollama.com.
func (c *Client) Disconnect(ctx context.Context, encodedKey string) error {
return c.do(ctx, http.MethodDelete, fmt.Sprintf("/api/user/keys/%s", encodedKey), nil, nil)
}
func (c *Client) Whoami(ctx context.Context) (*UserResponse, error) {
var resp UserResponse
if err := c.do(ctx, http.MethodPost, "/api/me", nil, &resp); err != nil {
return nil, err
}
return &resp, nil
}
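A minimal sketch of the new account helpers, assuming a default client from the environment (`UserResponse` fields are not shown in this diff, so the struct is printed as-is):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ollama/ollama/api"
)

func main() {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// Ask the local server which ollama.com account it is signed in as.
	user, err := client.Whoami(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("signed in: %+v\n", user)

	// Sign the local server out again.
	if err := client.Signout(ctx); err != nil {
		log.Fatal(err)
	}
}
```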

View File

@@ -55,6 +55,7 @@ func TestClientFromEnvironment(t *testing.T) {
type testError struct {
message string
statusCode int
raw bool // if true, write message as-is instead of JSON encoding
}
func (e testError) Error() string {
@@ -89,6 +90,16 @@ func TestClientStream(t *testing.T) {
},
wantErr: "mid-stream error",
},
{
name: "http status error takes precedence over general error",
responses: []any{
testError{
message: "custom error message",
statusCode: http.StatusInternalServerError,
},
},
wantErr: "500",
},
{
name: "successful stream completion",
responses: []any{
@@ -101,6 +112,20 @@ func TestClientStream(t *testing.T) {
},
},
},
{
name: "plain text error response",
responses: []any{
"internal server error",
},
wantErr: "internal server error",
},
{
name: "HTML error page",
responses: []any{
"<html><body>404 Not Found</body></html>",
},
wantErr: "404 Not Found",
},
}
for _, tc := range testCases {
@@ -125,6 +150,12 @@ func TestClientStream(t *testing.T) {
return
}
if str, ok := resp.(string); ok {
fmt.Fprintln(w, str)
flusher.Flush()
continue
}
if err := json.NewEncoder(w).Encode(resp); err != nil {
t.Fatalf("failed to encode response: %v", err)
}
@@ -163,9 +194,10 @@ func TestClientStream(t *testing.T) {
func TestClientDo(t *testing.T) {
testCases := []struct {
name string
response any
wantErr string
name string
response any
wantErr string
wantStatusCode int
}{
{
name: "immediate error response",
@@ -173,7 +205,8 @@ func TestClientDo(t *testing.T) {
message: "test error message",
statusCode: http.StatusBadRequest,
},
wantErr: "test error message",
wantErr: "test error message",
wantStatusCode: http.StatusBadRequest,
},
{
name: "server error response",
@@ -181,7 +214,8 @@ func TestClientDo(t *testing.T) {
message: "internal error",
statusCode: http.StatusInternalServerError,
},
wantErr: "internal error",
wantErr: "internal error",
wantStatusCode: http.StatusInternalServerError,
},
{
name: "successful response",
@@ -193,6 +227,26 @@ func TestClientDo(t *testing.T) {
Success: true,
},
},
{
name: "plain text error response",
response: testError{
message: "internal server error",
statusCode: http.StatusInternalServerError,
raw: true,
},
wantErr: "internal server error",
wantStatusCode: http.StatusInternalServerError,
},
{
name: "HTML error page",
response: testError{
message: "<html><body>404 Not Found</body></html>",
statusCode: http.StatusNotFound,
raw: true,
},
wantErr: "<html><body>404 Not Found</body></html>",
wantStatusCode: http.StatusNotFound,
},
}
for _, tc := range testCases {
@@ -200,11 +254,16 @@ func TestClientDo(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if errResp, ok := tc.response.(testError); ok {
w.WriteHeader(errResp.statusCode)
err := json.NewEncoder(w).Encode(map[string]string{
"error": errResp.message,
})
if err != nil {
t.Fatal("failed to encode error response:", err)
if !errResp.raw {
err := json.NewEncoder(w).Encode(map[string]string{
"error": errResp.message,
})
if err != nil {
t.Fatal("failed to encode error response:", err)
}
} else {
// Write raw message (simulates non-JSON error responses)
fmt.Fprint(w, errResp.message)
}
return
}
@@ -231,6 +290,15 @@ func TestClientDo(t *testing.T) {
if err.Error() != tc.wantErr {
t.Errorf("error message mismatch: got %q, want %q", err.Error(), tc.wantErr)
}
if tc.wantStatusCode != 0 {
if statusErr, ok := err.(StatusError); ok {
if statusErr.StatusCode != tc.wantStatusCode {
t.Errorf("status code mismatch: got %d, want %d", statusErr.StatusCode, tc.wantStatusCode)
}
} else {
t.Errorf("expected StatusError, got %T", err)
}
}
return
}

View File

@@ -15,19 +15,19 @@ func main() {
}
messages := []api.Message{
api.Message{
{
Role: "system",
Content: "Provide very brief, concise responses",
},
api.Message{
{
Role: "user",
Content: "Name some unusual animals",
},
api.Message{
{
Role: "assistant",
Content: "Monotreme, platypus, echidna",
},
api.Message{
{
Role: "user",
Content: "which of these is the most dangerous?",
},

View File

@@ -11,6 +11,8 @@ import (
"strings"
"time"
"github.com/google/uuid"
"github.com/ollama/ollama/envconfig"
"github.com/ollama/ollama/types/model"
)
@@ -36,6 +38,19 @@ func (e StatusError) Error() string {
}
}
type AuthorizationError struct {
StatusCode int
Status string
SigninURL string `json:"signin_url"`
}
func (e AuthorizationError) Error() string {
if e.Status != "" {
return e.Status
}
return "something went wrong, please see the ollama server logs for details"
}
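A caller can branch on the new error type with `errors.As`; a minimal sketch, assuming `err` came from any client call:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/ollama/ollama/api"
)

func signinHint(err error) {
	var authErr api.AuthorizationError
	if errors.As(err, &authErr) {
		// SigninURL is filled from the server's "signin_url" field when present.
		fmt.Println("unauthorized; sign in at:", authErr.SigninURL)
	}
}
```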
// ImageData represents the raw binary data of an image file.
type ImageData []byte
@@ -85,10 +100,31 @@ type GenerateRequest struct {
Options map[string]any `json:"options"`
// Think controls whether thinking/reasoning models will think before
// responding. Needs to be a pointer so we can distinguish between false
// responding. Can be a boolean (true/false) or a string ("high", "medium", "low")
// for supported models. Needs to be a pointer so we can distinguish between false
// (request that thinking _not_ be used) and unset (use the old behavior
// before this option was introduced)
Think *bool `json:"think,omitempty"`
Think *ThinkValue `json:"think,omitempty"`
// Truncate is a boolean that, when set to true, truncates the chat history messages
// if the rendered prompt exceeds the context length limit.
Truncate *bool `json:"truncate,omitempty"`
// Shift is a boolean that, when set to true, shifts the chat history
// when hitting the context length limit instead of erroring.
Shift *bool `json:"shift,omitempty"`
// DebugRenderOnly is a debug option that, when set to true, returns the rendered
// template instead of calling the model.
DebugRenderOnly bool `json:"_debug_render_only,omitempty"`
// Logprobs specifies whether to return log probabilities of the output tokens.
Logprobs bool `json:"logprobs,omitempty"`
// TopLogprobs is the number of most likely tokens to return at each token position,
// each with an associated log probability. Only applies when Logprobs is true.
// Valid values are 0-20. Default is 0 (only return the selected token's logprob).
TopLogprobs int `json:"top_logprobs,omitempty"`
}
// ChatRequest describes a request sent by [Client.Chat].
@@ -116,8 +152,29 @@ type ChatRequest struct {
Options map[string]any `json:"options"`
// Think controls whether thinking/reasoning models will think before
// responding
Think *bool `json:"think,omitempty"`
// responding. Can be a boolean (true/false) or a string ("high", "medium", "low")
// for supported models.
Think *ThinkValue `json:"think,omitempty"`
// Truncate is a boolean that, when set to true, truncates the chat history messages
// if the rendered prompt exceeds the context length limit.
Truncate *bool `json:"truncate,omitempty"`
// Shift is a boolean that, when set to true, shifts the chat history
// when hitting the context length limit instead of erroring.
Shift *bool `json:"shift,omitempty"`
// DebugRenderOnly is a debug option that, when set to true, returns the rendered
// template instead of calling the model.
DebugRenderOnly bool `json:"_debug_render_only,omitempty"`
// Logprobs specifies whether to return log probabilities of the output tokens.
Logprobs bool `json:"logprobs,omitempty"`
// TopLogprobs is the number of most likely tokens to return at each token position,
// each with an associated log probability. Only applies when Logprobs is true.
// Valid values are 0-20. Default is 0 (only return the selected token's logprob).
TopLogprobs int `json:"top_logprobs,omitempty"`
}
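A minimal sketch of a request using the new knobs, assuming the untouched `Model`/`Messages` fields keep their usual shape (`ThinkValue` construction is not shown in this diff, so it is omitted here):

```go
package main

import "github.com/ollama/ollama/api"

func newChatRequest() *api.ChatRequest {
	shift := true
	return &api.ChatRequest{
		Model:    "llama3.2",
		Messages: []api.Message{{Role: "user", Content: "hello"}},
		// Slide the context window instead of erroring at the limit.
		Shift: &shift,
		// Return per-token log probabilities plus the 5 most likely
		// alternatives at each position (valid range 0-20).
		Logprobs:    true,
		TopLogprobs: 5,
	}
}
```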
type Tools []Tool
@@ -140,9 +197,11 @@ type Message struct {
Content string `json:"content"`
// Thinking contains the text that was inside thinking tags in the
// original model output when ChatRequest.Think is enabled.
Thinking string `json:"thinking,omitempty"`
Images []ImageData `json:"images,omitempty"`
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
Thinking string `json:"thinking,omitempty"`
Images []ImageData `json:"images,omitempty"`
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
ToolName string `json:"tool_name,omitempty"`
ToolCallID string `json:"tool_call_id,omitempty"`
}
func (m *Message) UnmarshalJSON(b []byte) error {
@@ -158,11 +217,12 @@ func (m *Message) UnmarshalJSON(b []byte) error {
}
type ToolCall struct {
ID string `json:"id,omitempty"`
Function ToolCallFunction `json:"function"`
}
type ToolCallFunction struct {
Index int `json:"index,omitempty"`
Index int `json:"index"`
Name string `json:"name"`
Arguments ToolCallFunctionArguments `json:"arguments"`
}
@@ -222,21 +282,76 @@ func (pt PropertyType) String() string {
return fmt.Sprintf("%v", []string(pt))
}
type ToolProperty struct {
AnyOf []ToolProperty `json:"anyOf,omitempty"`
Type PropertyType `json:"type,omitempty"`
Items any `json:"items,omitempty"`
Description string `json:"description,omitempty"`
Enum []any `json:"enum,omitempty"`
}
// ToTypeScriptType converts a ToolProperty to a TypeScript type string
func (tp ToolProperty) ToTypeScriptType() string {
if len(tp.AnyOf) > 0 {
var types []string
for _, anyOf := range tp.AnyOf {
types = append(types, anyOf.ToTypeScriptType())
}
return strings.Join(types, " | ")
}
if len(tp.Type) == 0 {
return "any"
}
if len(tp.Type) == 1 {
return mapToTypeScriptType(tp.Type[0])
}
var types []string
for _, t := range tp.Type {
types = append(types, mapToTypeScriptType(t))
}
return strings.Join(types, " | ")
}
// mapToTypeScriptType maps JSON Schema types to TypeScript types
func mapToTypeScriptType(jsonType string) string {
switch jsonType {
case "string":
return "string"
case "number", "integer":
return "number"
case "boolean":
return "boolean"
case "array":
return "any[]"
case "object":
return "Record<string, any>"
case "null":
return "null"
default:
return "any"
}
}
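A quick sketch of the conversion (it mirrors the tests added later in this diff):

```go
// Sketch: union JSON Schema types flatten to TypeScript unions.
p := ToolProperty{Type: PropertyType{"string", "null"}}
fmt.Println(p.ToTypeScriptType()) // string | null
```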
type ToolFunctionParameters struct {
Type string `json:"type"`
Defs any `json:"$defs,omitempty"`
Items any `json:"items,omitempty"`
Required []string `json:"required,omitempty"`
Properties map[string]ToolProperty `json:"properties"`
}
func (t *ToolFunctionParameters) String() string {
bts, _ := json.Marshal(t)
return string(bts)
}
type ToolFunction struct {
Name string `json:"name"`
Description string `json:"description"`
Parameters struct {
Type string `json:"type"`
Defs any `json:"$defs,omitempty"`
Items any `json:"items,omitempty"`
Required []string `json:"required"`
Properties map[string]struct {
Type PropertyType `json:"type"`
Items any `json:"items,omitempty"`
Description string `json:"description"`
Enum []any `json:"enum,omitempty"`
} `json:"properties"`
} `json:"parameters"`
Name string `json:"name"`
Description string `json:"description,omitempty"`
Parameters ToolFunctionParameters `json:"parameters"`
}
func (t *ToolFunction) String() string {
@@ -244,19 +359,66 @@ func (t *ToolFunction) String() string {
return string(bts)
}
// TokenLogprob represents log probability information for a single token alternative.
type TokenLogprob struct {
// Token is the text representation of the token.
Token string `json:"token"`
// Logprob is the log probability of this token.
Logprob float64 `json:"logprob"`
// Bytes contains the raw byte representation of the token
Bytes []int `json:"bytes,omitempty"`
}
// Logprob contains log probability information for a generated token.
type Logprob struct {
TokenLogprob
// TopLogprobs contains the most likely tokens and their log probabilities
// at this position, if requested via TopLogprobs parameter.
TopLogprobs []TokenLogprob `json:"top_logprobs,omitempty"`
}
// ChatResponse is the response returned by [Client.Chat]. Its fields are
// similar to [GenerateResponse].
type ChatResponse struct {
Model string `json:"model"`
CreatedAt time.Time `json:"created_at"`
Message Message `json:"message"`
DoneReason string `json:"done_reason,omitempty"`
// Model is the model name that generated the response.
Model string `json:"model"`
// RemoteModel is the name of the upstream model that generated the response.
RemoteModel string `json:"remote_model,omitempty"`
// RemoteHost is the URL of the upstream Ollama host that generated the response.
RemoteHost string `json:"remote_host,omitempty"`
// CreatedAt is the timestamp of the response.
CreatedAt time.Time `json:"created_at"`
// Message contains the message or part of a message from the model.
Message Message `json:"message"`
// Done specifies if the response is complete.
Done bool `json:"done"`
// DoneReason is the reason the model stopped generating text.
DoneReason string `json:"done_reason,omitempty"`
DebugInfo *DebugInfo `json:"_debug_info,omitempty"`
// Logprobs contains log probability information for the generated tokens,
// if requested via the Logprobs parameter.
Logprobs []Logprob `json:"logprobs,omitempty"`
Metrics
}
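Consuming the new logprobs field might look like this (a sketch, assuming `resp` is a ChatResponse from a non-streaming call):

```go
// Sketch: print each generated token's logprob and any requested alternatives.
for _, lp := range resp.Logprobs {
	fmt.Printf("%q\t%.4f\n", lp.Token, lp.Logprob)
	for _, alt := range lp.TopLogprobs {
		fmt.Printf("  alt %q\t%.4f\n", alt.Token, alt.Logprob)
	}
}
```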
// DebugInfo contains debug information for template rendering
type DebugInfo struct {
RenderedTemplate string `json:"rendered_template"`
ImageCount int `json:"image_count,omitempty"`
}
type Metrics struct {
TotalDuration time.Duration `json:"total_duration,omitempty"`
LoadDuration time.Duration `json:"load_duration,omitempty"`
@@ -309,8 +471,12 @@ type EmbedRequest struct {
// this request.
KeepAlive *Duration `json:"keep_alive,omitempty"`
// Truncate truncates the input to fit the model's max sequence length.
Truncate *bool `json:"truncate,omitempty"`
// Dimensions truncates the output embedding to the specified dimension.
Dimensions int `json:"dimensions,omitempty"`
// Options lists model-specific options.
Options map[string]any `json:"options"`
}
@@ -348,18 +514,47 @@ type EmbeddingResponse struct {
// CreateRequest is the request passed to [Client.Create].
type CreateRequest struct {
Model string `json:"model"`
Stream *bool `json:"stream,omitempty"`
// Model is the model name to create.
Model string `json:"model"`
// Stream specifies whether the response is streaming; it is true by default.
Stream *bool `json:"stream,omitempty"`
// Quantize is the quantization format for the model; leave blank to not change the quantization level.
Quantize string `json:"quantize,omitempty"`
From string `json:"from,omitempty"`
Files map[string]string `json:"files,omitempty"`
Adapters map[string]string `json:"adapters,omitempty"`
Template string `json:"template,omitempty"`
License any `json:"license,omitempty"`
System string `json:"system,omitempty"`
Parameters map[string]any `json:"parameters,omitempty"`
Messages []Message `json:"messages,omitempty"`
// From is the name of the model or file to use as the source.
From string `json:"from,omitempty"`
// RemoteHost is the URL of the upstream ollama API for the model (if any).
RemoteHost string `json:"remote_host,omitempty"`
// Files is a map of files to include when creating the model.
Files map[string]string `json:"files,omitempty"`
// Adapters is a map of LoRA adapters to include when creating the model.
Adapters map[string]string `json:"adapters,omitempty"`
// Template is the template used when constructing a request to the model.
Template string `json:"template,omitempty"`
// License is a string or list of strings for licenses.
License any `json:"license,omitempty"`
// System is the system prompt for the model.
System string `json:"system,omitempty"`
// Parameters is a map of hyper-parameters which are applied to the model.
Parameters map[string]any `json:"parameters,omitempty"`
// Messages is a list of messages added to the model before chat and generation requests.
Messages []Message `json:"messages,omitempty"`
Renderer string `json:"renderer,omitempty"`
Parser string `json:"parser,omitempty"`
// Info is a map of additional information for the model
Info map[string]any `json:"info,omitempty"`
// Deprecated: set the model name with Model instead
Name string `json:"name"`
@@ -397,8 +592,12 @@ type ShowResponse struct {
Parameters string `json:"parameters,omitempty"`
Template string `json:"template,omitempty"`
System string `json:"system,omitempty"`
Renderer string `json:"renderer,omitempty"`
Parser string `json:"parser,omitempty"`
Details ModelDetails `json:"details,omitempty"`
Messages []Message `json:"messages,omitempty"`
RemoteModel string `json:"remote_model,omitempty"`
RemoteHost string `json:"remote_host,omitempty"`
ModelInfo map[string]any `json:"model_info,omitempty"`
ProjectorInfo map[string]any `json:"projector_info,omitempty"`
Tensors []Tensor `json:"tensors,omitempty"`
@@ -457,23 +656,26 @@ type ProcessResponse struct {
// ListModelResponse is a single model description in [ListResponse].
type ListModelResponse struct {
Name string `json:"name"`
Model string `json:"model"`
ModifiedAt time.Time `json:"modified_at"`
Size int64 `json:"size"`
Digest string `json:"digest"`
Details ModelDetails `json:"details,omitempty"`
Name string `json:"name"`
Model string `json:"model"`
RemoteModel string `json:"remote_model,omitempty"`
RemoteHost string `json:"remote_host,omitempty"`
ModifiedAt time.Time `json:"modified_at"`
Size int64 `json:"size"`
Digest string `json:"digest"`
Details ModelDetails `json:"details,omitempty"`
}
// ProcessModelResponse is a single model description in [ProcessResponse].
type ProcessModelResponse struct {
Name string `json:"name"`
Model string `json:"model"`
Size int64 `json:"size"`
Digest string `json:"digest"`
Details ModelDetails `json:"details,omitempty"`
ExpiresAt time.Time `json:"expires_at"`
SizeVRAM int64 `json:"size_vram"`
Name string `json:"name"`
Model string `json:"model"`
Size int64 `json:"size"`
Digest string `json:"digest"`
Details ModelDetails `json:"details,omitempty"`
ExpiresAt time.Time `json:"expires_at"`
SizeVRAM int64 `json:"size_vram"`
ContextLength int `json:"context_length"`
}
type TokenResponse struct {
@@ -485,6 +687,12 @@ type GenerateResponse struct {
// Model is the model name that generated the response.
Model string `json:"model"`
// RemoteModel is the name of the upstream model that generated the response.
RemoteModel string `json:"remote_model,omitempty"`
// RemoteHost is the URL of the upstream Ollama host that generated the response.
RemoteHost string `json:"remote_host,omitempty"`
// CreatedAt is the timestamp of the response.
CreatedAt time.Time `json:"created_at"`
@@ -506,6 +714,14 @@ type GenerateResponse struct {
Context []int `json:"context,omitempty"`
Metrics
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
DebugInfo *DebugInfo `json:"_debug_info,omitempty"`
// Logprobs contains log probability information for the generated tokens,
// if requested via the Logprobs parameter.
Logprobs []Logprob `json:"logprobs,omitempty"`
}
// ModelDetails provides details about a model.
@@ -518,6 +734,18 @@ type ModelDetails struct {
QuantizationLevel string `json:"quantization_level"`
}
// UserResponse provides information about a user.
type UserResponse struct {
ID uuid.UUID `json:"id"`
Email string `json:"email"`
Name string `json:"name"`
Bio string `json:"bio,omitempty"`
AvatarURL string `json:"avatarurl,omitempty"`
FirstName string `json:"firstname,omitempty"`
LastName string `json:"lastname,omitempty"`
Plan string `json:"plan,omitempty"`
}
// Tensor describes the metadata for a given tensor.
type Tensor struct {
Name string `json:"name"`
@@ -675,6 +903,113 @@ func DefaultOptions() Options {
}
}
// ThinkValue represents a value that can be a boolean or a string ("high", "medium", "low")
type ThinkValue struct {
// Value can be a bool or string
Value interface{}
}
// IsValid checks if the ThinkValue is valid
func (t *ThinkValue) IsValid() bool {
if t == nil || t.Value == nil {
return true // nil is valid (means not set)
}
switch v := t.Value.(type) {
case bool:
return true
case string:
return v == "high" || v == "medium" || v == "low"
default:
return false
}
}
// IsBool returns true if the value is a boolean
func (t *ThinkValue) IsBool() bool {
if t == nil || t.Value == nil {
return false
}
_, ok := t.Value.(bool)
return ok
}
// IsString returns true if the value is a string
func (t *ThinkValue) IsString() bool {
if t == nil || t.Value == nil {
return false
}
_, ok := t.Value.(string)
return ok
}
// Bool returns the value as a bool (true if enabled in any way)
func (t *ThinkValue) Bool() bool {
if t == nil || t.Value == nil {
return false
}
switch v := t.Value.(type) {
case bool:
return v
case string:
// Any string value ("high", "medium", "low") means thinking is enabled
return v == "high" || v == "medium" || v == "low"
default:
return false
}
}
// String returns the value as a string
func (t *ThinkValue) String() string {
if t == nil || t.Value == nil {
return ""
}
switch v := t.Value.(type) {
case string:
return v
case bool:
if v {
return "medium" // Default level when just true
}
return ""
default:
return ""
}
}
// UnmarshalJSON implements json.Unmarshaler
func (t *ThinkValue) UnmarshalJSON(data []byte) error {
// Try to unmarshal as bool first
var b bool
if err := json.Unmarshal(data, &b); err == nil {
t.Value = b
return nil
}
// Try to unmarshal as string
var s string
if err := json.Unmarshal(data, &s); err == nil {
// Validate string values
if s != "high" && s != "medium" && s != "low" {
return fmt.Errorf("invalid think value: %q (must be \"high\", \"medium\", \"low\", true, or false)", s)
}
t.Value = s
return nil
}
return fmt.Errorf("think must be a boolean or string (\"high\", \"medium\", \"low\", true, or false)")
}
// MarshalJSON implements json.Marshaler
func (t *ThinkValue) MarshalJSON() ([]byte, error) {
if t == nil || t.Value == nil {
return []byte("null"), nil
}
return json.Marshal(t.Value)
}
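A round-trip sketch of the accepted values, per the unmarshaler above:

```go
// Sketch: ThinkValue accepts booleans and the three effort strings.
var req ChatRequest
_ = json.Unmarshal([]byte(`{"think": "high"}`), &req) // req.Think.Value == "high"
_ = json.Unmarshal([]byte(`{"think": true}`), &req)   // req.Think.Value == true

err := json.Unmarshal([]byte(`{"think": "maximum"}`), &req)
// err != nil: only "high", "medium", "low", true, or false are accepted
```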
type Duration struct {
time.Duration
}
@@ -699,7 +1034,7 @@ func (d *Duration) UnmarshalJSON(b []byte) (err error) {
if t < 0 {
d.Duration = time.Duration(math.MaxInt64)
} else {
d.Duration = time.Duration(int(t) * int(time.Second))
d.Duration = time.Duration(t * float64(time.Second))
}
case string:
d.Duration, err = time.ParseDuration(t)
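The float multiplication preserves fractional seconds that the old int conversion truncated (the updated test below pins 42.5 to 42500ms):

```go
// Old: time.Duration(int(42.5) * int(time.Second)) // 42s, fraction lost
// New: time.Duration(42.5 * float64(time.Second))  // 42.5s
```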


@@ -17,6 +17,11 @@ func TestKeepAliveParsingFromJSON(t *testing.T) {
req string
exp *Duration
}{
{
name: "Unset",
req: `{ }`,
exp: nil,
},
{
name: "Positive Integer",
req: `{ "keep_alive": 42 }`,
@@ -25,7 +30,7 @@ func TestKeepAliveParsingFromJSON(t *testing.T) {
{
name: "Positive Float",
req: `{ "keep_alive": 42.5 }`,
exp: &Duration{42 * time.Second},
exp: &Duration{42500 * time.Millisecond},
},
{
name: "Positive Integer String",
@@ -293,6 +298,68 @@ func TestToolFunction_UnmarshalJSON(t *testing.T) {
}
}
func TestToolFunctionParameters_MarshalJSON(t *testing.T) {
tests := []struct {
name string
input ToolFunctionParameters
expected string
}{
{
name: "simple object with string property",
input: ToolFunctionParameters{
Type: "object",
Required: []string{"name"},
Properties: map[string]ToolProperty{
"name": {Type: PropertyType{"string"}},
},
},
expected: `{"type":"object","required":["name"],"properties":{"name":{"type":"string"}}}`,
},
{
name: "no required",
input: ToolFunctionParameters{
Type: "object",
Properties: map[string]ToolProperty{
"name": {Type: PropertyType{"string"}},
},
},
expected: `{"type":"object","properties":{"name":{"type":"string"}}}`,
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
data, err := json.Marshal(test.input)
require.NoError(t, err)
assert.Equal(t, test.expected, string(data))
})
}
}
func TestToolCallFunction_IndexAlwaysMarshals(t *testing.T) {
fn := ToolCallFunction{
Name: "echo",
Arguments: ToolCallFunctionArguments{"message": "hi"},
}
data, err := json.Marshal(fn)
require.NoError(t, err)
raw := map[string]any{}
require.NoError(t, json.Unmarshal(data, &raw))
require.Contains(t, raw, "index")
assert.Equal(t, float64(0), raw["index"])
fn.Index = 3
data, err = json.Marshal(fn)
require.NoError(t, err)
raw = map[string]any{}
require.NoError(t, json.Unmarshal(data, &raw))
require.Contains(t, raw, "index")
assert.Equal(t, float64(3), raw["index"])
}
func TestPropertyType_UnmarshalJSON(t *testing.T) {
tests := []struct {
name string
@@ -374,24 +441,21 @@ func TestPropertyType_MarshalJSON(t *testing.T) {
}
func TestThinking_UnmarshalJSON(t *testing.T) {
trueVal := true
falseVal := false
tests := []struct {
name string
input string
expectedThinking *bool
expectedThinking *ThinkValue
expectedError bool
}{
{
name: "true",
input: `{ "think": true }`,
expectedThinking: &trueVal,
expectedThinking: &ThinkValue{Value: true},
},
{
name: "false",
input: `{ "think": false }`,
expectedThinking: &falseVal,
expectedThinking: &ThinkValue{Value: false},
},
{
name: "unset",
@@ -399,8 +463,23 @@ func TestThinking_UnmarshalJSON(t *testing.T) {
expectedThinking: nil,
},
{
name: "invalid",
input: `{ "think": "true" }`,
name: "string_high",
input: `{ "think": "high" }`,
expectedThinking: &ThinkValue{Value: "high"},
},
{
name: "string_medium",
input: `{ "think": "medium" }`,
expectedThinking: &ThinkValue{Value: "medium"},
},
{
name: "string_low",
input: `{ "think": "low" }`,
expectedThinking: &ThinkValue{Value: "low"},
},
{
name: "invalid_string",
input: `{ "think": "invalid" }`,
expectedThinking: nil,
expectedError: true,
},
@@ -414,8 +493,60 @@ func TestThinking_UnmarshalJSON(t *testing.T) {
require.Error(t, err)
} else {
require.NoError(t, err)
assert.Equal(t, test.expectedThinking, req.Think)
if test.expectedThinking == nil {
assert.Nil(t, req.Think)
} else {
require.NotNil(t, req.Think)
assert.Equal(t, test.expectedThinking.Value, req.Think.Value)
}
}
})
}
}
func TestToolFunctionParameters_String(t *testing.T) {
tests := []struct {
name string
params ToolFunctionParameters
expected string
}{
{
name: "simple object with string property",
params: ToolFunctionParameters{
Type: "object",
Required: []string{"name"},
Properties: map[string]ToolProperty{
"name": {
Type: PropertyType{"string"},
Description: "The name of the person",
},
},
},
expected: `{"type":"object","required":["name"],"properties":{"name":{"type":"string","description":"The name of the person"}}}`,
},
{
name: "marshal failure returns empty string",
params: ToolFunctionParameters{
Type: "object",
Defs: func() any {
// Create a cycle that will cause json.Marshal to fail
type selfRef struct {
Self *selfRef
}
s := &selfRef{}
s.Self = s
return s
}(),
Properties: map[string]ToolProperty{},
},
expected: "",
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
result := test.params.String()
assert.Equal(t, test.expected, result)
})
}
}


@@ -0,0 +1,142 @@
package api
import (
"testing"
)
func TestToolParameterToTypeScriptType(t *testing.T) {
tests := []struct {
name string
param ToolProperty
expected string
}{
{
name: "single string type",
param: ToolProperty{
Type: PropertyType{"string"},
},
expected: "string",
},
{
name: "single number type",
param: ToolProperty{
Type: PropertyType{"number"},
},
expected: "number",
},
{
name: "integer maps to number",
param: ToolProperty{
Type: PropertyType{"integer"},
},
expected: "number",
},
{
name: "boolean type",
param: ToolProperty{
Type: PropertyType{"boolean"},
},
expected: "boolean",
},
{
name: "array type",
param: ToolProperty{
Type: PropertyType{"array"},
},
expected: "any[]",
},
{
name: "object type",
param: ToolProperty{
Type: PropertyType{"object"},
},
expected: "Record<string, any>",
},
{
name: "null type",
param: ToolProperty{
Type: PropertyType{"null"},
},
expected: "null",
},
{
name: "multiple types as union",
param: ToolProperty{
Type: PropertyType{"string", "number"},
},
expected: "string | number",
},
{
name: "string or null union",
param: ToolProperty{
Type: PropertyType{"string", "null"},
},
expected: "string | null",
},
{
name: "anyOf with single types",
param: ToolProperty{
AnyOf: []ToolProperty{
{Type: PropertyType{"string"}},
{Type: PropertyType{"number"}},
},
},
expected: "string | number",
},
{
name: "anyOf with multiple types in each branch",
param: ToolProperty{
AnyOf: []ToolProperty{
{Type: PropertyType{"string", "null"}},
{Type: PropertyType{"number"}},
},
},
expected: "string | null | number",
},
{
name: "nested anyOf",
param: ToolProperty{
AnyOf: []ToolProperty{
{Type: PropertyType{"boolean"}},
{
AnyOf: []ToolProperty{
{Type: PropertyType{"string"}},
{Type: PropertyType{"number"}},
},
},
},
},
expected: "boolean | string | number",
},
{
name: "empty type returns any",
param: ToolProperty{
Type: PropertyType{},
},
expected: "any",
},
{
name: "unknown type maps to any",
param: ToolProperty{
Type: PropertyType{"unknown_type"},
},
expected: "any",
},
{
name: "multiple types including array",
param: ToolProperty{
Type: PropertyType{"string", "array", "null"},
},
expected: "string | any[] | null",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := tt.param.ToTypeScriptType()
if result != tt.expected {
t.Errorf("ToTypeScriptType() = %q, want %q", result, tt.expected)
}
})
}
}

app/.gitignore vendored

@@ -1 +1,11 @@
ollama.syso
*.crt
*.exe
/app/app
/app/squirrel
ollama
*cover*
.vscode
.env
.DS_Store
.claude


@@ -1,22 +1,97 @@
# Ollama App
# Ollama for macOS and Windows
## Linux
## Download
TODO
- [macOS](https://github.com/ollama/app/releases/download/latest/Ollama.dmg)
- [Windows](https://github.com/ollama/app/releases/download/latest/OllamaSetup.exe)
## MacOS
## Development
TODO
### Desktop App
## Windows
```bash
go generate ./... &&
go run ./cmd/app
```
### UI Development
#### Setup
Install required tools:
```bash
go install github.com/tkrajina/typescriptify-golang-structs/tscriptify@latest
```
#### Develop UI (Development Mode)
1. Start the React development server (with hot-reload):
```bash
cd ui/app
npm install
npm run dev
```
2. In a separate terminal, run the Ollama app with the `-dev` flag:
```bash
go generate ./... &&
OLLAMA_DEBUG=1 go run ./cmd/app -dev
```
The `-dev` flag enables:
- Loading the UI from the Vite dev server at http://localhost:5173
- Fixed UI server port at http://127.0.0.1:3001 for API requests (see the example after this list)
- CORS headers for cross-origin requests
- Hot-reload support for UI development
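With the app running in `-dev` mode you can hit the fixed UI server port directly; for example (a sketch; `/api/v1/me` is the login-state endpoint the app itself polls elsewhere in this diff):

```bash
curl http://127.0.0.1:3001/api/v1/me
```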
## Build
### Windows
If you want to build the installer, you'll need to install
- https://jrsoftware.org/isinfo.php
In the top directory of this repo, run the following PowerShell script
to build the ollama CLI, ollama app, and ollama installer.
**Dependencies** - either build a local copy of Ollama, or use a GitHub release:
```powershell
powershell -ExecutionPolicy Bypass -File .\scripts\build_windows.ps1
# Local dependencies
.\scripts\deps_local.ps1
# Release dependencies
.\scripts\deps_release.ps1 0.6.8
```
**Build**
```powershell
.\scripts\build_windows.ps1
```
### macOS
CI builds with Xcode 14.1 for compatibility with macOS releases prior to 13. If you want to build v11+ support manually, download the older Xcode [here](https://developer.apple.com/services-account/download?path=/Developer_Tools/Xcode_14.1/Xcode_14.1.xip), extract it, move it with `mv ./Xcode.app /Applications/Xcode_14.1.0.app`, then activate it with:
```
export CGO_CFLAGS=-mmacosx-version-min=12.0
export CGO_CXXFLAGS=-mmacosx-version-min=12.0
export CGO_LDFLAGS=-mmacosx-version-min=12.0
export SDKROOT=/Applications/Xcode_14.1.0.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk
export DEVELOPER_DIR=/Applications/Xcode_14.1.0.app/Contents/Developer
```
**Dependencies** - either build a local copy of Ollama, or use a GitHub release:
```sh
# Local dependencies
./scripts/deps_local.sh
# Release dependencies
./scripts/deps_release.sh 0.6.8
```
**Build**
```sh
./scripts/build_darwin.sh
```


@@ -1,3 +1,5 @@
//go:build windows || darwin
package assets
import (

BIN app/assets/background.png Normal file (new image, 15 KiB; binary content not shown)

Binary file changed (89 KiB → 115 KiB; content not shown)
Binary file changed (91 KiB → 116 KiB; content not shown)

app/auth/connect.go Normal file

@@ -0,0 +1,26 @@
//go:build windows || darwin
package auth
import (
"encoding/base64"
"fmt"
"net/url"
"os"
"github.com/ollama/ollama/auth"
)
// BuildConnectURL generates the connect URL with the public key and device name
func BuildConnectURL(baseURL string) (string, error) {
pubKey, err := auth.GetPublicKey()
if err != nil {
return "", fmt.Errorf("failed to get public key: %w", err)
}
encodedKey := base64.RawURLEncoding.EncodeToString([]byte(pubKey))
hostname, _ := os.Hostname()
encodedDevice := url.QueryEscape(hostname)
return fmt.Sprintf("%s/connect?name=%s&key=%s&launch=true", baseURL, encodedDevice, encodedKey), nil
}
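A minimal usage sketch (hypothetical caller, not part of this diff):

```go
// Sketch: build and print the connect URL for the default site.
u, err := BuildConnectURL("https://ollama.com")
if err != nil {
	log.Fatal(err)
}
fmt.Println(u) // https://ollama.com/connect?name=<hostname>&key=<base64url key>&launch=true
```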


@@ -0,0 +1,7 @@
#import <Cocoa/Cocoa.h>
@interface AppDelegate : NSObject <NSApplicationDelegate>
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification;
@end

app/cmd/app/app.go Normal file

@@ -0,0 +1,473 @@
//go:build windows || darwin
package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"log/slog"
"net"
"net/http"
"net/url"
"os"
"os/exec"
"os/signal"
"path/filepath"
"runtime"
"strings"
"syscall"
"time"
"github.com/google/uuid"
"github.com/ollama/ollama/app/auth"
"github.com/ollama/ollama/app/logrotate"
"github.com/ollama/ollama/app/server"
"github.com/ollama/ollama/app/store"
"github.com/ollama/ollama/app/tools"
"github.com/ollama/ollama/app/ui"
"github.com/ollama/ollama/app/updater"
"github.com/ollama/ollama/app/version"
)
var (
wv = &Webview{}
uiServerPort int
)
var debug = strings.EqualFold(os.Getenv("OLLAMA_DEBUG"), "true") || os.Getenv("OLLAMA_DEBUG") == "1"
var (
fastStartup = false
devMode = false
)
type appMove int
const (
CannotMove appMove = iota
UserDeclinedMove
MoveCompleted
AlreadyMoved
LoginSession
PermissionDenied
MoveError
)
func main() {
startHidden := false
var urlSchemeRequest string
if len(os.Args) > 1 {
for _, arg := range os.Args {
// Handle URL scheme requests (Windows)
if strings.HasPrefix(arg, "ollama://") {
urlSchemeRequest = arg
slog.Info("received URL scheme request", "url", arg)
continue
}
switch arg {
case "serve":
fmt.Fprintln(os.Stderr, "serve command not supported, use ollama")
os.Exit(1)
case "version", "-v", "--version":
fmt.Println(version.Version)
os.Exit(0)
case "background":
// When running the process in this "background" mode, we spawn a
// child process for the main app. This is necessary so the
// "Allow in the Background" setting in MacOS can be unchecked
// without breaking the main app. Two copies of the app are
// present in the bundle, one for the main app and one for the
// background initiator.
fmt.Fprintln(os.Stdout, "starting in background")
runInBackground()
os.Exit(0)
case "hidden", "-j", "--hide":
// startHidden suppresses the UI on startup, and can be triggered multiple ways
// On windows, path based via login startup detection
// On MacOS via [NSApp isHidden] from `open -j -a /Applications/Ollama.app` or equivalent
// On both via the "hidden" command line argument
startHidden = true
case "--fast-startup":
// Skip optional steps like pending updates to start quickly for immediate use
fastStartup = true
case "-dev", "--dev":
// Development mode: use local dev server and enable CORS
devMode = true
}
}
}
level := slog.LevelInfo
if debug {
level = slog.LevelDebug
}
logrotate.Rotate(appLogPath)
if _, err := os.Stat(filepath.Dir(appLogPath)); errors.Is(err, os.ErrNotExist) {
if err := os.MkdirAll(filepath.Dir(appLogPath), 0o755); err != nil {
slog.Error(fmt.Sprintf("failed to create server log dir %v", err))
return
}
}
var logFile io.Writer
var err error
logFile, err = os.OpenFile(appLogPath, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0o755)
if err != nil {
slog.Error(fmt.Sprintf("failed to create server log %v", err))
return
}
// Detect if we're a GUI app on windows, and if not, send logs to console as well
if os.Stderr.Fd() != 0 {
// Console app detected
logFile = io.MultiWriter(os.Stderr, logFile)
}
handler := slog.NewTextHandler(logFile, &slog.HandlerOptions{
Level: level,
AddSource: true,
ReplaceAttr: func(_ []string, attr slog.Attr) slog.Attr {
if attr.Key == slog.SourceKey {
source := attr.Value.Any().(*slog.Source)
source.File = filepath.Base(source.File)
}
return attr
},
})
slog.SetDefault(slog.New(handler))
logStartup()
// On Windows, check if another instance is running and send URL to it
// Do this after logging is set up so we can debug issues
if runtime.GOOS == "windows" && urlSchemeRequest != "" {
slog.Debug("checking for existing instance", "url", urlSchemeRequest)
if checkAndHandleExistingInstance(urlSchemeRequest) {
// The function will exit if it successfully sends to another instance
// If we reach here, we're the first/only instance
} else {
// No existing instance found, handle the URL scheme in this instance
go func() {
handleURLSchemeInCurrentInstance(urlSchemeRequest)
}()
}
}
if u := os.Getenv("OLLAMA_UPDATE_URL"); u != "" {
updater.UpdateCheckURLBase = u
}
// Detect if this is a first start after an upgrade, in
// which case we need to do some cleanup
var skipMove bool
if _, err := os.Stat(updater.UpgradeMarkerFile); err == nil {
slog.Debug("first start after upgrade")
err = updater.DoPostUpgradeCleanup()
if err != nil {
slog.Error("failed to cleanup prior version", "error", err)
}
// We never prompt to move the app after an upgrade
skipMove = true
// Start hidden after updates to prevent UI from opening automatically
startHidden = true
}
if !skipMove && !fastStartup {
if maybeMoveAndRestart() == MoveCompleted {
return
}
}
// Check if another instance is already running
// On Windows, focus the existing instance; on other platforms, kill it
handleExistingInstance(startHidden)
// on macOS, offer the user to create a symlink
// from /usr/local/bin/ollama to the app bundle
installSymlink()
var ln net.Listener
if devMode {
// Use a fixed port in dev mode for predictable API access
ln, err = net.Listen("tcp", "127.0.0.1:3001")
} else {
ln, err = net.Listen("tcp", "127.0.0.1:0")
}
if err != nil {
slog.Error("failed to find available port", "error", err)
return
}
port := ln.Addr().(*net.TCPAddr).Port
token := uuid.NewString()
wv.port = port
wv.token = token
uiServerPort = port
st := &store.Store{}
// Enable CORS in development mode
if devMode {
os.Setenv("OLLAMA_CORS", "1")
// Check if Vite dev server is running on port 5173
var conn net.Conn
var err error
for _, addr := range []string{"127.0.0.1:5173", "localhost:5173"} {
conn, err = net.DialTimeout("tcp", addr, 2*time.Second)
if err == nil {
conn.Close()
break
}
}
if err != nil {
slog.Error("Vite dev server not running on port 5173")
fmt.Fprintln(os.Stderr, "Error: Vite dev server is not running on port 5173")
fmt.Fprintln(os.Stderr, "Please run 'npm run dev' in the ui/app directory to start the UI in development mode")
os.Exit(1)
}
}
// Initialize tools registry
toolRegistry := tools.NewRegistry()
slog.Info("initialized tools registry", "tool_count", len(toolRegistry.List()))
// ctx is the app-level context that will be used to stop the app
ctx, cancel := context.WithCancel(context.Background())
// octx is the ollama server context that will be used to stop the ollama server
octx, ocancel := context.WithCancel(ctx)
// TODO (jmorganca): instead we should instantiate the
// webview with the store instead of assigning it here, however
// making the webview a global variable is easier for now
wv.Store = st
done := make(chan error, 1)
osrv := server.New(st, devMode)
go func() {
slog.Info("starting ollama server")
done <- osrv.Run(octx)
}()
uiServer := ui.Server{
Token: token,
Restart: func() {
ocancel()
<-done
octx, ocancel = context.WithCancel(ctx)
go func() {
done <- osrv.Run(octx)
}()
},
Store: st,
ToolRegistry: toolRegistry,
Dev: devMode,
Logger: slog.Default(),
}
srv := &http.Server{
Handler: uiServer.Handler(),
}
if _, err := uiServer.UserData(ctx); err != nil {
slog.Warn("failed to load user data", "error", err)
}
// Start the UI server
slog.Info("starting ui server", "port", port)
go func() {
slog.Debug("starting ui server on port", "port", port)
err = srv.Serve(ln)
if err != nil && !errors.Is(err, http.ErrServerClosed) {
slog.Warn("desktop server", "error", err)
}
slog.Debug("background desktop server done")
}()
updater := &updater.Updater{Store: st}
updater.StartBackgroundUpdaterChecker(ctx, UpdateAvailable)
hasCompletedFirstRun, err := st.HasCompletedFirstRun()
if err != nil {
slog.Error("failed to load has completed first run", "error", err)
}
if !hasCompletedFirstRun {
err = st.SetHasCompletedFirstRun(true)
if err != nil {
slog.Error("failed to set has completed first run", "error", err)
}
}
// capture SIGINT and SIGTERM signals and gracefully shutdown the app
signals := make(chan os.Signal, 1)
signal.Notify(signals, syscall.SIGINT, syscall.SIGTERM)
go func() {
<-signals
slog.Info("received SIGINT or SIGTERM signal, shutting down")
quit()
}()
if urlSchemeRequest != "" {
go func() {
handleURLSchemeInCurrentInstance(urlSchemeRequest)
}()
} else {
slog.Debug("no URL scheme request to handle")
}
osRun(cancel, hasCompletedFirstRun, startHidden)
slog.Info("shutting down desktop server")
if err := srv.Close(); err != nil {
slog.Warn("error shutting down desktop server", "error", err)
}
slog.Info("shutting down ollama server")
cancel()
<-done
}
func startHiddenTasks() {
// If an upgrade is ready and we're in hidden mode, perform it at startup.
// If we're not in hidden mode, we want to start as fast as possible and not
// slow the user down with an upgrade.
if updater.IsUpdatePending() {
if fastStartup {
// CLI triggered app startup use-case
slog.Info("deferring pending update for fast startup")
} else {
if err := updater.DoUpgradeAtStartup(); err != nil {
slog.Info("unable to perform upgrade at startup", "error", err)
// Make sure the restart to upgrade menu shows so we can attempt an interactive upgrade to get authorization
UpdateAvailable("")
} else {
slog.Debug("launching new version...")
// TODO - consider a timer that aborts if this takes too long and we haven't been killed yet...
LaunchNewApp()
os.Exit(0)
}
}
}
}
func checkUserLoggedIn(uiServerPort int) bool {
if uiServerPort == 0 {
slog.Debug("UI server not ready yet, skipping auth check")
return false
}
resp, err := http.Get(fmt.Sprintf("http://127.0.0.1:%d/api/v1/me", uiServerPort))
if err != nil {
slog.Debug("failed to call local auth endpoint", "error", err)
return false
}
defer resp.Body.Close()
// Check if the response is successful
if resp.StatusCode != http.StatusOK {
slog.Debug("auth endpoint returned non-OK status", "status", resp.StatusCode)
return false
}
var user struct {
ID string `json:"id"`
Name string `json:"name"`
}
if err := json.NewDecoder(resp.Body).Decode(&user); err != nil {
slog.Debug("failed to parse user response", "error", err)
return false
}
// Verify we have a valid user with an ID and name
if user.ID == "" || user.Name == "" {
slog.Debug("user response missing required fields", "id", user.ID, "name", user.Name)
return false
}
slog.Debug("user is logged in", "user_id", user.ID, "user_name", user.Name)
return true
}
// handleConnectURLScheme fetches the connect URL and opens it in the browser
func handleConnectURLScheme() {
if checkUserLoggedIn(uiServerPort) {
slog.Info("user is already logged in, opening app instead")
showWindow(wv.webview.Window())
return
}
connectURL, err := auth.BuildConnectURL("https://ollama.com")
if err != nil {
slog.Error("failed to build connect URL", "error", err)
openInBrowser("https://ollama.com/connect")
return
}
openInBrowser(connectURL)
}
// openInBrowser opens the specified URL in the default browser
func openInBrowser(url string) {
var cmd string
var args []string
switch runtime.GOOS {
case "windows":
cmd = "rundll32"
args = []string{"url.dll,FileProtocolHandler", url}
case "darwin":
cmd = "open"
args = []string{url}
default: // "linux", "freebsd", "openbsd", "netbsd"... should not reach here
slog.Warn("unsupported OS for openInBrowser", "os", runtime.GOOS)
}
slog.Info("executing browser command", "cmd", cmd, "args", args)
if err := exec.Command(cmd, args...).Start(); err != nil {
slog.Error("failed to open URL in browser", "url", url, "cmd", cmd, "args", args, "error", err)
}
}
// parseURLScheme parses an ollama:// URL and validates it
// Supports: ollama:// (open app) and ollama://connect (OAuth)
func parseURLScheme(urlSchemeRequest string) (isConnect bool, err error) {
parsedURL, err := url.Parse(urlSchemeRequest)
if err != nil {
return false, fmt.Errorf("invalid URL: %w", err)
}
// Check if this is a connect URL
if parsedURL.Host == "connect" || strings.TrimPrefix(parsedURL.Path, "/") == "connect" {
return true, nil
}
// Allow bare ollama:// or ollama:/// to open the app
if (parsedURL.Host == "" && parsedURL.Path == "") || parsedURL.Path == "/" {
return false, nil
}
return false, fmt.Errorf("unsupported ollama:// URL path: %s", urlSchemeRequest)
}
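The accepted forms, per the parser above (expected classifications, sketched as comments):

```go
// "ollama://"         -> isConnect=false, err=nil (just open the app)
// "ollama://connect"  -> isConnect=true,  err=nil (OAuth connect flow)
// "ollama:///connect" -> isConnect=true,  err=nil (path form)
// "ollama://settings" -> err != nil (unsupported ollama:// URL path)
```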
// handleURLSchemeInCurrentInstance processes URL scheme requests in the current instance
func handleURLSchemeInCurrentInstance(urlSchemeRequest string) {
isConnect, err := parseURLScheme(urlSchemeRequest)
if err != nil {
slog.Error("failed to parse URL scheme request", "url", urlSchemeRequest, "error", err)
return
}
if isConnect {
handleConnectURLScheme()
} else {
if wv.webview != nil {
showWindow(wv.webview.Window())
}
}
}

app/cmd/app/app_darwin.go Normal file

@@ -0,0 +1,269 @@
//go:build windows || darwin
package main
// #cgo CFLAGS: -x objective-c
// #cgo LDFLAGS: -framework Webkit -framework Cocoa -framework LocalAuthentication -framework ServiceManagement
// #include "app_darwin.h"
// #include "../../updater/updater_darwin.h"
// typedef const char cchar_t;
import "C"
import (
"log/slog"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"unsafe"
"github.com/ollama/ollama/app/updater"
"github.com/ollama/ollama/app/version"
)
var ollamaPath = func() string {
if updater.BundlePath != "" {
return filepath.Join(updater.BundlePath, "Contents", "Resources", "ollama")
}
pwd, err := os.Getwd()
if err != nil {
slog.Warn("failed to get pwd", "error", err)
return ""
}
return filepath.Join(pwd, "ollama")
}()
var (
isApp = updater.BundlePath != ""
appLogPath = filepath.Join(os.Getenv("HOME"), ".ollama", "logs", "app.log")
launchAgentPath = filepath.Join(os.Getenv("HOME"), "Library", "LaunchAgents", "com.ollama.ollama.plist")
)
// TODO(jmorganca): pre-create the window and pass
// it to the webview instead of using the internal one
//
//export StartUI
func StartUI(path *C.cchar_t) {
p := C.GoString(path)
wv.Run(p)
styleWindow(wv.webview.Window())
C.setWindowDelegate(wv.webview.Window())
}
//export ShowUI
func ShowUI() {
// If webview is already running, just show the window
if wv.IsRunning() && wv.webview != nil {
showWindow(wv.webview.Window())
} else {
root := C.CString("/")
defer C.free(unsafe.Pointer(root))
StartUI(root)
}
}
//export StopUI
func StopUI() {
wv.Terminate()
}
//export StartUpdate
func StartUpdate() {
if err := updater.DoUpgrade(true); err != nil {
slog.Error("upgrade failed", "error", err)
return
}
slog.Debug("launching new version...")
// TODO - consider a timer that aborts if this takes too long and we haven't been killed yet...
LaunchNewApp()
// not reached if upgrade works, the new app will kill this process
}
//export darwinStartHiddenTasks
func darwinStartHiddenTasks() {
startHiddenTasks()
}
func init() {
// Temporary code to mimic Squirrel ShipIt behavior
if len(os.Args) > 2 {
if os.Args[1] == "___launch___" {
path := strings.TrimPrefix(os.Args[2], "file://")
slog.Info("Ollama binary called as ShipIt - launching", "app", path)
appName := C.CString(path)
defer C.free(unsafe.Pointer(appName))
C.launchApp(appName)
slog.Info("other instance has been launched")
time.Sleep(5 * time.Second)
slog.Info("exiting with zero status")
os.Exit(0)
}
}
}
// maybeMoveAndRestart checks if we should relocate and returns
// MoveCompleted if we did, in which case the caller should exit immediately
func maybeMoveAndRestart() appMove {
if updater.BundlePath == "" {
// Typically developer mode with 'go run ./cmd/app'
return CannotMove
}
// Respect the user's intent if they chose "keep" vs. "replace" when dragging to Applications
if strings.HasPrefix(updater.BundlePath, strings.TrimSuffix(updater.SystemWidePath, filepath.Ext(updater.SystemWidePath))) {
return AlreadyMoved
}
// Ask to move to applications directory
status := (appMove)(C.askToMoveToApplications())
if status == MoveCompleted {
// Double check
if _, err := os.Stat(updater.SystemWidePath); err != nil {
slog.Warn("stat failure after move", "path", updater.SystemWidePath, "error", err)
return MoveError
}
}
return status
}
// handleExistingInstance handles existing instances on macOS
func handleExistingInstance(_ bool) {
C.killOtherInstances()
}
func installSymlink() {
if !isApp {
return
}
cliPath := C.CString(ollamaPath)
defer C.free(unsafe.Pointer(cliPath))
// Check the users path first
cmd, _ := exec.LookPath("ollama")
if cmd != "" {
resolved, err := os.Readlink(cmd)
if err == nil {
tmp, err := filepath.Abs(resolved)
if err == nil {
resolved = tmp
}
} else {
resolved = cmd
}
if resolved == ollamaPath {
slog.Info("ollama already in users PATH", "cli", cmd)
return
}
}
code := C.installSymlink(cliPath)
if code != 0 {
slog.Error("Failed to install symlink")
}
}
func UpdateAvailable(ver string) error {
slog.Debug("update detected, adjusting menu")
// TODO (jmorganca): find a better check for development mode than checking the bundle path
if updater.BundlePath != "" {
C.updateAvailable()
}
return nil
}
func osRun(_ func(), hasCompletedFirstRun, startHidden bool) {
registerLaunchAgent(hasCompletedFirstRun)
// Run the native macOS app
// Note: this will block until the app is closed
slog.Debug("starting native darwin event loop")
C.run(C._Bool(hasCompletedFirstRun), C._Bool(startHidden))
}
func quit() {
C.quit()
}
func LaunchNewApp() {
appName := C.CString(updater.BundlePath)
defer C.free(unsafe.Pointer(appName))
C.launchApp(appName)
}
// Send a request to the main app thread to load a UI page
func sendUIRequestMessage(path string) {
p := C.CString(path)
defer C.free(unsafe.Pointer(p))
C.uiRequest(p)
}
func registerLaunchAgent(hasCompletedFirstRun bool) {
// Remove any stale Login Item registrations
C.unregisterSelfFromLoginItem()
C.registerSelfAsLoginItem(C._Bool(hasCompletedFirstRun))
}
func logStartup() {
appPath := updater.BundlePath
if appPath == updater.SystemWidePath {
// Detect sandboxed scenario
exe, err := os.Executable()
if err == nil {
p := filepath.Dir(exe)
if filepath.Base(p) == "MacOS" {
p = filepath.Dir(filepath.Dir(p))
if p != appPath {
slog.Info("starting sandboxed Ollama", "app", appPath, "sandbox", p)
return
}
}
}
}
slog.Info("starting Ollama", "app", appPath, "version", version.Version, "OS", updater.UserAgentOS)
}
func hideWindow(ptr unsafe.Pointer) {
C.hideWindow(C.uintptr_t(uintptr(ptr)))
}
func showWindow(ptr unsafe.Pointer) {
C.showWindow(C.uintptr_t(uintptr(ptr)))
}
func styleWindow(ptr unsafe.Pointer) {
C.styleWindow(C.uintptr_t(uintptr(ptr)))
}
func runInBackground() {
cmd := exec.Command(filepath.Join(updater.BundlePath, "Contents", "MacOS", "Ollama"), "hidden")
if cmd != nil {
err := cmd.Run()
if err != nil {
slog.Error("failed to run Ollama", "bundlePath", updater.BundlePath, "error", err)
os.Exit(1)
}
} else {
slog.Error("failed to start Ollama in background", "bundlePath", updater.BundlePath)
os.Exit(1)
}
}
func drag(ptr unsafe.Pointer) {
C.drag(C.uintptr_t(uintptr(ptr)))
}
func doubleClick(ptr unsafe.Pointer) {
C.doubleClick(C.uintptr_t(uintptr(ptr)))
}
//export handleConnectURL
func handleConnectURL() {
handleConnectURLScheme()
}
// checkAndHandleExistingInstance is not needed on non-Windows platforms
func checkAndHandleExistingInstance(_ string) bool {
return false
}

app/cmd/app/app_darwin.h Normal file

@@ -0,0 +1,43 @@
#import <Cocoa/Cocoa.h>
#import <Security/Security.h>
@interface AppDelegate : NSObject <NSApplicationDelegate>
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification;
@end
enum AppMove
{
CannotMove,
UserDeclinedMove,
MoveCompleted,
AlreadyMoved,
LoginSession,
PermissionDenied,
MoveError,
};
void run(bool firstTimeRun, bool startHidden);
void killOtherInstances();
enum AppMove askToMoveToApplications();
int createSymlinkWithAuthorization();
int installSymlink(const char *cliPath);
extern void Restart();
// extern void Quit();
void StartUI(const char *path);
void ShowUI();
void StopUI();
void StartUpdate();
void darwinStartHiddenTasks();
void launchApp(const char *appPath);
void updateAvailable();
void quit();
void uiRequest(char *path);
void registerSelfAsLoginItem(bool firstTimeRun);
void unregisterSelfFromLoginItem();
void setWindowDelegate(void *window);
void showWindow(uintptr_t wndPtr);
void hideWindow(uintptr_t wndPtr);
void styleWindow(uintptr_t wndPtr);
void drag(uintptr_t wndPtr);
void doubleClick(uintptr_t wndPtr);
void handleConnectURL();

app/cmd/app/app_darwin.m Normal file (diff suppressed: file too large)

app/cmd/app/app_windows.go Normal file

@@ -0,0 +1,441 @@
//go:build windows || darwin
package main
import (
"errors"
"fmt"
"io"
"log"
"log/slog"
"os"
"os/exec"
"os/signal"
"path/filepath"
"runtime"
"strings"
"syscall"
"unsafe"
"github.com/ollama/ollama/app/updater"
"github.com/ollama/ollama/app/version"
"github.com/ollama/ollama/app/wintray"
"golang.org/x/sys/windows"
)
var (
u32 = windows.NewLazySystemDLL("User32.dll")
pBringWindowToTop = u32.NewProc("BringWindowToTop")
pShowWindow = u32.NewProc("ShowWindow")
pSendMessage = u32.NewProc("SendMessageA")
pGetSystemMetrics = u32.NewProc("GetSystemMetrics")
pGetWindowRect = u32.NewProc("GetWindowRect")
pSetWindowPos = u32.NewProc("SetWindowPos")
pSetForegroundWindow = u32.NewProc("SetForegroundWindow")
pSetActiveWindow = u32.NewProc("SetActiveWindow")
pIsIconic = u32.NewProc("IsIconic")
appPath = filepath.Join(os.Getenv("LOCALAPPDATA"), "Programs", "Ollama")
appLogPath = filepath.Join(os.Getenv("LOCALAPPDATA"), "Ollama", "app.log")
startupShortcut = filepath.Join(os.Getenv("APPDATA"), "Microsoft", "Windows", "Start Menu", "Programs", "Startup", "Ollama.lnk")
ollamaPath string
DesktopAppName = "ollama app.exe"
)
func init() {
// With alternate install location use executable location
exe, err := os.Executable()
if err != nil {
slog.Warn("error discovering executable directory", "error", err)
} else {
appPath = filepath.Dir(exe)
}
ollamaPath = filepath.Join(appPath, "ollama.exe")
// Handle developer mode (go run ./cmd/app)
if _, err := os.Stat(ollamaPath); err != nil {
pwd, err := os.Getwd()
if err != nil {
slog.Warn("missing ollama.exe and failed to get pwd", "error", err)
return
}
distAppPath := filepath.Join(pwd, "dist", "windows-"+runtime.GOARCH)
distOllamaPath := filepath.Join(distAppPath, "ollama.exe")
if _, err := os.Stat(distOllamaPath); err == nil {
slog.Info("detected developer mode")
appPath = distAppPath
ollamaPath = distOllamaPath
}
}
}
func maybeMoveAndRestart() appMove {
return 0
}
// handleExistingInstance checks for existing instances and optionally focuses them
func handleExistingInstance(startHidden bool) {
if wintray.CheckAndFocusExistingInstance(!startHidden) {
slog.Info("existing instance found, exiting")
os.Exit(0)
}
}
func installSymlink() {}
type appCallbacks struct {
t wintray.TrayCallbacks
shutdown func()
}
var app = &appCallbacks{}
func (ac *appCallbacks) UIRun(path string) {
wv.Run(path)
}
func (*appCallbacks) UIShow() {
if wv.webview != nil {
showWindow(wv.webview.Window())
} else {
wv.Run("/")
}
}
func (*appCallbacks) UITerminate() {
wv.Terminate()
}
func (*appCallbacks) UIRunning() bool {
return wv.IsRunning()
}
func (app *appCallbacks) Quit() {
app.t.Quit()
wv.Terminate()
}
// TODO - reconcile with above for consistency between mac/windows
func quit() {
wv.Terminate()
}
func (app *appCallbacks) DoUpdate() {
// Safeguard in case we have requests in flight that need to drain...
slog.Info("Waiting for server to shutdown")
app.shutdown()
if err := updater.DoUpgrade(true); err != nil {
slog.Warn(fmt.Sprintf("upgrade attempt failed: %s", err))
}
}
// HandleURLScheme implements the URLSchemeHandler interface
func (app *appCallbacks) HandleURLScheme(urlScheme string) {
handleURLSchemeRequest(urlScheme)
}
// handleURLSchemeRequest processes URL scheme requests from other instances
func handleURLSchemeRequest(urlScheme string) {
isConnect, err := parseURLScheme(urlScheme)
if err != nil {
slog.Error("failed to parse URL scheme request", "url", urlScheme, "error", err)
return
}
if isConnect {
handleConnectURLScheme()
} else {
if wv.webview != nil {
showWindow(wv.webview.Window())
}
}
}
func UpdateAvailable(ver string) error {
return app.t.UpdateAvailable(ver)
}
func osRun(shutdown func(), hasCompletedFirstRun, startHidden bool) {
var err error
app.shutdown = shutdown
app.t, err = wintray.NewTray(app)
if err != nil {
log.Fatalf("Failed to start: %s", err)
}
signals := make(chan os.Signal, 1)
signal.Notify(signals, syscall.SIGINT, syscall.SIGTERM)
// TODO - can this be generalized?
go func() {
<-signals
slog.Debug("shutting down due to signal")
app.t.Quit()
wv.Terminate()
}()
// On windows, we run the final tasks in the main thread
// before starting the tray event loop. These final tasks
// may trigger the UI, and must do that from the main thread.
if !startHidden {
// Determine if the process was started from a shortcut
// ~\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\Ollama
const STARTF_TITLEISLINKNAME = 0x00000800
var info windows.StartupInfo
if err := windows.GetStartupInfo(&info); err != nil {
slog.Debug("unable to retrieve startup info", "error", err)
} else if info.Flags&STARTF_TITLEISLINKNAME == STARTF_TITLEISLINKNAME {
linkPath := windows.UTF16PtrToString(info.Title)
if strings.Contains(linkPath, "Startup") {
startHidden = true
}
}
}
if startHidden {
startHiddenTasks()
} else {
ptr := wv.Run("/")
// Set the window icon using the tray icon
if ptr != nil {
iconHandle := app.t.GetIconHandle()
if iconHandle != 0 {
hwnd := uintptr(ptr)
const ICON_SMALL = 0
const ICON_BIG = 1
const WM_SETICON = 0x0080
pSendMessage.Call(hwnd, uintptr(WM_SETICON), uintptr(ICON_SMALL), uintptr(iconHandle))
pSendMessage.Call(hwnd, uintptr(WM_SETICON), uintptr(ICON_BIG), uintptr(iconHandle))
}
}
centerWindow(ptr)
}
if !hasCompletedFirstRun {
// Only create the login shortcut on first start
// so we can respect users deletion of the link
err = createLoginShortcut()
if err != nil {
slog.Warn("unable to create login shortcut", "error", err)
}
}
app.t.TrayRun() // This will block the main thread
}
func createLoginShortcut() error {
// The installer lays down a shortcut for us so we can copy it without
// having to resort to calling COM APIs to establish the shortcut
shortcutOrigin := filepath.Join(appPath, "lib", "Ollama.lnk")
_, err := os.Stat(startupShortcut)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
in, err := os.Open(shortcutOrigin)
if err != nil {
return fmt.Errorf("unable to open shortcut %s : %w", shortcutOrigin, err)
}
defer in.Close()
out, err := os.Create(startupShortcut)
if err != nil {
return fmt.Errorf("unable to open startup link %s : %w", startupShortcut, err)
}
defer out.Close()
_, err = io.Copy(out, in)
if err != nil {
return fmt.Errorf("unable to copy shortcut %s : %w", startupShortcut, err)
}
err = out.Sync()
if err != nil {
return fmt.Errorf("unable to sync shortcut %s : %w", startupShortcut, err)
}
slog.Info("Created Startup shortcut", "shortcut", startupShortcut)
} else {
slog.Warn("unexpected error looking up Startup shortcut", "error", err)
}
} else {
slog.Debug("Startup link already exists", "shortcut", startupShortcut)
}
return nil
}
// Send a request to the main app thread to load a UI page
func sendUIRequestMessage(path string) {
wintray.SendUIRequestMessage(path)
}
func LaunchNewApp() {
}
func logStartup() {
slog.Info("starting Ollama", "app", appPath, "version", version.Version, "OS", updater.UserAgentOS)
}
const (
SW_HIDE = 0 // Hides the window
SW_SHOW = 5 // Shows window in its current size/position
SW_SHOWNA = 8 // Shows without activating
SW_MINIMIZE = 6 // Minimizes the window
SW_RESTORE = 9 // Restores to previous size/position
SW_SHOWDEFAULT = 10 // Sets show state based on program state
SM_CXSCREEN = 0
SM_CYSCREEN = 1
HWND_TOP = 0
SWP_NOSIZE = 0x0001
SWP_NOMOVE = 0x0002
SWP_NOZORDER = 0x0004
SWP_SHOWWINDOW = 0x0040
// Menu constants
MF_STRING = 0x00000000
MF_SEPARATOR = 0x00000800
MF_GRAYED = 0x00000001
TPM_RETURNCMD = 0x0100
)
// POINT structure for cursor position
type POINT struct {
X int32
Y int32
}
// Rect structure for GetWindowRect
type Rect struct {
Left int32
Top int32
Right int32
Bottom int32
}
func centerWindow(ptr unsafe.Pointer) {
hwnd := uintptr(ptr)
if hwnd == 0 {
return
}
var rect Rect
pGetWindowRect.Call(hwnd, uintptr(unsafe.Pointer(&rect)))
screenWidth, _, _ := pGetSystemMetrics.Call(uintptr(SM_CXSCREEN))
screenHeight, _, _ := pGetSystemMetrics.Call(uintptr(SM_CYSCREEN))
windowWidth := rect.Right - rect.Left
windowHeight := rect.Bottom - rect.Top
x := (int32(screenWidth) - windowWidth) / 2
y := (int32(screenHeight) - windowHeight) / 2
// Ensure the window is not positioned off-screen
if x < 0 {
x = 0
}
if y < 0 {
y = 0
}
pSetWindowPos.Call(
hwnd,
uintptr(HWND_TOP),
uintptr(x),
uintptr(y),
uintptr(windowWidth), // Keep original width
uintptr(windowHeight), // Keep original height
uintptr(SWP_SHOWWINDOW),
)
}
func showWindow(ptr unsafe.Pointer) {
hwnd := uintptr(ptr)
if hwnd != 0 {
iconHandle := app.t.GetIconHandle()
if iconHandle != 0 {
const ICON_SMALL = 0
const ICON_BIG = 1
const WM_SETICON = 0x0080
pSendMessage.Call(hwnd, uintptr(WM_SETICON), uintptr(ICON_SMALL), uintptr(iconHandle))
pSendMessage.Call(hwnd, uintptr(WM_SETICON), uintptr(ICON_BIG), uintptr(iconHandle))
}
// Check if window is minimized
isMinimized, _, _ := pIsIconic.Call(hwnd)
if isMinimized != 0 {
// Restore the window if it's minimized
pShowWindow.Call(hwnd, uintptr(SW_RESTORE))
}
// Show the window
pShowWindow.Call(hwnd, uintptr(SW_SHOW))
// Bring window to top
pBringWindowToTop.Call(hwnd)
// Force window to foreground
pSetForegroundWindow.Call(hwnd)
// Make it the active window
pSetActiveWindow.Call(hwnd)
// Ensure window is positioned on top
pSetWindowPos.Call(
hwnd,
uintptr(HWND_TOP),
0, 0, 0, 0,
uintptr(SWP_NOSIZE|SWP_NOMOVE|SWP_SHOWWINDOW),
)
}
}
// HideWindow hides the application window
func hideWindow(ptr unsafe.Pointer) {
hwnd := uintptr(ptr)
if hwnd != 0 {
pShowWindow.Call(
hwnd,
uintptr(SW_HIDE),
)
}
}
func runInBackground() {
exe, err := os.Executable()
if err != nil {
slog.Error("failed to get executable path", "error", err)
os.Exit(1)
}
cmd := exec.Command(exe, "hidden")
if cmd != nil {
err = cmd.Run()
if err != nil {
slog.Error("failed to run Ollama", "exe", exe, "error", err)
os.Exit(1)
}
} else {
slog.Error("failed to start Ollama", "exe", exe)
os.Exit(1)
}
}
func drag(ptr unsafe.Pointer) {}
func doubleClick(ptr unsafe.Pointer) {}
// checkAndHandleExistingInstance checks if another instance is running and sends the URL to it
func checkAndHandleExistingInstance(urlSchemeRequest string) bool {
if urlSchemeRequest == "" {
return false
}
// Try to send URL to existing instance using wintray messaging
if wintray.CheckAndSendToExistingInstance(urlSchemeRequest) {
os.Exit(0)
return true
}
// No existing instance, we'll handle it ourselves
return false
}

app/cmd/app/menu.h Normal file

@@ -0,0 +1,27 @@
#ifndef MENU_H
#define MENU_H
#ifdef __cplusplus
extern "C" {
#endif
typedef struct
{
char *label;
int enabled;
int separator;
} menuItem;
// TODO (jmorganca): these need to be forward declared in the webview.h file
// for now but ideally they should be in this header file on windows too
#ifndef WIN32
int menu_get_item_count();
void *menu_get_items();
void menu_handle_selection(char *item);
#endif
#ifdef __cplusplus
}
#endif
#endif

app/cmd/app/webview.go Normal file

@@ -0,0 +1,528 @@
//go:build windows || darwin
package main
// #include "menu.h"
import "C"
import (
"encoding/base64"
"encoding/json"
"fmt"
"log/slog"
"net/http"
"os"
"path/filepath"
"runtime"
"strings"
"sync"
"time"
"unsafe"
"github.com/ollama/ollama/app/dialog"
"github.com/ollama/ollama/app/store"
"github.com/ollama/ollama/app/webview"
)
type Webview struct {
port int
token string
webview webview.WebView
mutex sync.Mutex
Store *store.Store
}
// Run initializes the webview and starts its event loop.
// Note: this must be called from the primary app thread
// This returns the OS native window handle to the caller
func (w *Webview) Run(path string) unsafe.Pointer {
var url string
if devMode {
// In development mode, use the local dev server
url = fmt.Sprintf("http://localhost:5173%s", path)
} else {
url = fmt.Sprintf("http://127.0.0.1:%d%s", w.port, path)
}
w.mutex.Lock()
defer w.mutex.Unlock()
if w.webview == nil {
// Note: turning on debug on macos throws errors but is marginally functional for debugging
// TODO (jmorganca): we should pre-create the window and then provide it here to
// webview so we can hide it from the start and make other modifications
wv := webview.New(debug)
// start the window hidden
hideWindow(wv.Window())
wv.SetTitle("Ollama")
// TODO (jmorganca): this isn't working yet since it needs to be set
// on the first page load, ideally in an interstitial page like `/token`
// that exists only to set the cookie and redirect to /
// wv.Init(fmt.Sprintf(`document.cookie = "token=%s; path=/"`, w.token))
init := `
// Disable reload
document.addEventListener('keydown', function(e) {
if ((e.ctrlKey || e.metaKey) && e.key === 'r') {
e.preventDefault();
return false;
}
});
// Prevent back/forward navigation
window.addEventListener('popstate', function(e) {
e.preventDefault();
history.pushState(null, '', window.location.pathname);
return false;
});
// Clear history on load
window.addEventListener('load', function() {
history.pushState(null, '', window.location.pathname);
window.history.replaceState(null, '', window.location.pathname);
});
// Set token cookie
document.cookie = "token=` + w.token + `; path=/";
`
// Windows-specific scrollbar styling
if runtime.GOOS == "windows" {
init += `
// Fix scrollbar styling for Edge WebView2 on Windows only
function updateScrollbarStyles() {
const isDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
const existingStyle = document.getElementById('scrollbar-style');
if (existingStyle) existingStyle.remove();
const style = document.createElement('style');
style.id = 'scrollbar-style';
if (isDark) {
style.textContent = ` + "`" + `
::-webkit-scrollbar { width: 6px !important; height: 6px !important; }
::-webkit-scrollbar-track { background: #1a1a1a !important; }
::-webkit-scrollbar-thumb { background: #404040 !important; border-radius: 6px !important; }
::-webkit-scrollbar-thumb:hover { background: #505050 !important; }
::-webkit-scrollbar-corner { background: #1a1a1a !important; }
::-webkit-scrollbar-button {
background: transparent !important;
border: none !important;
width: 0px !important;
height: 0px !important;
margin: 0 !important;
padding: 0 !important;
}
::-webkit-scrollbar-button:vertical:start:decrement {
background: transparent !important;
height: 0px !important;
}
::-webkit-scrollbar-button:vertical:end:increment {
background: transparent !important;
height: 0px !important;
}
::-webkit-scrollbar-button:horizontal:start:decrement {
background: transparent !important;
width: 0px !important;
}
::-webkit-scrollbar-button:horizontal:end:increment {
background: transparent !important;
width: 0px !important;
}
` + "`" + `;
} else {
style.textContent = ` + "`" + `
::-webkit-scrollbar { width: 6px !important; height: 6px !important; }
::-webkit-scrollbar-track { background: #f0f0f0 !important; }
::-webkit-scrollbar-thumb { background: #c0c0c0 !important; border-radius: 6px !important; }
::-webkit-scrollbar-thumb:hover { background: #a0a0a0 !important; }
::-webkit-scrollbar-corner { background: #f0f0f0 !important; }
::-webkit-scrollbar-button {
background: transparent !important;
border: none !important;
width: 0px !important;
height: 0px !important;
margin: 0 !important;
padding: 0 !important;
}
::-webkit-scrollbar-button:vertical:start:decrement {
background: transparent !important;
height: 0px !important;
}
::-webkit-scrollbar-button:vertical:end:increment {
background: transparent !important;
height: 0px !important;
}
::-webkit-scrollbar-button:horizontal:start:decrement {
background: transparent !important;
width: 0px !important;
}
::-webkit-scrollbar-button:horizontal:end:increment {
background: transparent !important;
width: 0px !important;
}
` + "`" + `;
}
document.head.appendChild(style);
}
window.addEventListener('load', updateScrollbarStyles);
window.matchMedia('(prefers-color-scheme: dark)').addEventListener('change', updateScrollbarStyles);
`
}
// on windows make ctrl+n open new chat
// TODO (jmorganca): later we should use proper accelerators
// once we introduce a native menu for the window
// this is only used on windows since macOS uses the proper accelerators
if runtime.GOOS == "windows" {
init += `
document.addEventListener('keydown', function(e) {
if ((e.ctrlKey || e.metaKey) && e.key === 'n') {
e.preventDefault();
// Use the existing navigation method
history.pushState({}, '', '/c/new');
window.dispatchEvent(new PopStateEvent('popstate'));
return false;
}
});
`
}
init += `
window.OLLAMA_WEBSEARCH = true;
`
wv.Init(init)
// Add keyboard handler for zoom
wv.Init(`
window.addEventListener('keydown', function(e) {
// CMD/Ctrl + Plus/Equals (zoom in)
if ((e.metaKey || e.ctrlKey) && (e.key === '+' || e.key === '=')) {
e.preventDefault();
window.zoomIn && window.zoomIn();
return false;
}
// CMD/Ctrl + Minus (zoom out)
if ((e.metaKey || e.ctrlKey) && e.key === '-') {
e.preventDefault();
window.zoomOut && window.zoomOut();
return false;
}
// CMD/Ctrl + 0 (reset zoom)
if ((e.metaKey || e.ctrlKey) && e.key === '0') {
e.preventDefault();
window.zoomReset && window.zoomReset();
return false;
}
}, true);
`)
wv.Bind("zoomIn", func() {
current := wv.GetZoom()
wv.SetZoom(current + 0.1)
})
wv.Bind("zoomOut", func() {
current := wv.GetZoom()
wv.SetZoom(current - 0.1)
})
wv.Bind("zoomReset", func() {
wv.SetZoom(1.0)
})
wv.Bind("ready", func() {
showWindow(wv.Window())
})
wv.Bind("close", func() {
hideWindow(wv.Window())
})
// Webviews do not allow access to the file system by default, so we need to
// bind file system operations here
wv.Bind("selectModelsDirectory", func() {
go func() {
// Helper function to call the JavaScript callback with data or null
callCallback := func(data interface{}) {
dataJSON, _ := json.Marshal(data)
wv.Dispatch(func() {
wv.Eval(fmt.Sprintf("window.__selectModelsDirectoryCallback && window.__selectModelsDirectoryCallback(%s)", dataJSON))
})
}
directory, err := dialog.Directory().Title("Select Model Directory").ShowHidden(true).Browse()
if err != nil {
slog.Debug("Directory selection cancelled or failed", "error", err)
callCallback(nil)
return
}
slog.Debug("Directory selected", "path", directory)
callCallback(directory)
}()
})
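// A sketch of the renderer-side contract assumed by this binding
// (hypothetical JavaScript, shown as a comment): the page installs the
// callback before invoking the bound function and receives a path or null.
//
//   window.__selectModelsDirectoryCallback = (dir) => {
//     if (dir) console.log("models directory:", dir);
//   };
//   window.selectModelsDirectory();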
// Bind selectFiles function for selecting multiple files at once
wv.Bind("selectFiles", func() {
go func() {
// Helper function to call the JavaScript callback with data or null
callCallback := func(data interface{}) {
dataJSON, _ := json.Marshal(data)
wv.Dispatch(func() {
wv.Eval(fmt.Sprintf("window.__selectFilesCallback && window.__selectFilesCallback(%s)", dataJSON))
})
}
// Define allowed extensions for native dialog filtering
textExts := []string{
"pdf", "docx", "txt", "md", "csv", "json", "xml", "html", "htm",
"js", "jsx", "ts", "tsx", "py", "java", "cpp", "c", "cc", "h", "cs", "php", "rb",
"go", "rs", "swift", "kt", "scala", "sh", "bat", "yaml", "yml", "toml", "ini",
"cfg", "conf", "log", "rtf",
}
imageExts := []string{"png", "jpg", "jpeg", "webp"}
allowedExts := append(textExts, imageExts...)
// Use native multiple file selection with extension filtering
filenames, err := dialog.File().
Filter("Supported Files", allowedExts...).
Title("Select Files").
LoadMultiple()
if err != nil {
slog.Debug("Multiple file selection cancelled or failed", "error", err)
callCallback(nil)
return
}
if len(filenames) == 0 {
callCallback(nil)
return
}
var files []map[string]string
maxFileSize := int64(10 * 1024 * 1024) // 10MB
for _, filename := range filenames {
// Check file extension (double-check after native dialog filtering)
ext := strings.ToLower(strings.TrimPrefix(filepath.Ext(filename), "."))
validExt := false
for _, allowedExt := range allowedExts {
if ext == allowedExt {
validExt = true
break
}
}
if !validExt {
slog.Warn("file extension not allowed, skipping", "filename", filepath.Base(filename), "extension", ext)
continue
}
// Check file size before reading (pre-filter large files)
fileStat, err := os.Stat(filename)
if err != nil {
slog.Error("failed to get file info", "error", err, "filename", filename)
continue
}
if fileStat.Size() > maxFileSize {
slog.Warn("file too large, skipping", "filename", filepath.Base(filename), "size", fileStat.Size())
continue
}
fileBytes, err := os.ReadFile(filename)
if err != nil {
slog.Error("failed to read file", "error", err, "filename", filename)
continue
}
mimeType := http.DetectContentType(fileBytes)
dataURL := fmt.Sprintf("data:%s;base64,%s", mimeType, base64.StdEncoding.EncodeToString(fileBytes))
fileResult := map[string]string{
"filename": filepath.Base(filename),
"path": filename,
"dataURL": dataURL,
}
files = append(files, fileResult)
}
if len(files) == 0 {
callCallback(nil)
} else {
callCallback(files)
}
}()
})
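// Each entry returned to the renderer is a {filename, path, dataURL} object;
// dataURL embeds the file contents as "data:<mime>;base64,<payload>" so the
// page never needs direct filesystem access.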
wv.Bind("drag", func() {
wv.Dispatch(func() {
drag(wv.Window())
})
})
wv.Bind("doubleClick", func() {
wv.Dispatch(func() {
doubleClick(wv.Window())
})
})
// Add binding for working directory selection
wv.Bind("selectWorkingDirectory", func() {
go func() {
// Helper function to call the JavaScript callback with data or null
callCallback := func(data interface{}) {
dataJSON, _ := json.Marshal(data)
wv.Dispatch(func() {
wv.Eval(fmt.Sprintf("window.__selectWorkingDirectoryCallback && window.__selectWorkingDirectoryCallback(%s)", dataJSON))
})
}
directory, err := dialog.Directory().Title("Select Working Directory").ShowHidden(true).Browse()
if err != nil {
slog.Debug("Directory selection cancelled or failed", "error", err)
callCallback(nil)
return
}
slog.Debug("Directory selected", "path", directory)
callCallback(directory)
}()
})
wv.Bind("setContextMenuItems", func(items []map[string]interface{}) error {
menuMutex.Lock()
defer menuMutex.Unlock()
if len(menuItems) > 0 {
pinner.Unpin()
}
menuItems = nil
for _, item := range items {
menuItem := C.menuItem{
label: C.CString(item["label"].(string)),
enabled: 0,
separator: 0,
}
if item["enabled"] != nil {
menuItem.enabled = 1
}
if item["separator"] != nil {
menuItem.separator = 1
}
menuItems = append(menuItems, menuItem)
}
return nil
})
// Debounce resize events
var resizeTimer *time.Timer
var resizeMutex sync.Mutex
wv.Bind("resize", func(width, height int) {
if w.Store != nil {
resizeMutex.Lock()
if resizeTimer != nil {
resizeTimer.Stop()
}
resizeTimer = time.AfterFunc(100*time.Millisecond, func() {
err := w.Store.SetWindowSize(width, height)
if err != nil {
slog.Error("failed to set window size", "error", err)
}
})
resizeMutex.Unlock()
}
})
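// The 100 ms timer above debounces resize events: each resize restarts the
// countdown, so the size is persisted once after resizing settles rather
// than on every intermediate event.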
// On Darwin, we can't have 2 threads both running global event loops
// but on Windows, the event loops are tied to the window, so we're
// able to run in both the tray and webview
if runtime.GOOS != "darwin" {
slog.Debug("starting webview event loop")
go func() {
wv.Run()
slog.Debug("webview event loop exited")
}()
}
if w.Store != nil {
width, height, err := w.Store.WindowSize()
if err != nil {
slog.Error("failed to get window size", "error", err)
}
if width > 0 && height > 0 {
wv.SetSize(width, height, webview.HintNone)
} else {
wv.SetSize(800, 600, webview.HintNone)
}
}
wv.SetSize(800, 600, webview.HintMin)
w.webview = wv
w.webview.Navigate(url)
} else {
w.webview.Eval(fmt.Sprintf(`
history.pushState({}, '', '%s');
`, path))
showWindow(w.webview.Window())
}
return w.webview.Window()
}
func (w *Webview) Terminate() {
w.mutex.Lock()
if w.webview == nil {
w.mutex.Unlock()
return
}
wv := w.webview
w.webview = nil
w.mutex.Unlock()
wv.Terminate()
wv.Destroy()
}
func (w *Webview) IsRunning() bool {
w.mutex.Lock()
defer w.mutex.Unlock()
return w.webview != nil
}
var (
menuItems []C.menuItem
menuMutex sync.RWMutex
pinner runtime.Pinner
)
//export menu_get_item_count
func menu_get_item_count() C.int {
menuMutex.RLock()
defer menuMutex.RUnlock()
return C.int(len(menuItems))
}
//export menu_get_items
func menu_get_items() unsafe.Pointer {
menuMutex.RLock()
defer menuMutex.RUnlock()
if len(menuItems) == 0 {
return nil
}
// Return pointer to the slice data
pinner.Pin(&menuItems[0])
return unsafe.Pointer(&menuItems[0])
}
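// The Pin call above keeps the backing array of menuItems at a stable
// address while C reads it; setContextMenuItems calls pinner.Unpin before
// rebuilding the slice, releasing the previous pin.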
//export menu_handle_selection
func menu_handle_selection(item *C.char) {
wv.webview.Eval(fmt.Sprintf("window.handleContextMenuResult('%s')", C.GoString(item)))
}
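// A minimal sketch of how this type might be driven from the app's primary
// thread (launchUI and its arguments are illustrative, not part of this
// diff; Run must be called on the primary thread and returns the native
// window handle consumed by showWindow/hideWindow):
//
//	func launchUI(s *store.Store, port int, token string) unsafe.Pointer {
//		w := &Webview{port: port, token: token, Store: s}
//		return w.Run("/")
//	}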


@@ -0,0 +1,40 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundleDevelopmentRegion</key>
<string>English</string>
<key>CFBundleExecutable</key>
<string>Squirrel</string>
<key>CFBundleIconFile</key>
<string/>
<key>CFBundleIdentifier</key>
<string>com.github.Squirrel</string>
<key>CFBundleInfoDictionaryVersion</key>
<string>6.0</string>
<key>CFBundleName</key>
<string>Squirrel</string>
<key>CFBundlePackageType</key>
<string>FMWK</string>
<key>CFBundleShortVersionString</key>
<string>1.0</string>
<key>CFBundleSignature</key>
<string>????</string>
<key>CFBundleVersion</key>
<string>1</string>
<key>DTCompiler</key>
<string>com.apple.compilers.llvm.clang.1_0</string>
<key>DTSDKBuild</key>
<string>22E245</string>
<key>DTSDKName</key>
<string>macosx13.3</string>
<key>DTXcode</key>
<string>1431</string>
<key>DTXcodeBuild</key>
<string>14E300c</string>
<key>NSHumanReadableCopyright</key>
<string>Copyright © 2013 GitHub. All rights reserved.</string>
<key>NSPrincipalClass</key>
<string/>
</dict>
</plist>


@@ -0,0 +1,51 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundleDisplayName</key>
<string>Ollama</string>
<key>CFBundleExecutable</key>
<string>Ollama</string>
<key>CFBundleIconFile</key>
<string>icon.icns</string>
<key>CFBundleIdentifier</key>
<string>com.electron.ollama</string>
<key>CFBundleInfoDictionaryVersion</key>
<string>6.0</string>
<key>CFBundleName</key>
<string>Ollama</string>
<key>CFBundlePackageType</key>
<string>APPL</string>
<key>CFBundleShortVersionString</key>
<string>0.0.0</string>
<key>CFBundleVersion</key>
<string>0.0.0</string>
<key>DTCompiler</key>
<string>com.apple.compilers.llvm.clang.1_0</string>
<key>DTSDKBuild</key>
<string>22E245</string>
<key>DTSDKName</key>
<string>macosx14.0</string>
<key>DTXcode</key>
<string>1431</string>
<key>DTXcodeBuild</key>
<string>14E300c</string>
<key>LSApplicationCategoryType</key>
<string>public.app-category.developer-tools</string>
<key>LSMinimumSystemVersion</key>
<string>14.0</string>
<key>LSUIElement</key>
<true/>
<key>CFBundleURLTypes</key>
<array>
<dict>
<key>CFBundleURLName</key>
<string>Ollama URL</string>
<key>CFBundleURLSchemes</key>
<array>
<string>ollama</string>
</array>
</dict>
</array>
</dict>
</plist>


@@ -0,0 +1,25 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.ollama.ollama</string>
<key>BundleProgram</key>
<string>Contents/Frameworks/Squirrel.framework/Versions/A/Squirrel</string>
<key>ProgramArguments</key>
<array>
<string>Contents/Frameworks/Squirrel.framework/Versions/A/Squirrel</string>
<string>background</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>LimitLoadToSessionType</key>
<string>Aqua</string>
<key>POSIXSpawnType</key>
<string>Interactive</string>
<key>LSUIElement</key>
<true/>
<key>LSBackgroundOnly</key>
<false/>
</dict>
</plist>

(Nine binary files added: eight small image assets, 363-771 B each; previews not shown.)

15
app/dialog/LICENSE Normal file

@@ -0,0 +1,15 @@
ISC License
Copyright (c) 2018, the dialog authors.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

43
app/dialog/cocoa/dlg.h Normal file

@@ -0,0 +1,43 @@
#include <objc/NSObjCRuntime.h>
typedef enum {
MSG_YESNO,
MSG_ERROR,
MSG_INFO,
} AlertStyle;
typedef struct {
char* msg;
char* title;
AlertStyle style;
} AlertDlgParams;
#define LOADDLG 0
#define SAVEDLG 1
#define DIRDLG 2 // browse for directory
typedef struct {
int mode; /* which dialog style to invoke (see earlier defines) */
char* buf; /* buffer to store selected file */
int nbuf; /* number of bytes allocated at buf */
char* title; /* title for dialog box (can be nil) */
void** exts; /* list of valid extensions (each element's actual type is NSString*) */
int numext; /* number of items in exts */
int relaxext; /* allow other extensions? */
char* startDir; /* directory to start in (can be nil) */
char* filename; /* default filename for dialog box (can be nil) */
int showHidden; /* show hidden files? */
int allowMultiple; /* allow multiple file selection? */
} FileDlgParams;
typedef enum {
DLG_OK,
DLG_CANCEL,
DLG_URLFAIL,
} DlgResult;
DlgResult alertDlg(AlertDlgParams*);
DlgResult fileDlg(FileDlgParams*);
void* NSStr(void* buf, int len);
void NSRelease(void* obj);

208
app/dialog/cocoa/dlg.m Normal file

@@ -0,0 +1,208 @@
#import <Cocoa/Cocoa.h>
#include "dlg.h"
#include <string.h>
#include <sys/syslimits.h>
// Import UniformTypeIdentifiers for macOS 11+
#if __MAC_OS_X_VERSION_MAX_ALLOWED >= 110000
#import <UniformTypeIdentifiers/UniformTypeIdentifiers.h>
#endif
void* NSStr(void* buf, int len) {
return (void*)[[NSString alloc] initWithBytes:buf length:len encoding:NSUTF8StringEncoding];
}
void checkActivationPolicy() {
NSApplicationActivationPolicy policy = [NSApp activationPolicy];
// With a prohibited activation policy, NSApp will not show the panel at all.
// This usually means we are not running in a GUI app (which would set the
// policy itself) but in a terminal app; switching to the accessory policy
// allows dialogs to show.
if (policy == NSApplicationActivationPolicyProhibited) {
[NSApp setActivationPolicy:NSApplicationActivationPolicyAccessory];
}
}
void NSRelease(void* obj) {
[(NSObject*)obj release];
}
@interface AlertDlg : NSObject {
AlertDlgParams* params;
DlgResult result;
}
+ (AlertDlg*)init:(AlertDlgParams*)params;
- (DlgResult)run;
@end
DlgResult alertDlg(AlertDlgParams* params) {
return [[AlertDlg init:params] run];
}
@implementation AlertDlg
+ (AlertDlg*)init:(AlertDlgParams*)params {
AlertDlg* d = [AlertDlg alloc];
d->params = params;
return d;
}
- (DlgResult)run {
if(![NSThread isMainThread]) {
[self performSelectorOnMainThread:@selector(run) withObject:nil waitUntilDone:YES];
return self->result;
}
NSAlert* alert = [[NSAlert alloc] init];
if(self->params->title != nil) {
[[alert window] setTitle:[[NSString alloc] initWithUTF8String:self->params->title]];
}
[alert setMessageText:[[NSString alloc] initWithUTF8String:self->params->msg]];
switch (self->params->style) {
case MSG_YESNO:
[alert addButtonWithTitle:@"Yes"];
[alert addButtonWithTitle:@"No"];
break;
case MSG_ERROR:
[alert setIcon:[NSImage imageNamed:NSImageNameCaution]];
[alert addButtonWithTitle:@"OK"];
break;
case MSG_INFO:
[alert setIcon:[NSImage imageNamed:NSImageNameInfo]];
[alert addButtonWithTitle:@"OK"];
break;
}
checkActivationPolicy();
self->result = [alert runModal] == NSAlertFirstButtonReturn ? DLG_OK : DLG_CANCEL;
return self->result;
}
@end
@interface FileDlg : NSObject {
FileDlgParams* params;
DlgResult result;
}
+ (FileDlg*)init:(FileDlgParams*)params;
- (DlgResult)run;
@end
DlgResult fileDlg(FileDlgParams* params) {
return [[FileDlg init:params] run];
}
@implementation FileDlg
+ (FileDlg*)init:(FileDlgParams*)params {
FileDlg* d = [FileDlg alloc];
d->params = params;
return d;
}
- (DlgResult)run {
if(![NSThread isMainThread]) {
[self performSelectorOnMainThread:@selector(run) withObject:nil waitUntilDone:YES];
} else if(self->params->mode == SAVEDLG) {
self->result = [self save];
} else {
self->result = [self load];
}
return self->result;
}
- (NSInteger)runPanel:(NSSavePanel*)panel {
[panel setFloatingPanel:YES];
[panel setShowsHiddenFiles:self->params->showHidden ? YES : NO];
[panel setCanCreateDirectories:YES];
if(self->params->title != nil) {
[panel setTitle:[[NSString alloc] initWithUTF8String:self->params->title]];
}
// Use modern allowedContentTypes API for better file type support (especially video files)
if(self->params->numext > 0) {
NSMutableArray *utTypes = [NSMutableArray arrayWithCapacity:self->params->numext];
NSString** exts = (NSString**)self->params->exts;
for(int i = 0; i < self->params->numext; i++) {
UTType *type = [UTType typeWithFilenameExtension:exts[i]];
if(type) {
[utTypes addObject:type];
}
}
if([utTypes count] > 0) {
[panel setAllowedContentTypes:utTypes];
}
}
if(self->params->relaxext) {
[panel setAllowsOtherFileTypes:YES];
}
if(self->params->startDir) {
[panel setDirectoryURL:[NSURL URLWithString:[[NSString alloc] initWithUTF8String:self->params->startDir]]];
}
if(self->params->filename != nil) {
[panel setNameFieldStringValue:[[NSString alloc] initWithUTF8String:self->params->filename]];
}
checkActivationPolicy();
return [panel runModal];
}
- (DlgResult)save {
NSSavePanel* panel = [NSSavePanel savePanel];
if(![self runPanel:panel]) {
return DLG_CANCEL;
} else if(![[panel URL] getFileSystemRepresentation:self->params->buf maxLength:self->params->nbuf]) {
return DLG_URLFAIL;
}
return DLG_OK;
}
- (DlgResult)load {
NSOpenPanel* panel = [NSOpenPanel openPanel];
if(self->params->mode == DIRDLG) {
[panel setCanChooseDirectories:YES];
[panel setCanChooseFiles:NO];
}
if(self->params->allowMultiple) {
[panel setAllowsMultipleSelection:YES];
}
if(![self runPanel:panel]) {
return DLG_CANCEL;
}
NSArray* urls = [panel URLs];
if(self->params->allowMultiple && [urls count] >= 1) {
// For multiple files, we need to return all paths separated by null bytes
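// The layout written below is "path1\0path2\0...pathN\0\0": a NUL after each
// path plus an extra NUL closing the list, which the Go caller splits on.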
char* bufPtr = self->params->buf;
int remainingBuf = self->params->nbuf;
// Calculate total required buffer size first
int totalSize = 0;
for(NSURL* url in urls) {
char tempBuf[PATH_MAX];
if(![url getFileSystemRepresentation:tempBuf maxLength:PATH_MAX]) {
return DLG_URLFAIL;
}
totalSize += strlen(tempBuf) + 1; // +1 for null terminator
}
totalSize += 1; // Final null terminator
if(totalSize > self->params->nbuf) {
// Not enough buffer space
return DLG_URLFAIL;
}
// Now actually copy the paths (we know we have space)
bufPtr = self->params->buf;
for(NSURL* url in urls) {
char tempBuf[PATH_MAX];
[url getFileSystemRepresentation:tempBuf maxLength:PATH_MAX];
int pathLen = strlen(tempBuf);
strcpy(bufPtr, tempBuf);
bufPtr += pathLen + 1;
}
*bufPtr = '\0'; // Final null terminator
}
return DLG_OK;
}
@end


@@ -0,0 +1,183 @@
package cocoa
// #cgo darwin LDFLAGS: -framework Cocoa -framework UniformTypeIdentifiers
// #include <stdlib.h>
// #include <sys/syslimits.h>
// #include "dlg.h"
import "C"
import (
"bytes"
"errors"
"unsafe"
)
type AlertParams struct {
p C.AlertDlgParams
}
func mkAlertParams(msg, title string, style C.AlertStyle) *AlertParams {
a := AlertParams{C.AlertDlgParams{msg: C.CString(msg), style: style}}
if title != "" {
a.p.title = C.CString(title)
}
return &a
}
func (a *AlertParams) run() C.DlgResult {
return C.alertDlg(&a.p)
}
func (a *AlertParams) free() {
C.free(unsafe.Pointer(a.p.msg))
if a.p.title != nil {
C.free(unsafe.Pointer(a.p.title))
}
}
func nsStr(s string) unsafe.Pointer {
return C.NSStr(unsafe.Pointer(&[]byte(s)[0]), C.int(len(s)))
}
func YesNoDlg(msg, title string) bool {
a := mkAlertParams(msg, title, C.MSG_YESNO)
defer a.free()
return a.run() == C.DLG_OK
}
func InfoDlg(msg, title string) {
a := mkAlertParams(msg, title, C.MSG_INFO)
defer a.free()
a.run()
}
func ErrorDlg(msg, title string) {
a := mkAlertParams(msg, title, C.MSG_ERROR)
defer a.free()
a.run()
}
const (
BUFSIZE = C.PATH_MAX
MULTI_FILE_BUF_SIZE = 32768
)
// MultiFileDlg opens a file dialog that allows multiple file selection
func MultiFileDlg(title string, exts []string, relaxExt bool, startDir string, showHidden bool) ([]string, error) {
return fileDlgWithOptions(C.LOADDLG, title, exts, relaxExt, startDir, "", showHidden, true)
}
// FileDlg opens a file dialog for single file selection (kept for compatibility)
func FileDlg(save bool, title string, exts []string, relaxExt bool, startDir string, filename string, showHidden bool) (string, error) {
mode := C.LOADDLG
if save {
mode = C.SAVEDLG
}
files, err := fileDlgWithOptions(mode, title, exts, relaxExt, startDir, filename, showHidden, false)
if err != nil {
return "", err
}
if len(files) == 0 {
return "", nil
}
return files[0], nil
}
func DirDlg(title string, startDir string, showHidden bool) (string, error) {
files, err := fileDlgWithOptions(C.DIRDLG, title, nil, false, startDir, "", showHidden, false)
if err != nil {
return "", err
}
if len(files) == 0 {
return "", nil
}
return files[0], nil
}
// fileDlgWithOptions is the unified file dialog function that handles both single and multiple selection
func fileDlgWithOptions(mode int, title string, exts []string, relaxExt bool, startDir, filename string, showHidden, allowMultiple bool) ([]string, error) {
// Use larger buffer for multiple files, smaller for single
bufSize := BUFSIZE
if allowMultiple {
bufSize = MULTI_FILE_BUF_SIZE
}
p := C.FileDlgParams{
mode: C.int(mode),
nbuf: C.int(bufSize),
}
if allowMultiple {
p.allowMultiple = C.int(1) // Enable multiple selection //nolint:structcheck
}
if showHidden {
p.showHidden = 1
}
p.buf = (*C.char)(C.malloc(C.size_t(bufSize)))
defer C.free(unsafe.Pointer(p.buf))
buf := (*(*[MULTI_FILE_BUF_SIZE]byte)(unsafe.Pointer(p.buf)))[:bufSize]
if title != "" {
p.title = C.CString(title)
defer C.free(unsafe.Pointer(p.title))
}
if startDir != "" {
p.startDir = C.CString(startDir)
defer C.free(unsafe.Pointer(p.startDir))
}
if filename != "" {
p.filename = C.CString(filename)
defer C.free(unsafe.Pointer(p.filename))
}
if len(exts) > 0 {
if len(exts) > 999 {
panic("more than 999 extensions not supported")
}
ptrSize := int(unsafe.Sizeof(&title))
p.exts = (*unsafe.Pointer)(C.malloc(C.size_t(ptrSize * len(exts))))
defer C.free(unsafe.Pointer(p.exts))
cext := (*(*[999]unsafe.Pointer)(unsafe.Pointer(p.exts)))[:]
for i, ext := range exts {
cext[i] = nsStr(ext)
defer C.NSRelease(cext[i])
}
p.numext = C.int(len(exts))
if relaxExt {
p.relaxext = 1
}
}
// Execute dialog and parse results
switch C.fileDlg(&p) {
case C.DLG_OK:
if allowMultiple {
// Parse multiple null-terminated strings from buffer
var files []string
start := 0
for i := range len(buf) - 1 {
if buf[i] == 0 {
if i > start {
files = append(files, string(buf[start:i]))
}
start = i + 1
// Check for double null (end of list)
if i+1 < len(buf) && buf[i+1] == 0 {
break
}
}
}
return files, nil
} else {
// Single file - return as array for consistency
filename := string(buf[:bytes.Index(buf, []byte{0})])
return []string{filename}, nil
}
case C.DLG_CANCEL:
return nil, nil
case C.DLG_URLFAIL:
return nil, errors.New("failed to get file-system representation for selected URL")
}
panic("unhandled case")
}

182
app/dialog/dlgs.go Normal file

@@ -0,0 +1,182 @@
//go:build windows || darwin
// Package dialog provides a simple cross-platform common dialog API.
// E.g. to prompt the user with a yes/no dialog:
//
// if dialog.Message("%s", "Do you want to continue?").YesNo() {
// // user pressed Yes
// }
//
// The general usage pattern is to call one of the toplevel *Dlg functions
// which return a *Builder structure. From here you can optionally call
// configuration functions (eg. Title) to customise the dialog, before
// using a launcher function to run the dialog.
package dialog
import (
"errors"
"fmt"
)
// ErrCancelled is an error returned when a user cancels/closes a dialog.
var ErrCancelled = errors.New("Cancelled")
// Cancelled refers to ErrCancelled.
// Deprecated: Use ErrCancelled instead.
var Cancelled = ErrCancelled
// Dlg is the common type for dialogs.
type Dlg struct {
Title string
}
// MsgBuilder is used for creating message boxes.
type MsgBuilder struct {
Dlg
Msg string
}
// Message initialises a MsgBuilder with the provided message.
func Message(format string, args ...interface{}) *MsgBuilder {
return &MsgBuilder{Msg: fmt.Sprintf(format, args...)}
}
// Title specifies what the title of the message dialog will be.
func (b *MsgBuilder) Title(title string) *MsgBuilder {
b.Dlg.Title = title
return b
}
// YesNo spawns the message dialog with two buttons, "Yes" and "No".
// Returns true iff the user selected "Yes".
func (b *MsgBuilder) YesNo() bool {
return b.yesNo()
}
// Info spawns the message dialog with an information icon and single button, "Ok".
func (b *MsgBuilder) Info() {
b.info()
}
// Error spawns the message dialog with an error icon and single button, "Ok".
func (b *MsgBuilder) Error() {
b.error()
}
// FileFilter represents a category of files (eg. audio files, spreadsheets).
type FileFilter struct {
Desc string
Extensions []string
}
// FileBuilder is used for creating file browsing dialogs.
type FileBuilder struct {
Dlg
StartDir string
StartFile string
Filters []FileFilter
ShowHiddenFiles bool
}
// File initialises a FileBuilder using the default configuration.
func File() *FileBuilder {
return &FileBuilder{}
}
// Title specifies the title to be used for the dialog.
func (b *FileBuilder) Title(title string) *FileBuilder {
b.Dlg.Title = title
return b
}
// Filter adds a category of files to the types allowed by the dialog. Multiple
// calls to Filter are cumulative - any of the provided categories will be allowed.
// By default all files can be selected.
//
// The special extension '*' allows all files to be selected when the Filter is active.
func (b *FileBuilder) Filter(desc string, extensions ...string) *FileBuilder {
filt := FileFilter{desc, extensions}
if len(filt.Extensions) == 0 {
filt.Extensions = append(filt.Extensions, "*")
}
b.Filters = append(b.Filters, filt)
return b
}
// SetStartDir specifies the initial directory of the dialog.
func (b *FileBuilder) SetStartDir(startDir string) *FileBuilder {
b.StartDir = startDir
return b
}
// SetStartFile specifies the initial file name of the dialog.
func (b *FileBuilder) SetStartFile(startFile string) *FileBuilder {
b.StartFile = startFile
return b
}
// ShowHidden sets whether hidden files should be visible in the dialog.
func (b *FileBuilder) ShowHidden(show bool) *FileBuilder {
b.ShowHiddenFiles = show
return b
}
// Load spawns the file selection dialog using the configured settings,
// asking the user to select a single file. Returns ErrCancelled as the error
// if the user cancels or closes the dialog.
func (b *FileBuilder) Load() (string, error) {
return b.load()
}
// LoadMultiple spawns the file selection dialog using the configured settings,
// asking the user to select multiple files. Returns ErrCancelled as the error
// if the user cancels or closes the dialog.
func (b *FileBuilder) LoadMultiple() ([]string, error) {
return b.loadMultiple()
}
// Save spawns the file selection dialog using the configured settings,
// asking the user for a filename to save as. If the chosen file exists, the
// user is prompted whether they want to overwrite the file. Returns
// ErrCancelled as the error if the user cancels/closes the dialog, or selects
// not to overwrite the file.
func (b *FileBuilder) Save() (string, error) {
return b.save()
}
// DirectoryBuilder is used for directory browse dialogs.
type DirectoryBuilder struct {
Dlg
StartDir string
ShowHiddenFiles bool
}
// Directory initialises a DirectoryBuilder using the default configuration.
func Directory() *DirectoryBuilder {
return &DirectoryBuilder{}
}
// Browse spawns the directory selection dialog using the configured settings,
// asking the user to select a single folder. Returns ErrCancelled as the error
// if the user cancels or closes the dialog.
func (b *DirectoryBuilder) Browse() (string, error) {
return b.browse()
}
// Title specifies the title to be used for the dialog.
func (b *DirectoryBuilder) Title(title string) *DirectoryBuilder {
b.Dlg.Title = title
return b
}
// SetStartDir specifies the initial directory to be used for the dialog.
func (b *DirectoryBuilder) SetStartDir(dir string) *DirectoryBuilder {
b.StartDir = dir
return b
}
// ShowHidden sets whether hidden files should be visible in the dialog.
func (b *DirectoryBuilder) ShowHidden(show bool) *DirectoryBuilder {
b.ShowHiddenFiles = show
return b
}
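// A usage sketch mirroring how the app drives these builders (the filter
// values are illustrative): select several files by extension, then pick a
// directory.
//
//	files, err := dialog.File().
//		Filter("Supported Files", "pdf", "txt").
//		Title("Select Files").
//		LoadMultiple()
//	if errors.Is(err, dialog.ErrCancelled) {
//		// the user dismissed the dialog
//	}
//
//	dir, err := dialog.Directory().Title("Select Working Directory").Browse()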

82
app/dialog/dlgs_darwin.go Normal file

@@ -0,0 +1,82 @@
package dialog
import (
"github.com/ollama/ollama/app/dialog/cocoa"
)
func (b *MsgBuilder) yesNo() bool {
return cocoa.YesNoDlg(b.Msg, b.Dlg.Title)
}
func (b *MsgBuilder) info() {
cocoa.InfoDlg(b.Msg, b.Dlg.Title)
}
func (b *MsgBuilder) error() {
cocoa.ErrorDlg(b.Msg, b.Dlg.Title)
}
func (b *FileBuilder) load() (string, error) {
return b.run(false)
}
func (b *FileBuilder) loadMultiple() ([]string, error) {
return b.runMultiple()
}
func (b *FileBuilder) save() (string, error) {
return b.run(true)
}
func (b *FileBuilder) run(save bool) (string, error) {
star := false
var exts []string
for _, filt := range b.Filters {
for _, ext := range filt.Extensions {
if ext == "*" {
star = true
} else {
exts = append(exts, ext)
}
}
}
if star && save {
/* OSX doesn't allow the user to switch visible file types/extensions. Also
** NSSavePanel's allowsOtherFileTypes property has no effect for an open
** dialog, so if "*" is a possible extension we must always show all files. */
exts = nil
}
f, err := cocoa.FileDlg(save, b.Dlg.Title, exts, star, b.StartDir, b.StartFile, b.ShowHiddenFiles)
if f == "" && err == nil {
return "", ErrCancelled
}
return f, err
}
func (b *FileBuilder) runMultiple() ([]string, error) {
star := false
var exts []string
for _, filt := range b.Filters {
for _, ext := range filt.Extensions {
if ext == "*" {
star = true
} else {
exts = append(exts, ext)
}
}
}
files, err := cocoa.MultiFileDlg(b.Dlg.Title, exts, star, b.StartDir, b.ShowHiddenFiles)
if len(files) == 0 && err == nil {
return nil, ErrCancelled
}
return files, err
}
func (b *DirectoryBuilder) browse() (string, error) {
f, err := cocoa.DirDlg(b.Dlg.Title, b.StartDir, b.ShowHiddenFiles)
if f == "" && err == nil {
return "", ErrCancelled
}
return f, err
}

241
app/dialog/dlgs_windows.go Normal file

@@ -0,0 +1,241 @@
package dialog
import (
"fmt"
"reflect"
"syscall"
"unicode/utf16"
"unsafe"
"github.com/TheTitanrain/w32"
)
const multiFileBufferSize = w32.MAX_PATH * 10
type WinDlgError int
func (e WinDlgError) Error() string {
return fmt.Sprintf("CommDlgExtendedError: %#x", e)
}
func err() error {
e := w32.CommDlgExtendedError()
if e == 0 {
return ErrCancelled
}
return WinDlgError(e)
}
func (b *MsgBuilder) yesNo() bool {
r := w32.MessageBox(w32.HWND(0), b.Msg, firstOf(b.Dlg.Title, "Confirm?"), w32.MB_YESNO)
return r == w32.IDYES
}
func (b *MsgBuilder) info() {
w32.MessageBox(w32.HWND(0), b.Msg, firstOf(b.Dlg.Title, "Information"), w32.MB_OK|w32.MB_ICONINFORMATION)
}
func (b *MsgBuilder) error() {
w32.MessageBox(w32.HWND(0), b.Msg, firstOf(b.Dlg.Title, "Error"), w32.MB_OK|w32.MB_ICONERROR)
}
type filedlg struct {
buf []uint16
filters []uint16
opf *w32.OPENFILENAME
}
func (d filedlg) Filename() string {
i := 0
for i < len(d.buf) && d.buf[i] != 0 {
i++
}
return string(utf16.Decode(d.buf[:i]))
}
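// With OFN_ALLOWMULTISELECT|OFN_EXPLORER set, Windows fills the buffer as
// "directory\0name1\0name2\0...\0\0" when several files are chosen, or as a
// single full path when only one is; parseMultipleFilenames handles both.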
func (d filedlg) parseMultipleFilenames() []string {
var files []string
i := 0
// Find first null terminator (directory path)
for i < len(d.buf) && d.buf[i] != 0 {
i++
}
if i >= len(d.buf) {
return files
}
// Get directory path
dirPath := string(utf16.Decode(d.buf[:i]))
i++ // Skip null terminator
// Check if there are more files (multiple selection)
if i < len(d.buf) && d.buf[i] != 0 {
// Multiple files selected - parse filenames
for i < len(d.buf) {
start := i
// Find next null terminator
for i < len(d.buf) && d.buf[i] != 0 {
i++
}
if i >= len(d.buf) {
break
}
if start < i {
filename := string(utf16.Decode(d.buf[start:i]))
if dirPath != "" {
files = append(files, dirPath+"\\"+filename)
} else {
files = append(files, filename)
}
}
i++ // Skip null terminator
if i >= len(d.buf) || d.buf[i] == 0 {
break // End of list
}
}
} else {
// Single file selected
files = append(files, dirPath)
}
return files
}
func (b *FileBuilder) load() (string, error) {
d := openfile(w32.OFN_FILEMUSTEXIST|w32.OFN_NOCHANGEDIR, b)
if w32.GetOpenFileName(d.opf) {
return d.Filename(), nil
}
return "", err()
}
func (b *FileBuilder) loadMultiple() ([]string, error) {
d := openfile(w32.OFN_FILEMUSTEXIST|w32.OFN_NOCHANGEDIR|w32.OFN_ALLOWMULTISELECT|w32.OFN_EXPLORER, b)
d.buf = make([]uint16, multiFileBufferSize)
d.opf.File = utf16ptr(d.buf)
d.opf.MaxFile = uint32(len(d.buf))
if w32.GetOpenFileName(d.opf) {
return d.parseMultipleFilenames(), nil
}
return nil, err()
}
func (b *FileBuilder) save() (string, error) {
d := openfile(w32.OFN_OVERWRITEPROMPT|w32.OFN_NOCHANGEDIR, b)
if w32.GetSaveFileName(d.opf) {
return d.Filename(), nil
}
return "", err()
}
/* syscall.UTF16PtrFromString not sufficient because we need to encode embedded NUL bytes */
func utf16ptr(utf16 []uint16) *uint16 {
if utf16[len(utf16)-1] != 0 {
panic("refusing to make ptr to non-NUL terminated utf16 slice")
}
h := (*reflect.SliceHeader)(unsafe.Pointer(&utf16))
return (*uint16)(unsafe.Pointer(h.Data))
}
func utf16slice(ptr *uint16) []uint16 { //nolint:unused
// grow the view one element at a time until the NUL terminator is found,
// then return a slice that excludes it (the original loop never advanced
// past the first element and could spin forever)
hdr := reflect.SliceHeader{Data: uintptr(unsafe.Pointer(ptr)), Len: 1, Cap: 1}
slice := *((*[]uint16)(unsafe.Pointer(&hdr))) //nolint:govet
i := 0
for slice[i] != 0 {
i++
hdr.Len, hdr.Cap = i+1, i+1
slice = *((*[]uint16)(unsafe.Pointer(&hdr))) //nolint:govet
}
hdr.Len, hdr.Cap = i, i
slice = *((*[]uint16)(unsafe.Pointer(&hdr))) //nolint:govet
return slice
}
func openfile(flags uint32, b *FileBuilder) (d filedlg) {
d.buf = make([]uint16, w32.MAX_PATH)
if b.StartFile != "" {
initialName, _ := syscall.UTF16FromString(b.StartFile)
for i := 0; i < len(initialName) && i < w32.MAX_PATH; i++ {
d.buf[i] = initialName[i]
}
}
d.opf = &w32.OPENFILENAME{
File: utf16ptr(d.buf),
MaxFile: uint32(len(d.buf)),
Flags: flags,
}
d.opf.StructSize = uint32(unsafe.Sizeof(*d.opf))
if b.StartDir != "" {
d.opf.InitialDir, _ = syscall.UTF16PtrFromString(b.StartDir)
}
if b.Dlg.Title != "" {
d.opf.Title, _ = syscall.UTF16PtrFromString(b.Dlg.Title)
}
for _, filt := range b.Filters {
/* build utf16 string of form "Music File\0*.mp3;*.ogg;*.wav;\0" */
d.filters = append(d.filters, utf16.Encode([]rune(filt.Desc))...)
d.filters = append(d.filters, 0)
for _, ext := range filt.Extensions {
s := fmt.Sprintf("*.%s;", ext)
d.filters = append(d.filters, utf16.Encode([]rune(s))...)
}
d.filters = append(d.filters, 0)
}
if d.filters != nil {
d.filters = append(d.filters, 0, 0) // two extra NUL chars to terminate the list
d.opf.Filter = utf16ptr(d.filters)
}
return d
}
type dirdlg struct {
bi *w32.BROWSEINFO
}
const (
bffm_INITIALIZED = 1
bffm_SELCHANGED = 2
bffm_VALIDATEFAILEDA = 3
bffm_VALIDATEFAILEDW = 4
bffm_SETSTATUSTEXTA = (w32.WM_USER + 100)
bffm_SETSTATUSTEXTW = (w32.WM_USER + 104)
bffm_ENABLEOK = (w32.WM_USER + 101)
bffm_SETSELECTIONA = (w32.WM_USER + 102)
bffm_SETSELECTIONW = (w32.WM_USER + 103)
bffm_SETOKTEXT = (w32.WM_USER + 105)
bffm_SETEXPANDED = (w32.WM_USER + 106)
bffm_SETSTATUSTEXT = bffm_SETSTATUSTEXTW
bffm_SETSELECTION = bffm_SETSELECTIONW
bffm_VALIDATEFAILED = bffm_VALIDATEFAILEDW
)
func callbackDefaultDir(hwnd w32.HWND, msg uint, lParam, lpData uintptr) int {
if msg == bffm_INITIALIZED {
_ = w32.SendMessage(hwnd, bffm_SETSELECTION, w32.TRUE, lpData)
}
return 0
}
func selectdir(b *DirectoryBuilder) (d dirdlg) {
d.bi = &w32.BROWSEINFO{Flags: w32.BIF_RETURNONLYFSDIRS | w32.BIF_NEWDIALOGSTYLE}
if b.Dlg.Title != "" {
d.bi.Title, _ = syscall.UTF16PtrFromString(b.Dlg.Title)
}
if b.StartDir != "" {
s16, _ := syscall.UTF16PtrFromString(b.StartDir)
d.bi.LParam = uintptr(unsafe.Pointer(s16))
d.bi.CallbackFunc = syscall.NewCallback(callbackDefaultDir)
}
return d
}
func (b *DirectoryBuilder) browse() (string, error) {
d := selectdir(b)
res := w32.SHBrowseForFolder(d.bi)
if res == 0 {
return "", ErrCancelled
}
return w32.SHGetPathFromIDList(res), nil
}

12
app/dialog/util.go Normal file

@@ -0,0 +1,12 @@
//go:build windows
package dialog
func firstOf(args ...string) string {
for _, arg := range args {
if arg != "" {
return arg
}
}
return ""
}

30
app/format/field.go Normal file
View File

@@ -0,0 +1,30 @@
//go:build windows || darwin
package format
import (
"strings"
"unicode"
)
// KebabCase converts a string from camelCase or PascalCase to kebab-case.
// (e.g. "camelCase" -> "camel-case")
func KebabCase(str string) string {
var result strings.Builder
for i, char := range str {
if i > 0 {
prevChar := rune(str[i-1])
// Add hyphen before uppercase letters
if unicode.IsUpper(char) &&
(unicode.IsLower(prevChar) || unicode.IsDigit(prevChar) ||
(i < len(str)-1 && unicode.IsLower(rune(str[i+1])))) {
result.WriteRune('-')
}
}
result.WriteRune(unicode.ToLower(char))
}
return result.String()
}

34
app/format/field_test.go Normal file

@@ -0,0 +1,34 @@
//go:build windows || darwin
package format
import "testing"
func TestKebabCase(t *testing.T) {
tests := []struct {
input string
expected string
}{
{"already-kebab-case", "already-kebab-case"},
{"simpleCamelCase", "simple-camel-case"},
{"PascalCase", "pascal-case"},
{"camelCaseWithNumber123", "camel-case-with-number123"},
{"APIResponse", "api-response"},
{"mixedCASE", "mixed-case"},
{"WithACRONYMS", "with-acronyms"},
{"ALLCAPS", "allcaps"},
{"camelCaseWITHMixedACRONYMS", "camel-case-with-mixed-acronyms"},
{"numbers123in456string", "numbers123in456string"},
{"5", "5"},
{"S", "s"},
}
for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
result := KebabCase(tt.input)
if result != tt.expected {
t.Errorf("toKebabCase(%q) = %q, want %q", tt.input, result, tt.expected)
}
})
}
}


@@ -1,9 +0,0 @@
//go:build !windows
package lifecycle
import "errors"
func GetStarted() error {
return errors.New("not implemented")
}


@@ -1,43 +0,0 @@
package lifecycle
import (
"fmt"
"log/slog"
"os"
"os/exec"
"path/filepath"
"syscall"
)
func GetStarted() error {
const CREATE_NEW_CONSOLE = 0x00000010
var err error
bannerScript := filepath.Join(AppDir, "ollama_welcome.ps1")
args := []string{
// TODO once we're signed, the execution policy bypass should be removed
"powershell", "-noexit", "-ExecutionPolicy", "Bypass", "-nologo", "-file", bannerScript,
}
args[0], err = exec.LookPath(args[0])
if err != nil {
return err
}
// Make sure the script actually exists
_, err = os.Stat(bannerScript)
if err != nil {
return fmt.Errorf("getting started banner script error %s", err)
}
slog.Info(fmt.Sprintf("opening getting started terminal with %v", args))
attrs := &os.ProcAttr{
Files: []*os.File{os.Stdin, os.Stdout, os.Stderr},
Sys: &syscall.SysProcAttr{CreationFlags: CREATE_NEW_CONSOLE, HideWindow: false},
}
proc, err := os.StartProcess(args[0], args, attrs)
if err != nil {
return fmt.Errorf("unable to start getting started shell %w", err)
}
slog.Debug(fmt.Sprintf("getting started terminal PID: %d", proc.Pid))
return proc.Release()
}


@@ -1,94 +0,0 @@
package lifecycle
import (
"context"
"fmt"
"log"
"log/slog"
"os"
"os/signal"
"syscall"
"github.com/ollama/ollama/app/store"
"github.com/ollama/ollama/app/tray"
"github.com/ollama/ollama/envconfig"
)
func Run() {
InitLogging()
slog.Info("app config", "env", envconfig.Values())
ctx, cancel := context.WithCancel(context.Background())
var done chan int
t, err := tray.NewTray()
if err != nil {
log.Fatalf("Failed to start: %s", err)
}
callbacks := t.GetCallbacks()
signals := make(chan os.Signal, 1)
signal.Notify(signals, syscall.SIGINT, syscall.SIGTERM)
go func() {
slog.Debug("starting callback loop")
for {
select {
case <-callbacks.Quit:
slog.Debug("quit called")
t.Quit()
case <-signals:
slog.Debug("shutting down due to signal")
t.Quit()
case <-callbacks.Update:
err := DoUpgrade(cancel, done)
if err != nil {
slog.Warn(fmt.Sprintf("upgrade attempt failed: %s", err))
}
case <-callbacks.ShowLogs:
ShowLogs()
case <-callbacks.DoFirstUse:
err := GetStarted()
if err != nil {
slog.Warn(fmt.Sprintf("Failed to launch getting started shell: %s", err))
}
}
}
}()
// Are we first use?
if !store.GetFirstTimeRun() {
slog.Debug("First time run")
err = t.DisplayFirstUseNotification()
if err != nil {
slog.Debug(fmt.Sprintf("XXX failed to display first use notification %v", err))
}
store.SetFirstTimeRun(true)
} else {
slog.Debug("Not first time, skipping first run notification")
}
if IsServerRunning(ctx) {
slog.Info("Detected another instance of ollama running, exiting")
os.Exit(1)
} else {
done, err = SpawnServer(ctx, CLIName)
if err != nil {
// TODO - should we retry in a backoff loop?
// TODO - should we pop up a warning and maybe add a menu item to view application logs?
slog.Error(fmt.Sprintf("Failed to spawn ollama server %s", err))
done = make(chan int, 1)
done <- 1
}
}
StartBackgroundUpdaterChecker(ctx, t.UpdateAvailable)
t.Run()
cancel()
slog.Info("Waiting for ollama server to shutdown...")
if done != nil {
<-done
}
slog.Info("Ollama app exiting")
}


@@ -1,62 +0,0 @@
package lifecycle
import (
"fmt"
"log/slog"
"os"
"strconv"
"strings"
"github.com/ollama/ollama/envconfig"
"github.com/ollama/ollama/logutil"
)
func InitLogging() {
var logFile *os.File
var err error
// Detect if we're a GUI app on windows, and if not, send logs to console
if os.Stderr.Fd() != 0 {
// Console app detected
logFile = os.Stderr
// TODO - write one-line to the app.log file saying we're running in console mode to help avoid confusion
} else {
rotateLogs(AppLogFile)
logFile, err = os.OpenFile(AppLogFile, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0o755)
if err != nil {
slog.Error(fmt.Sprintf("failed to create server log %v", err))
return
}
}
slog.SetDefault(logutil.NewLogger(logFile, envconfig.LogLevel()))
slog.Info("ollama app started")
}
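// rotateLogs shifts existing logs one slot down the numbered chain before a
// new file is opened: app.log -> app-1.log -> ... up to LogRotationCount,
// removing the oldest file first so each rename lands on a free name.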
func rotateLogs(logFile string) {
if _, err := os.Stat(logFile); os.IsNotExist(err) {
return
}
index := strings.LastIndex(logFile, ".")
pre := logFile[:index]
post := "." + logFile[index+1:]
for i := LogRotationCount; i > 0; i-- {
older := pre + "-" + strconv.Itoa(i) + post
newer := pre + "-" + strconv.Itoa(i-1) + post
if i == 1 {
newer = pre + post
}
if _, err := os.Stat(newer); err == nil {
if _, err := os.Stat(older); err == nil {
err := os.Remove(older)
if err != nil {
slog.Warn("Failed to remove older log", "older", older, "error", err)
continue
}
}
err := os.Rename(newer, older)
if err != nil {
slog.Warn("Failed to rotate log", "older", older, "newer", newer, "error", err)
}
}
}
}


@@ -1,9 +0,0 @@
//go:build !windows
package lifecycle
import "log/slog"
func ShowLogs() {
slog.Warn("not implemented")
}


@@ -1,44 +0,0 @@
package lifecycle
import (
"os"
"path/filepath"
"strconv"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestRotateLogs(t *testing.T) {
logDir := t.TempDir()
logFile := filepath.Join(logDir, "testlog.log")
// No log exists
rotateLogs(logFile)
require.NoError(t, os.WriteFile(logFile, []byte("1"), 0o644))
assert.FileExists(t, logFile)
// First rotation
rotateLogs(logFile)
assert.FileExists(t, filepath.Join(logDir, "testlog-1.log"))
assert.NoFileExists(t, filepath.Join(logDir, "testlog-2.log"))
assert.NoFileExists(t, logFile)
// Should be a no-op without a new log
rotateLogs(logFile)
assert.FileExists(t, filepath.Join(logDir, "testlog-1.log"))
assert.NoFileExists(t, filepath.Join(logDir, "testlog-2.log"))
assert.NoFileExists(t, logFile)
for i := 2; i <= LogRotationCount+1; i++ {
require.NoError(t, os.WriteFile(logFile, []byte(strconv.Itoa(i)), 0o644))
assert.FileExists(t, logFile)
rotateLogs(logFile)
assert.NoFileExists(t, logFile)
for j := 1; j < i; j++ {
assert.FileExists(t, filepath.Join(logDir, "testlog-"+strconv.Itoa(j)+".log"))
}
assert.NoFileExists(t, filepath.Join(logDir, "testlog-"+strconv.Itoa(i+1)+".log"))
}
}


@@ -1,19 +0,0 @@
package lifecycle
import (
"fmt"
"log/slog"
"os/exec"
"syscall"
)
func ShowLogs() {
cmd_path := "c:\\Windows\\system32\\cmd.exe"
slog.Debug(fmt.Sprintf("viewing logs with start %s", AppDataDir))
cmd := exec.Command(cmd_path, "/c", "start", AppDataDir)
cmd.SysProcAttr = &syscall.SysProcAttr{HideWindow: false, CreationFlags: 0x08000000}
err := cmd.Start()
if err != nil {
slog.Error(fmt.Sprintf("Failed to open log dir: %s", err))
}
}


@@ -1,84 +0,0 @@
package lifecycle
import (
"errors"
"fmt"
"log/slog"
"os"
"path/filepath"
"runtime"
"strings"
)
var (
AppName = "ollama app"
CLIName = "ollama"
AppDir = "/opt/Ollama"
AppDataDir = "/opt/Ollama"
// TODO - should there be a distinct log dir?
UpdateStageDir = "/tmp"
AppLogFile = "/tmp/ollama_app.log"
ServerLogFile = "/tmp/ollama.log"
UpgradeLogFile = "/tmp/ollama_update.log"
Installer = "OllamaSetup.exe"
LogRotationCount = 5
)
func init() {
if runtime.GOOS == "windows" {
AppName += ".exe"
CLIName += ".exe"
// Logs, configs, downloads go to LOCALAPPDATA
localAppData := os.Getenv("LOCALAPPDATA")
AppDataDir = filepath.Join(localAppData, "Ollama")
UpdateStageDir = filepath.Join(AppDataDir, "updates")
AppLogFile = filepath.Join(AppDataDir, "app.log")
ServerLogFile = filepath.Join(AppDataDir, "server.log")
UpgradeLogFile = filepath.Join(AppDataDir, "upgrade.log")
exe, err := os.Executable()
if err != nil {
slog.Warn("error discovering executable directory", "error", err)
AppDir = filepath.Join(localAppData, "Programs", "Ollama")
} else {
AppDir = filepath.Dir(exe)
}
// Make sure we have PATH set correctly for any spawned children
paths := strings.Split(os.Getenv("PATH"), ";")
// Start with whatever we find in the PATH/LD_LIBRARY_PATH
found := false
for _, path := range paths {
d, err := filepath.Abs(path)
if err != nil {
continue
}
if strings.EqualFold(AppDir, d) {
found = true
}
}
if !found {
paths = append(paths, AppDir)
pathVal := strings.Join(paths, ";")
slog.Debug("setting PATH=" + pathVal)
err := os.Setenv("PATH", pathVal)
if err != nil {
slog.Error(fmt.Sprintf("failed to update PATH: %s", err))
}
}
// Make sure our logging dir exists
_, err = os.Stat(AppDataDir)
if errors.Is(err, os.ErrNotExist) {
if err := os.MkdirAll(AppDataDir, 0o755); err != nil {
slog.Error(fmt.Sprintf("create ollama dir %s: %v", AppDataDir, err))
}
}
} else if runtime.GOOS == "darwin" {
// TODO
AppName += ".app"
// } else if runtime.GOOS == "linux" {
// TODO
}
}


@@ -1,186 +0,0 @@
package lifecycle
import (
"context"
"errors"
"fmt"
"io"
"log/slog"
"os"
"os/exec"
"path/filepath"
"time"
"github.com/ollama/ollama/api"
)
func getCLIFullPath(command string) string {
var cmdPath string
appExe, err := os.Executable()
if err == nil {
// Check both the same location as the tray app, as well as ./bin
cmdPath = filepath.Join(filepath.Dir(appExe), command)
_, err := os.Stat(cmdPath)
if err == nil {
return cmdPath
}
cmdPath = filepath.Join(filepath.Dir(appExe), "bin", command)
_, err = os.Stat(cmdPath)
if err == nil {
return cmdPath
}
}
cmdPath, err = exec.LookPath(command)
if err == nil {
_, err := os.Stat(cmdPath)
if err == nil {
return cmdPath
}
}
pwd, err := os.Getwd()
if err == nil {
cmdPath = filepath.Join(pwd, command)
_, err = os.Stat(cmdPath)
if err == nil {
return cmdPath
}
}
return command
}
func start(ctx context.Context, command string) (*exec.Cmd, error) {
cmd := getCmd(ctx, getCLIFullPath(command))
stdout, err := cmd.StdoutPipe()
if err != nil {
return nil, fmt.Errorf("failed to spawn server stdout pipe: %w", err)
}
stderr, err := cmd.StderrPipe()
if err != nil {
return nil, fmt.Errorf("failed to spawn server stderr pipe: %w", err)
}
rotateLogs(ServerLogFile)
logFile, err := os.OpenFile(ServerLogFile, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0o755)
if err != nil {
return nil, fmt.Errorf("failed to create server log: %w", err)
}
logDir := filepath.Dir(ServerLogFile)
_, err = os.Stat(logDir)
if err != nil {
if !errors.Is(err, os.ErrNotExist) {
return nil, fmt.Errorf("stat ollama server log dir %s: %v", logDir, err)
}
if err := os.MkdirAll(logDir, 0o755); err != nil {
return nil, fmt.Errorf("create ollama server log dir %s: %v", logDir, err)
}
}
go func() {
defer logFile.Close()
io.Copy(logFile, stdout) //nolint:errcheck
}()
go func() {
defer logFile.Close()
io.Copy(logFile, stderr) //nolint:errcheck
}()
// Re-wire context done behavior to attempt a graceful shutdown of the server
cmd.Cancel = func() error {
if cmd.Process != nil {
err := terminate(cmd)
if err != nil {
slog.Warn("error trying to gracefully terminate server", "err", err)
return cmd.Process.Kill()
}
tick := time.NewTicker(10 * time.Millisecond)
defer tick.Stop()
// create the timeout channel once, outside the loop: a time.After inside
// the select would be re-created on every 10 ms tick and never fire
timeout := time.After(5 * time.Second)
for {
select {
case <-tick.C:
exited, err := isProcessExited(cmd.Process.Pid)
if err != nil {
return err
}
if exited {
return nil
}
case <-timeout:
slog.Warn("graceful server shutdown timeout, killing", "pid", cmd.Process.Pid)
return cmd.Process.Kill()
}
}
}
return nil
}
// run the command and wait for it to finish
if err := cmd.Start(); err != nil {
return nil, fmt.Errorf("failed to start server %w", err)
}
if cmd.Process != nil {
slog.Info(fmt.Sprintf("started ollama server with pid %d", cmd.Process.Pid))
}
slog.Info(fmt.Sprintf("ollama server logs %s", ServerLogFile))
return cmd, nil
}
func SpawnServer(ctx context.Context, command string) (chan int, error) {
done := make(chan int)
go func() {
// Keep the server running unless we're shutting down the app
crashCount := 0
for {
slog.Info("starting server...")
cmd, err := start(ctx, command)
if err != nil {
crashCount++
slog.Error(fmt.Sprintf("failed to start server %s", err))
time.Sleep(500 * time.Millisecond * time.Duration(crashCount))
continue
}
cmd.Wait() //nolint:errcheck
var code int
if cmd.ProcessState != nil {
code = cmd.ProcessState.ExitCode()
}
select {
case <-ctx.Done():
slog.Info(fmt.Sprintf("server shutdown with exit code %d", code))
done <- code
return
default:
crashCount++
slog.Warn(fmt.Sprintf("server crash %d - exit code %d - respawning", crashCount, code))
time.Sleep(500 * time.Millisecond * time.Duration(crashCount))
break
}
}
}()
return done, nil
}
func IsServerRunning(ctx context.Context) bool {
client, err := api.ClientFromEnvironment()
if err != nil {
slog.Info("unable to connect to server")
return false
}
err = client.Heartbeat(ctx)
if err != nil {
slog.Debug(fmt.Sprintf("heartbeat from server: %s", err))
slog.Info("unable to connect to server")
return false
}
return true
}


@@ -1,38 +0,0 @@
//go:build !windows
package lifecycle
import (
"context"
"errors"
"fmt"
"os"
"os/exec"
"syscall"
)
func getCmd(ctx context.Context, cmd string) *exec.Cmd {
return exec.CommandContext(ctx, cmd, "serve")
}
func terminate(cmd *exec.Cmd) error {
return cmd.Process.Signal(os.Interrupt)
}
func isProcessExited(pid int) (bool, error) {
proc, err := os.FindProcess(pid)
if err != nil {
return false, fmt.Errorf("failed to find process: %v", err)
}
err = proc.Signal(syscall.Signal(0))
if err != nil {
if errors.Is(err, os.ErrProcessDone) || errors.Is(err, syscall.ESRCH) {
return true, nil
}
return false, fmt.Errorf("error signaling process: %v", err)
}
return false, nil
}


@@ -1,91 +0,0 @@
package lifecycle
import (
"context"
"fmt"
"os/exec"
"syscall"
"golang.org/x/sys/windows"
)
func getCmd(ctx context.Context, exePath string) *exec.Cmd {
cmd := exec.CommandContext(ctx, exePath, "serve")
cmd.SysProcAttr = &syscall.SysProcAttr{
HideWindow: true,
CreationFlags: windows.CREATE_NEW_PROCESS_GROUP,
}
return cmd
}
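// terminate asks the server to exit cleanly: it attaches to the child's
// console, disables this process's own Ctrl-C handling, then sends
// CTRL_BREAK and CTRL_C events to the child's process group.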
func terminate(cmd *exec.Cmd) error {
dll, err := windows.LoadDLL("kernel32.dll")
if err != nil {
return err
}
//nolint:errcheck
defer dll.Release()
pid := cmd.Process.Pid
f, err := dll.FindProc("AttachConsole")
if err != nil {
return err
}
r1, _, err := f.Call(uintptr(pid))
if r1 == 0 && err != syscall.ERROR_ACCESS_DENIED {
return err
}
f, err = dll.FindProc("SetConsoleCtrlHandler")
if err != nil {
return err
}
r1, _, err = f.Call(0, 1)
if r1 == 0 {
return err
}
f, err = dll.FindProc("GenerateConsoleCtrlEvent")
if err != nil {
return err
}
r1, _, err = f.Call(windows.CTRL_BREAK_EVENT, uintptr(pid))
if r1 == 0 {
return err
}
r1, _, err = f.Call(windows.CTRL_C_EVENT, uintptr(pid))
if r1 == 0 {
return err
}
return nil
}
const STILL_ACTIVE = 259
func isProcessExited(pid int) (bool, error) {
hProcess, err := windows.OpenProcess(windows.PROCESS_QUERY_INFORMATION, false, uint32(pid))
if err != nil {
return false, fmt.Errorf("failed to open process: %v", err)
}
//nolint:errcheck
defer windows.CloseHandle(hProcess)
var exitCode uint32
err = windows.GetExitCodeProcess(hProcess, &exitCode)
if err != nil {
return false, fmt.Errorf("failed to get exit code: %v", err)
}
if exitCode == STILL_ACTIVE {
return false, nil
}
return true, nil
}

View File

@@ -1,12 +0,0 @@
//go:build !windows
package lifecycle
import (
"context"
"errors"
)
func DoUpgrade(cancel context.CancelFunc, done chan int) error {
return errors.New("not implemented")
}

View File

@@ -1,74 +0,0 @@
package lifecycle
import (
"context"
"errors"
"fmt"
"log/slog"
"os"
"os/exec"
"path/filepath"
)
func DoUpgrade(cancel context.CancelFunc, done chan int) error {
files, err := filepath.Glob(filepath.Join(UpdateStageDir, "*", "*.exe")) // TODO generalize for multiplatform
if err != nil {
return fmt.Errorf("failed to lookup downloads: %s", err)
}
if len(files) == 0 {
return errors.New("no update downloads found")
} else if len(files) > 1 {
// Shouldn't happen
slog.Warn(fmt.Sprintf("multiple downloads found, using first one %v", files))
}
installerExe := files[0]
slog.Info("starting upgrade with " + installerExe)
slog.Info("upgrade log file " + UpgradeLogFile)
// make the upgrade show progress, but non interactive
installArgs := []string{
"/CLOSEAPPLICATIONS", // Quit the tray app if it's still running
"/LOG=" + filepath.Base(UpgradeLogFile), // Only relative seems reliable, so set pwd
"/FORCECLOSEAPPLICATIONS", // Force close the tray app - might be needed
"/SP", // Skip the "This will install... Do you wish to continue" prompt
"/NOCANCEL", // Disable the ability to cancel upgrade mid-flight to avoid partially installed upgrades
"/SILENT",
}
// Safeguard in case we have requests in flight that need to drain...
slog.Info("Waiting for server to shutdown")
cancel()
if done != nil {
<-done
} else {
// Shouldn't happen
slog.Warn("done chan was nil, not actually waiting")
}
slog.Debug(fmt.Sprintf("starting installer: %s %v", installerExe, installArgs))
os.Chdir(filepath.Dir(UpgradeLogFile)) //nolint:errcheck
cmd := exec.Command(installerExe, installArgs...)
if err := cmd.Start(); err != nil {
return fmt.Errorf("unable to start ollama app %w", err)
}
if cmd.Process != nil {
err = cmd.Process.Release()
if err != nil {
slog.Error(fmt.Sprintf("failed to release server process: %s", err))
}
} else {
// TODO - some details about why it didn't start, or is this a pedantic error case?
return errors.New("installer process did not start")
}
// TODO should we linger for a moment and check to make sure it's actually running by checking the pid?
slog.Info("Installer started in background, exiting")
os.Exit(0)
// Not reached
return nil
}

View File

@@ -0,0 +1,45 @@
//go:build windows || darwin
// package logrotate provides utilities for rotating logs
// TODO (jmorgan): this most likely doesn't need its own
// package and can be moved to app where log files are created
package logrotate
import (
"log/slog"
"os"
"strconv"
"strings"
)
const MaxLogFiles = 5
func Rotate(filename string) {
if _, err := os.Stat(filename); os.IsNotExist(err) {
return
}
// Guard against names without an extension; LastIndex returns -1 then.
index := strings.LastIndex(filename, ".")
pre, post := filename, ""
if index >= 0 {
pre = filename[:index]
post = "." + filename[index+1:]
}
for i := MaxLogFiles; i > 0; i-- {
older := pre + "-" + strconv.Itoa(i) + post
newer := pre + "-" + strconv.Itoa(i-1) + post
if i == 1 {
newer = pre + post
}
if _, err := os.Stat(newer); err == nil {
if _, err := os.Stat(older); err == nil {
err := os.Remove(older)
if err != nil {
slog.Warn("Failed to remove older log", "older", older, "error", err)
continue
}
}
err := os.Rename(newer, older)
if err != nil {
slog.Warn("Failed to rotate log", "older", older, "newer", newer, "error", err)
}
}
}
}
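A short demo of the naming scheme Rotate produces (illustrative; assumes the github.com/ollama/ollama/app/logrotate import path and a windows/darwin build per the tag above). The current file becomes name-1.ext, existing copies shift to name-2.ext and so on, and at most MaxLogFiles old copies are kept:

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/ollama/ollama/app/logrotate"
)

func main() {
	dir, err := os.MkdirTemp("", "rotate-demo")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	log := filepath.Join(dir, "server.log")
	for i := 1; i <= 3; i++ {
		// Write a fresh log, then rotate it away.
		if err := os.WriteFile(log, []byte(fmt.Sprintf("run %d\n", i)), 0o644); err != nil {
			panic(err)
		}
		logrotate.Rotate(log)
	}

	// Prints server-1.log (newest), server-2.log, server-3.log (oldest).
	entries, _ := os.ReadDir(dir)
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}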

View File

@@ -0,0 +1,70 @@
//go:build windows || darwin
package logrotate
import (
"os"
"path/filepath"
"strconv"
"testing"
)
func TestRotate(t *testing.T) {
logDir := t.TempDir()
logFile := filepath.Join(logDir, "testlog.log")
// No log exists
Rotate(logFile)
if err := os.WriteFile(logFile, []byte("1"), 0o644); err != nil {
t.Fatal(err)
}
if _, err := os.Stat(logFile); os.IsNotExist(err) {
t.Fatal("expected log file to exist")
}
// First rotation
Rotate(logFile)
if _, err := os.Stat(filepath.Join(logDir, "testlog-1.log")); os.IsNotExist(err) {
t.Fatal("expected rotated log file to exist")
}
if _, err := os.Stat(filepath.Join(logDir, "testlog-2.log")); !os.IsNotExist(err) {
t.Fatal("expected no second rotated log file")
}
if _, err := os.Stat(logFile); !os.IsNotExist(err) {
t.Fatal("expected original log file to be moved")
}
// Should be a no-op without a new log
Rotate(logFile)
if _, err := os.Stat(filepath.Join(logDir, "testlog-1.log")); os.IsNotExist(err) {
t.Fatal("expected rotated log file to still exist")
}
if _, err := os.Stat(filepath.Join(logDir, "testlog-2.log")); !os.IsNotExist(err) {
t.Fatal("expected no second rotated log file")
}
if _, err := os.Stat(logFile); !os.IsNotExist(err) {
t.Fatal("expected no original log file")
}
for i := 2; i <= MaxLogFiles+1; i++ {
if err := os.WriteFile(logFile, []byte(strconv.Itoa(i)), 0o644); err != nil {
t.Fatal(err)
}
if _, err := os.Stat(logFile); os.IsNotExist(err) {
t.Fatal("expected log file to exist")
}
Rotate(logFile)
if _, err := os.Stat(logFile); !os.IsNotExist(err) {
t.Fatal("expected log file to be moved")
}
for j := 1; j < i; j++ {
if _, err := os.Stat(filepath.Join(logDir, "testlog-"+strconv.Itoa(j)+".log")); os.IsNotExist(err) {
t.Fatalf("expected rotated log file %d to exist", j)
}
}
if _, err := os.Stat(filepath.Join(logDir, "testlog-"+strconv.Itoa(i+1)+".log")); !os.IsNotExist(err) {
t.Fatalf("expected no rotated log file %d", i+1)
}
}
}

View File

@@ -1,12 +0,0 @@
package main
// Compile with the following to get rid of the cmd pop up on windows
// go build -ldflags="-H windowsgui" .
import (
"github.com/ollama/ollama/app/lifecycle"
)
func main() {
lifecycle.Run()
}

View File

@@ -37,8 +37,10 @@ PrivilegesRequired=lowest
OutputBaseFilename="OllamaSetup"
SetupIconFile={#MyIcon}
UninstallDisplayIcon={uninstallexe}
Compression=lzma2
SolidCompression=no
Compression=lzma2/ultra64
LZMAUseSeparateProcess=yes
LZMANumBlockThreads=8
SolidCompression=yes
WizardStyle=modern
ChangesEnvironment=yes
OutputDir=..\dist\
@@ -46,7 +48,7 @@ OutputDir=..\dist\
; Disable logging once everything's battle tested
; Filename will be %TEMP%\Setup Log*.txt
SetupLogging=yes
CloseApplications=yes
CloseApplications=no
RestartApplications=no
RestartIfNeededByRun=no
@@ -68,7 +70,6 @@ DisableFinishedPage=yes
DisableReadyMemo=yes
DisableReadyPage=yes
DisableStartupPrompt=yes
DisableWelcomePage=yes
; TODO - percentage can't be set less than 100, so how to make it shorter?
; WizardSizePercent=100,80
@@ -87,30 +88,42 @@ Name: "english"; MessagesFile: "compiler:Default.isl"
DialogFontSize=12
[Files]
#if DirExists("..\dist\windows-amd64")
Source: "..\dist\windows-amd64-app.exe"; DestDir: "{app}"; DestName: "{#MyAppExeName}" ;Check: not IsArm64(); Flags: ignoreversion 64bit
Source: "..\dist\windows-amd64\ollama.exe"; DestDir: "{app}"; Check: not IsArm64(); Flags: ignoreversion 64bit
#if FileExists("..\dist\windows-ollama-app-amd64.exe")
Source: "..\dist\windows-ollama-app-amd64.exe"; DestDir: "{app}"; DestName: "{#MyAppExeName}" ;Check: not IsArm64(); Flags: ignoreversion 64bit; BeforeInstall: TaskKill('{#MyAppExeName}')
Source: "..\dist\windows-amd64\vc_redist.x64.exe"; DestDir: "{tmp}"; Check: not IsArm64() and vc_redist_needed(); Flags: deleteafterinstall
Source: "..\dist\windows-amd64\ollama.exe"; DestDir: "{app}"; Check: not IsArm64(); Flags: ignoreversion 64bit; BeforeInstall: TaskKill('ollama.exe')
Source: "..\dist\windows-amd64\lib\ollama\*"; DestDir: "{app}\lib\ollama\"; Check: not IsArm64(); Flags: ignoreversion 64bit recursesubdirs
#endif
#if DirExists("..\dist\windows-arm64")
Source: "..\dist\windows-arm64\vc_redist.arm64.exe"; DestDir: "{tmp}"; Check: IsArm64() and vc_redist_needed(); Flags: deleteafterinstall
Source: "..\dist\windows-arm64-app.exe"; DestDir: "{app}"; DestName: "{#MyAppExeName}" ;Check: IsArm64(); Flags: ignoreversion 64bit
Source: "..\dist\windows-arm64\ollama.exe"; DestDir: "{app}"; Check: IsArm64(); Flags: ignoreversion 64bit
; For local development, rely on binary compatibility at runtime since we can't cross compile
#if FileExists("..\dist\windows-ollama-app-arm64.exe")
Source: "..\dist\windows-ollama-app-arm64.exe"; DestDir: "{app}"; DestName: "{#MyAppExeName}" ;Check: IsArm64(); Flags: ignoreversion 64bit; BeforeInstall: TaskKill('{#MyAppExeName}')
#else
Source: "..\dist\windows-ollama-app-amd64.exe"; DestDir: "{app}"; DestName: "{#MyAppExeName}" ;Check: IsArm64(); Flags: ignoreversion 64bit; BeforeInstall: TaskKill('{#MyAppExeName}')
#endif
#if FileExists("..\dist\windows-arm64\ollama.exe")
Source: "..\dist\windows-arm64\vc_redist.arm64.exe"; DestDir: "{tmp}"; Check: IsArm64() and vc_redist_needed(); Flags: deleteafterinstall
Source: "..\dist\windows-arm64\ollama.exe"; DestDir: "{app}"; Check: IsArm64(); Flags: ignoreversion 64bit; BeforeInstall: TaskKill('ollama.exe')
#endif
Source: "..\dist\ollama_welcome.ps1"; DestDir: "{app}"; Flags: ignoreversion
Source: ".\assets\app.ico"; DestDir: "{app}"; Flags: ignoreversion
[Icons]
Name: "{group}\{#MyAppName}"; Filename: "{app}\{#MyAppExeName}"; IconFilename: "{app}\app.ico"
Name: "{userstartup}\{#MyAppName}"; Filename: "{app}\{#MyAppExeName}"; IconFilename: "{app}\app.ico"
Name: "{app}\lib\{#MyAppName}"; Filename: "{app}\{#MyAppExeName}"; IconFilename: "{app}\app.ico"
Name: "{userprograms}\{#MyAppName}"; Filename: "{app}\{#MyAppExeName}"; IconFilename: "{app}\app.ico"
[InstallDelete]
Type: files; Name: "{%LOCALAPPDATA}\Ollama\updates"
[Run]
#if DirExists("..\dist\windows-arm64")
Filename: "{tmp}\vc_redist.arm64.exe"; Parameters: "/install /passive /norestart"; Check: IsArm64() and vc_redist_needed(); StatusMsg: "Installing VC++ Redistributables..."; Flags: waituntilterminated
#endif
#if DirExists("..\dist\windows-amd64")
Filename: "{tmp}\vc_redist.x64.exe"; Parameters: "/install /passive /norestart"; Check: not IsArm64() and vc_redist_needed(); StatusMsg: "Installing VC++ Redistributables..."; Flags: waituntilterminated
#endif
Filename: "{cmd}"; Parameters: "/C set PATH={app};%PATH% & ""{app}\{#MyAppExeName}"""; Flags: postinstall nowait runhidden
[UninstallRun]
@@ -126,13 +139,13 @@ Filename: "{cmd}"; Parameters: "/c timeout 5"; Flags: runhidden
Type: filesandordirs; Name: "{%TEMP}\ollama*"
Type: filesandordirs; Name: "{%LOCALAPPDATA}\Ollama"
Type: filesandordirs; Name: "{%LOCALAPPDATA}\Programs\Ollama"
Type: filesandordirs; Name: "{%USERPROFILE}\.ollama\models"
Type: filesandordirs; Name: "{%USERPROFILE}\.ollama\history"
Type: filesandordirs; Name: "{userstartup}\{#MyAppName}.lnk"
; NOTE: if the user has a custom OLLAMA_MODELS it will be preserved
[InstallDelete]
Type: filesandordirs; Name: "{%TEMP}\ollama*"
Type: filesandordirs; Name: "{%LOCALAPPDATA}\Programs\Ollama"
Type: filesandordirs; Name: "{app}\lib\ollama"
[Messages]
WizardReady=Ollama
@@ -148,6 +161,10 @@ SetupAppRunningError=Another Ollama installer is running.%n%nPlease cancel or fi
Root: HKCU; Subkey: "Environment"; \
ValueType: expandsz; ValueName: "Path"; ValueData: "{olddata};{app}"; \
Check: NeedsAddPath('{app}')
; Register ollama:// URL protocol
Root: HKCU; Subkey: "Software\Classes\ollama"; ValueType: string; ValueName: ""; ValueData: "URL:Ollama Protocol"; Flags: uninsdeletekey
Root: HKCU; Subkey: "Software\Classes\ollama"; ValueType: string; ValueName: "URL Protocol"; ValueData: ""; Flags: uninsdeletekey
Root: HKCU; Subkey: "Software\Classes\ollama\shell\open\command"; ValueType: string; ValueName: ""; ValueData: """{app}\{#MyAppExeName}"" ""%1"""; Flags: uninsdeletekey
[Code]
@@ -182,7 +199,11 @@ var
v3: Cardinal;
v4: Cardinal;
begin
sRegKey := 'SOFTWARE\WOW6432Node\Microsoft\VisualStudio\14.0\VC\Runtimes\arm64';
if (IsArm64()) then begin
sRegKey := 'SOFTWARE\WOW6432Node\Microsoft\VisualStudio\14.0\VC\Runtimes\arm64';
end else begin
sRegKey := 'SOFTWARE\Microsoft\VisualStudio\14.0\VC\Runtimes\x64';
end;
if (RegQueryDWordValue (HKEY_LOCAL_MACHINE, sRegKey, 'Major', v1) and
RegQueryDWordValue (HKEY_LOCAL_MACHINE, sRegKey, 'Minor', v2) and
RegQueryDWordValue (HKEY_LOCAL_MACHINE, sRegKey, 'Bld', v3) and
@@ -202,3 +223,152 @@ begin
else
Result := TRUE;
end;
function GetDirSize(Path: String): Int64;
var
FindRec: TFindRec;
FilePath: string;
Size: Int64;
begin
if FindFirst(Path + '\*', FindRec) then begin
Result := 0;
try
repeat
if (FindRec.Name <> '.') and (FindRec.Name <> '..') then begin
FilePath := Path + '\' + FindRec.Name;
if (FindRec.Attributes and FILE_ATTRIBUTE_DIRECTORY) <> 0 then begin
Size := GetDirSize(FilePath);
end else begin
Size := Int64(FindRec.SizeHigh) shl 32 + FindRec.SizeLow;
end;
Result := Result + Size;
end;
until not FindNext(FindRec);
finally
FindClose(FindRec);
end;
end else begin
Log(Format('Failed to list %s', [Path]));
Result := -1;
end;
end;
var
DeleteModelsChecked: Boolean;
ModelsDir: string;
procedure InitializeUninstallProgressForm();
var
UninstallPage: TNewNotebookPage;
UninstallButton: TNewButton;
DeleteModelsCheckbox: TNewCheckBox;
OriginalPageNameLabel: string;
OriginalPageDescriptionLabel: string;
OriginalCancelButtonEnabled: Boolean;
OriginalCancelButtonModalResult: Integer;
ctrl: TWinControl;
ModelDirA: AnsiString;
ModelsSize: Int64;
begin
if not UninstallSilent then begin
ctrl := UninstallProgressForm.CancelButton;
UninstallButton := TNewButton.Create(UninstallProgressForm);
UninstallButton.Parent := UninstallProgressForm;
UninstallButton.Left := ctrl.Left - ctrl.Width - ScaleX(10);
UninstallButton.Top := ctrl.Top;
UninstallButton.Width := ctrl.Width;
UninstallButton.Height := ctrl.Height;
UninstallButton.TabOrder := ctrl.TabOrder;
UninstallButton.Caption := 'Uninstall';
UninstallButton.ModalResult := mrOK;
UninstallProgressForm.CancelButton.TabOrder := UninstallButton.TabOrder + 1;
UninstallPage := TNewNotebookPage.Create(UninstallProgressForm);
UninstallPage.Notebook := UninstallProgressForm.InnerNotebook;
UninstallPage.Parent := UninstallProgressForm.InnerNotebook;
UninstallPage.Align := alClient;
UninstallProgressForm.InnerNotebook.ActivePage := UninstallPage;
ctrl := UninstallProgressForm.StatusLabel;
with TNewStaticText.Create(UninstallProgressForm) do begin
Parent := UninstallPage;
Top := ctrl.Top;
Left := ctrl.Left;
Width := ctrl.Width;
Height := ctrl.Height;
AutoSize := False;
ShowAccelChar := False;
Caption := '';
end;
if (DirExists(GetEnv('USERPROFILE') + '\.ollama\models\blobs')) then begin
ModelsDir := GetEnv('USERPROFILE') + '\.ollama\models';
ModelsSize := GetDirSize(ModelsDir);
end;
DeleteModelsCheckbox := TNewCheckBox.Create(UninstallProgressForm);
DeleteModelsCheckbox.Parent := UninstallPage;
DeleteModelsCheckbox.Top := ctrl.Top + ScaleY(30);
DeleteModelsCheckbox.Left := ctrl.Left;
DeleteModelsCheckbox.Width := ScaleX(300);
if ModelsSize > 1024*1024*1024 then begin
DeleteModelsCheckbox.Caption := 'Remove models (' + IntToStr(ModelsSize/(1024*1024*1024)) + ' GB) ' + ModelsDir;
end else if ModelsSize > 1024*1024 then begin
DeleteModelsCheckbox.Caption := 'Remove models (' + IntToStr(ModelsSize/(1024*1024)) + ' MB) ' + ModelsDir;
end else begin
DeleteModelsCheckbox.Caption := 'Remove models ' + ModelsDir;
end;
DeleteModelsCheckbox.Checked := True;
OriginalPageNameLabel := UninstallProgressForm.PageNameLabel.Caption;
OriginalPageDescriptionLabel := UninstallProgressForm.PageDescriptionLabel.Caption;
OriginalCancelButtonEnabled := UninstallProgressForm.CancelButton.Enabled;
OriginalCancelButtonModalResult := UninstallProgressForm.CancelButton.ModalResult;
UninstallProgressForm.PageNameLabel.Caption := '';
UninstallProgressForm.PageDescriptionLabel.Caption := '';
UninstallProgressForm.CancelButton.Enabled := True;
UninstallProgressForm.CancelButton.ModalResult := mrCancel;
if UninstallProgressForm.ShowModal = mrCancel then Abort;
UninstallButton.Visible := False;
UninstallProgressForm.PageNameLabel.Caption := OriginalPageNameLabel;
UninstallProgressForm.PageDescriptionLabel.Caption := OriginalPageDescriptionLabel;
UninstallProgressForm.CancelButton.Enabled := OriginalCancelButtonEnabled;
UninstallProgressForm.CancelButton.ModalResult := OriginalCancelButtonModalResult;
UninstallProgressForm.InnerNotebook.ActivePage := UninstallProgressForm.InstallingPage;
if DeleteModelsCheckbox.Checked then begin
DeleteModelsChecked:=True;
end else begin
DeleteModelsChecked:=False;
end;
end;
end;
procedure CurUninstallStepChanged(CurUninstallStep: TUninstallStep);
begin
if CurUninstallStep = usDone then begin
if DeleteModelsChecked then begin
Log('user requested model cleanup');
if (VarIsEmpty(ModelsDir)) then begin
Log('cleaning up home directory models');
DelTree(GetEnv('USERPROFILE') + '\.ollama\models', True, True, True);
end else begin
Log('cleaning up custom directory models ' + ModelsDir);
DelTree(ModelsDir + '\blobs', True, True, True);
DelTree(ModelsDir + '\manifests', True, True, True);
end;
end else begin
Log('user requested to preserve model dir');
end;
end;
end;
procedure TaskKill(FileName: String);
var
ResultCode: Integer;
begin
Exec('taskkill.exe', '/f /im ' + '"' + FileName + '"', '', SW_HIDE, ewWaitUntilTerminated, ResultCode);
end;

View File

@@ -1,8 +0,0 @@
# TODO - consider ANSI colors and maybe ASCII art...
write-host ""
write-host "Welcome to Ollama!"
write-host ""
write-host "Run your first model:"
write-host ""
write-host "`tollama run llama3.2"
write-host ""

357
app/server/server.go Normal file
View File

@@ -0,0 +1,357 @@
//go:build windows || darwin
package server
import (
"bufio"
"context"
"errors"
"fmt"
"io"
"log/slog"
"os"
"os/exec"
"path/filepath"
"regexp"
"runtime"
"strconv"
"strings"
"time"
"github.com/ollama/ollama/app/logrotate"
"github.com/ollama/ollama/app/store"
)
const restartDelay = time.Second
// Server is a managed ollama server process
type Server struct {
store *store.Store
bin string // resolved path to `ollama`
log io.WriteCloser
dev bool // true if running with the dev flag
}
type InferenceCompute struct {
Library string
Variant string
Compute string
Driver string
Name string
VRAM string
}
func New(s *store.Store, devMode bool) *Server {
p := resolvePath("ollama")
return &Server{store: s, bin: p, dev: devMode}
}
func resolvePath(name string) string {
// look in the app bundle first
if exe, _ := os.Executable(); exe != "" {
var dir string
if runtime.GOOS == "windows" {
dir = filepath.Dir(exe)
} else {
dir = filepath.Join(filepath.Dir(exe), "..", "Resources")
}
if _, err := os.Stat(filepath.Join(dir, name)); err == nil {
return filepath.Join(dir, name)
}
}
// check the development dist path
for _, path := range []string{
filepath.Join("dist", runtime.GOOS, name),
filepath.Join("dist", runtime.GOOS+"-"+runtime.GOARCH, name),
} {
if _, err := os.Stat(path); err == nil {
return path
}
}
// fallback to system path
if p, _ := exec.LookPath(name); p != "" {
return p
}
return name
}
// cleanup checks the pid file for a running ollama process
// and shuts it down gracefully if it is running
func cleanup() error {
data, err := os.ReadFile(pidFile)
if err != nil {
if os.IsNotExist(err) {
return nil
}
return err
}
defer os.Remove(pidFile)
pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
if err != nil {
return err
}
proc, err := os.FindProcess(pid)
if err != nil {
return nil
}
ok, err := terminated(pid)
if err != nil {
slog.Debug("cleanup: error checking if terminated", "pid", pid, "err", err)
}
if ok {
return nil
}
slog.Info("detected previous ollama process, cleaning up", "pid", pid)
return stop(proc)
}
// stop asks the process to terminate gracefully, then waits for it to exit by
// polling `terminated(pid)`. If the process has not exited within 5 seconds,
// it logs a warning and kills the process.
func stop(proc *os.Process) error {
if proc == nil {
return nil
}
if err := terminate(proc); err != nil {
slog.Warn("graceful terminate failed, killing", "err", err)
return proc.Kill()
}
deadline := time.NewTimer(5 * time.Second)
defer deadline.Stop()
for {
select {
case <-deadline.C:
slog.Warn("timeout waiting for graceful shutdown; killing", "pid", proc.Pid)
return proc.Kill()
default:
ok, err := terminated(proc.Pid)
if err != nil {
slog.Error("error checking if ollama process is terminated", "err", err)
return err
}
if ok {
return nil
}
time.Sleep(10 * time.Millisecond)
}
}
}
func (s *Server) Run(ctx context.Context) error {
l, err := openRotatingLog()
if err != nil {
return err
}
s.log = l
defer s.log.Close()
if err := cleanup(); err != nil {
slog.Warn("failed to cleanup previous ollama process", "err", err)
}
reaped := false
for ctx.Err() == nil {
select {
case <-ctx.Done():
return ctx.Err()
case <-time.After(restartDelay):
}
cmd, err := s.cmd(ctx)
if err != nil {
return err
}
if err := cmd.Start(); err != nil {
return err
}
err = os.WriteFile(pidFile, []byte(strconv.Itoa(cmd.Process.Pid)), 0o644)
if err != nil {
slog.Warn("failed to write pid file", "file", pidFile, "err", err)
}
if err = cmd.Wait(); err != nil && !errors.Is(err, context.Canceled) {
var exitErr *exec.ExitError
if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 && !s.dev && !reaped {
reaped = true
// This could be a port conflict, try to kill any existing ollama processes
if err := reapServers(); err != nil {
slog.Warn("failed to stop existing ollama server", "err", err)
} else {
slog.Debug("conflicting server stopped, waiting for port to be released")
continue
}
}
slog.Error("ollama exited", "err", err)
}
}
return ctx.Err()
}
func (s *Server) cmd(ctx context.Context) (*exec.Cmd, error) {
settings, err := s.store.Settings()
if err != nil {
return nil, err
}
cmd := commandContext(ctx, s.bin, "serve")
cmd.Stdout, cmd.Stderr = s.log, s.log
// Copy and mutate the environment to merge in settings the user has specified without dups
env := map[string]string{}
for _, kv := range os.Environ() {
s := strings.SplitN(kv, "=", 2)
env[s[0]] = s[1]
}
if settings.Expose {
env["OLLAMA_HOST"] = "0.0.0.0"
}
if settings.Browser {
env["OLLAMA_ORIGINS"] = "*"
}
if settings.Models != "" {
if _, err := os.Stat(settings.Models); err == nil {
env["OLLAMA_MODELS"] = settings.Models
} else {
slog.Warn("models path not accessible, clearing models setting", "path", settings.Models, "err", err)
settings.Models = ""
s.store.SetSettings(settings)
}
}
if settings.ContextLength > 0 {
env["OLLAMA_CONTEXT_LENGTH"] = strconv.Itoa(settings.ContextLength)
}
cmd.Env = []string{}
for k, v := range env {
cmd.Env = append(cmd.Env, k+"="+v)
}
cmd.Cancel = func() error {
if cmd.Process == nil {
return nil
}
return stop(cmd.Process)
}
return cmd, nil
}
func openRotatingLog() (io.WriteCloser, error) {
// TODO consider rotation based on size or time, not just every server invocation
dir := filepath.Dir(serverLogPath)
if err := os.MkdirAll(dir, 0o755); err != nil {
return nil, fmt.Errorf("create log directory: %w", err)
}
logrotate.Rotate(serverLogPath)
f, err := os.OpenFile(serverLogPath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
if err != nil {
return nil, fmt.Errorf("open log file: %w", err)
}
return f, nil
}
// GetInferenceComputer attempts to retrieve inference compute information
// from the server log. Set a timeout on ctx to control how long to wait for
// the log entries to appear.
func GetInferenceComputer(ctx context.Context) ([]InferenceCompute, error) {
inference := []InferenceCompute{}
marker := regexp.MustCompile(`inference compute.*library=`)
q := `inference compute.*%s=["]([^"]*)["]`
nq := `inference compute.*%s=(\S+)\s`
type regex struct {
q *regexp.Regexp
nq *regexp.Regexp
}
regexes := map[string]regex{
"library": {
q: regexp.MustCompile(fmt.Sprintf(q, "library")),
nq: regexp.MustCompile(fmt.Sprintf(nq, "library")),
},
"variant": {
q: regexp.MustCompile(fmt.Sprintf(q, "variant")),
nq: regexp.MustCompile(fmt.Sprintf(nq, "variant")),
},
"compute": {
q: regexp.MustCompile(fmt.Sprintf(q, "compute")),
nq: regexp.MustCompile(fmt.Sprintf(nq, "compute")),
},
"driver": {
q: regexp.MustCompile(fmt.Sprintf(q, "driver")),
nq: regexp.MustCompile(fmt.Sprintf(nq, "driver")),
},
"name": {
q: regexp.MustCompile(fmt.Sprintf(q, "name")),
nq: regexp.MustCompile(fmt.Sprintf(nq, "name")),
},
"total": {
q: regexp.MustCompile(fmt.Sprintf(q, "total")),
nq: regexp.MustCompile(fmt.Sprintf(nq, "total")),
},
}
get := func(field, line string) string {
regex, ok := regexes[field]
if !ok {
slog.Warn("missing field", "field", field)
return ""
}
match := regex.q.FindStringSubmatch(line)
if len(match) > 1 {
return match[1]
}
match = regex.nq.FindStringSubmatch(line)
if len(match) > 1 {
return match[1]
}
return ""
}
for {
select {
case <-ctx.Done():
return nil, fmt.Errorf("timeout scanning server log for inference compute details")
default:
}
file, err := os.Open(serverLogPath)
if err != nil {
slog.Debug("failed to open server log", "log", serverLogPath, "error", err)
time.Sleep(time.Second)
continue
}
scanner := bufio.NewScanner(file)
for scanner.Scan() {
line := scanner.Text()
match := marker.FindStringSubmatch(line)
if len(match) > 0 {
ic := InferenceCompute{
Library: get("library", line),
Variant: get("variant", line),
Compute: get("compute", line),
Driver: get("driver", line),
Name: get("name", line),
VRAM: get("total", line),
}
slog.Info("Matched", "inference compute", ic)
inference = append(inference, ic)
} else {
// Break out on the first non-matching line after we start matching
if len(inference) > 0 {
file.Close() //nolint:errcheck
return inference, nil
}
}
}
// Close explicitly each iteration; a defer here would pile up handles
// while we keep polling the log.
file.Close() //nolint:errcheck
time.Sleep(100 * time.Millisecond)
}
}
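A hedged usage sketch for GetInferenceComputer (windows/darwin build; the API is taken from the file above, the rest is illustrative). Because the function polls the server log until the "inference compute" lines show up, the context should carry a deadline:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/ollama/ollama/app/server"
)

func main() {
	// Bound the wait: GetInferenceComputer retries until ctx expires.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	ics, err := server.GetInferenceComputer(ctx)
	if err != nil {
		fmt.Println("no inference compute details:", err)
		return
	}
	for _, ic := range ics {
		fmt.Printf("%s %s (driver %s, %s VRAM)\n", ic.Library, ic.Name, ic.Driver, ic.VRAM)
	}
}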

249
app/server/server_test.go Normal file
View File

@@ -0,0 +1,249 @@
//go:build windows || darwin
package server
import (
"context"
"os"
"path/filepath"
"reflect"
"strings"
"testing"
"time"
"github.com/ollama/ollama/app/store"
)
func TestNew(t *testing.T) {
tmpDir := t.TempDir()
st := &store.Store{DBPath: filepath.Join(tmpDir, "db.sqlite")}
defer st.Close() // Ensure database is closed before cleanup
s := New(st, false)
if s == nil {
t.Fatal("expected non-nil server")
}
if s.bin == "" {
t.Error("expected non-empty bin path")
}
}
func TestServerCmd(t *testing.T) {
os.Unsetenv("OLLAMA_HOST")
os.Unsetenv("OLLAMA_ORIGINS")
os.Unsetenv("OLLAMA_MODELS")
var defaultModels string
home, err := os.UserHomeDir()
if err == nil {
defaultModels = filepath.Join(home, ".ollama", "models")
os.MkdirAll(defaultModels, 0o755)
}
tmpModels := t.TempDir()
tests := []struct {
name string
settings store.Settings
want []string
dont []string
}{
{
name: "default",
settings: store.Settings{},
want: []string{"OLLAMA_MODELS=" + defaultModels},
dont: []string{"OLLAMA_HOST=", "OLLAMA_ORIGINS="},
},
{
name: "expose",
settings: store.Settings{Expose: true},
want: []string{"OLLAMA_HOST=0.0.0.0", "OLLAMA_MODELS=" + defaultModels},
dont: []string{"OLLAMA_ORIGINS="},
},
{
name: "browser",
settings: store.Settings{Browser: true},
want: []string{"OLLAMA_ORIGINS=*", "OLLAMA_MODELS=" + defaultModels},
dont: []string{"OLLAMA_HOST="},
},
{
name: "models",
settings: store.Settings{Models: tmpModels},
want: []string{"OLLAMA_MODELS=" + tmpModels},
dont: []string{"OLLAMA_HOST=", "OLLAMA_ORIGINS="},
},
{
name: "inaccessible_models",
settings: store.Settings{Models: "/nonexistent/external/drive/models"},
want: []string{},
dont: []string{"OLLAMA_MODELS="},
},
{
name: "all",
settings: store.Settings{
Expose: true,
Browser: true,
Models: tmpModels,
},
want: []string{
"OLLAMA_HOST=0.0.0.0",
"OLLAMA_ORIGINS=*",
"OLLAMA_MODELS=" + tmpModels,
},
dont: []string{},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tmpDir := t.TempDir()
st := &store.Store{DBPath: filepath.Join(tmpDir, "db.sqlite")}
defer st.Close() // Ensure database is closed before cleanup
st.SetSettings(tt.settings)
s := &Server{
store: st,
}
cmd, err := s.cmd(t.Context())
if err != nil {
t.Fatalf("s.cmd() error = %v", err)
}
for _, want := range tt.want {
found := false
for _, env := range cmd.Env {
if strings.Contains(env, want) {
found = true
break
}
}
if !found {
t.Errorf("expected environment variable containing %s", want)
}
}
for _, dont := range tt.dont {
for _, env := range cmd.Env {
if strings.Contains(env, dont) {
t.Errorf("unexpected environment variable: %s", env)
}
}
}
if cmd.Cancel == nil {
t.Error("expected non-nil cancel function")
}
})
}
}
func TestGetInferenceComputer(t *testing.T) {
tests := []struct {
name string
log string
exp []InferenceCompute
}{
{
name: "metal",
log: `time=2025-06-30T09:23:07.374-07:00 level=DEBUG source=sched.go:108 msg="starting llm scheduler"
time=2025-06-30T09:23:07.416-07:00 level=INFO source=types.go:130 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="96.0 GiB" available="96.0 GiB"
time=2025-06-30T09:25:56.197-07:00 level=DEBUG source=ggml.go:155 msg="key not found" key=general.alignment default=32
`,
exp: []InferenceCompute{{
Library: "metal",
Driver: "0.0",
VRAM: "96.0 GiB",
}},
},
{
name: "cpu",
log: `time=2025-07-01T17:59:51.470Z level=INFO source=gpu.go:377 msg="no compatible GPUs were discovered"
time=2025-07-01T17:59:51.470Z level=INFO source=types.go:130 msg="inference compute" id=0 library=cpu variant="" compute="" driver=0.0 name="" total="31.3 GiB" available="30.4 GiB"
[GIN] 2025/07/01 - 18:00:09 | 200 | 50.263µs | 100.126.204.152 | HEAD "/"
`,
exp: []InferenceCompute{{
Library: "cpu",
Driver: "0.0",
VRAM: "31.3 GiB",
}},
},
{
name: "cuda1",
log: `time=2025-07-01T19:33:43.162Z level=DEBUG source=amd_linux.go:419 msg="amdgpu driver not detected /sys/module/amdgpu"
releasing cuda driver library
time=2025-07-01T19:33:43.162Z level=INFO source=types.go:130 msg="inference compute" id=GPU-452cac9f-6960-839c-4fb3-0cec83699196 library=cuda variant=v12 compute=6.1 driver=12.7 name="NVIDIA GeForce GT 1030" total="3.9 GiB" available="3.9 GiB"
[GIN] 2025/07/01 - 18:00:09 | 200 | 50.263µs | 100.126.204.152 | HEAD "/"
`,
exp: []InferenceCompute{{
Library: "cuda",
Variant: "v12",
Compute: "6.1",
Driver: "12.7",
Name: "NVIDIA GeForce GT 1030",
VRAM: "3.9 GiB",
}},
},
{
name: "frank",
log: `time=2025-07-01T19:36:13.315Z level=INFO source=amd_linux.go:386 msg="amdgpu is supported" gpu=GPU-9abb57639fa80c50 gpu_type=gfx1030
releasing cuda driver library
time=2025-07-01T19:36:13.315Z level=INFO source=types.go:130 msg="inference compute" id=GPU-d6de3398-9932-6902-11ec-fee8e424c8a2 library=cuda variant=v12 compute=7.5 driver=12.8 name="NVIDIA GeForce RTX 2080 Ti" total="10.6 GiB" available="10.4 GiB"
time=2025-07-01T19:36:13.315Z level=INFO source=types.go:130 msg="inference compute" id=GPU-9abb57639fa80c50 library=rocm variant="" compute=gfx1030 driver=6.3 name=1002:73bf total="16.0 GiB" available="1.3 GiB"
[GIN] 2025/07/01 - 18:00:09 | 200 | 50.263µs | 100.126.204.152 | HEAD "/"
`,
exp: []InferenceCompute{
{
Library: "cuda",
Variant: "v12",
Compute: "7.5",
Driver: "12.8",
Name: "NVIDIA GeForce RTX 2080 Ti",
VRAM: "10.6 GiB",
},
{
Library: "rocm",
Compute: "gfx1030",
Driver: "6.3",
Name: "1002:73bf",
VRAM: "16.0 GiB",
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tmpDir := t.TempDir()
serverLogPath = filepath.Join(tmpDir, "server.log")
err := os.WriteFile(serverLogPath, []byte(tt.log), 0o644)
if err != nil {
t.Fatalf("failed to write log file %s: %s", serverLogPath, err)
}
ctx, cancel := context.WithTimeout(t.Context(), 10*time.Millisecond)
defer cancel()
ics, err := GetInferenceComputer(ctx)
if err != nil {
t.Fatalf(" failed to get inference compute: %v", err)
}
if !reflect.DeepEqual(ics, tt.exp) {
t.Fatalf("got:\n%#v\nwant:\n%#v", ics, tt.exp)
}
})
}
}
func TestGetInferenceComputerTimeout(t *testing.T) {
ctx, cancel := context.WithTimeout(t.Context(), 10*time.Millisecond)
defer cancel()
tmpDir := t.TempDir()
serverLogPath = filepath.Join(tmpDir, "server.log")
err := os.WriteFile(serverLogPath, []byte("foo\nbar\nbaz\n"), 0o644)
if err != nil {
t.Fatalf("failed to write log file %s: %s", serverLogPath, err)
}
_, err = GetInferenceComputer(ctx)
if err == nil {
t.Fatal("expected timeout")
}
if !strings.Contains(err.Error(), "timeout") {
t.Fatalf("unexpected error: %s", err)
}
}

104
app/server/server_unix.go Normal file
View File

@@ -0,0 +1,104 @@
//go:build darwin
package server
import (
"context"
"errors"
"fmt"
"log/slog"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"syscall"
)
var (
pidFile = filepath.Join(os.Getenv("HOME"), "Library", "Application Support", "Ollama", "ollama.pid")
serverLogPath = filepath.Join(os.Getenv("HOME"), ".ollama", "logs", "server.log")
)
func commandContext(ctx context.Context, name string, arg ...string) *exec.Cmd {
return exec.CommandContext(ctx, name, arg...)
}
func terminate(proc *os.Process) error {
return proc.Signal(os.Interrupt)
}
func terminated(pid int) (bool, error) {
proc, err := os.FindProcess(pid)
if err != nil {
return false, fmt.Errorf("failed to find process: %v", err)
}
err = proc.Signal(syscall.Signal(0))
if err != nil {
if errors.Is(err, os.ErrProcessDone) || errors.Is(err, syscall.ESRCH) {
return true, nil
}
return false, fmt.Errorf("error signaling process: %v", err)
}
return false, nil
}
// reapServers kills all ollama processes except our own
func reapServers() error {
// Get our own PID to avoid killing ourselves
currentPID := os.Getpid()
// Use pgrep to list ollama processes
// -x matches the whole command name exactly
// We get the PID list first, then signal each process selectively
cmd := exec.Command("pgrep", "-x", "ollama")
output, err := cmd.Output()
if err != nil {
// No ollama processes found
slog.Debug("no ollama processes found")
return nil //nolint:nilerr
}
pidsStr := strings.TrimSpace(string(output))
if pidsStr == "" {
return nil
}
pids := strings.Split(pidsStr, "\n")
for _, pidStr := range pids {
pidStr = strings.TrimSpace(pidStr)
if pidStr == "" {
continue
}
pid, err := strconv.Atoi(pidStr)
if err != nil {
slog.Debug("failed to parse PID", "pidStr", pidStr, "err", err)
continue
}
if pid == currentPID {
continue
}
proc, err := os.FindProcess(pid)
if err != nil {
slog.Debug("failed to find process", "pid", pid, "err", err)
continue
}
if err := proc.Signal(syscall.SIGTERM); err != nil {
// Try SIGKILL if SIGTERM fails
if err := proc.Signal(syscall.SIGKILL); err != nil {
slog.Warn("failed to stop external ollama process", "pid", pid, "err", err)
continue
}
}
slog.Info("stopped external ollama process", "pid", pid)
}
return nil
}

View File

@@ -0,0 +1,149 @@
package server
import (
"context"
"fmt"
"log/slog"
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"syscall"
"golang.org/x/sys/windows"
)
var (
pidFile = filepath.Join(os.Getenv("LOCALAPPDATA"), "Ollama", "ollama.pid")
serverLogPath = filepath.Join(os.Getenv("LOCALAPPDATA"), "Ollama", "server.log")
)
func commandContext(ctx context.Context, name string, arg ...string) *exec.Cmd {
cmd := exec.CommandContext(ctx, name, arg...)
cmd.SysProcAttr = &syscall.SysProcAttr{
HideWindow: true,
CreationFlags: windows.CREATE_NEW_PROCESS_GROUP,
}
return cmd
}
func terminate(proc *os.Process) error {
dll, err := windows.LoadDLL("kernel32.dll")
if err != nil {
return err
}
defer dll.Release()
pid := proc.Pid
f, err := dll.FindProc("AttachConsole")
if err != nil {
return err
}
r1, _, err := f.Call(uintptr(pid))
if r1 == 0 && err != syscall.ERROR_ACCESS_DENIED {
return err
}
f, err = dll.FindProc("SetConsoleCtrlHandler")
if err != nil {
return err
}
r1, _, err = f.Call(0, 1)
if r1 == 0 {
return err
}
f, err = dll.FindProc("GenerateConsoleCtrlEvent")
if err != nil {
return err
}
r1, _, err = f.Call(windows.CTRL_BREAK_EVENT, uintptr(pid))
if r1 == 0 {
return err
}
r1, _, err = f.Call(windows.CTRL_C_EVENT, uintptr(pid))
if r1 == 0 {
return err
}
return nil
}
const STILL_ACTIVE = 259
func terminated(pid int) (bool, error) {
hProcess, err := windows.OpenProcess(windows.PROCESS_QUERY_INFORMATION, false, uint32(pid))
if err != nil {
if errno, ok := err.(windows.Errno); ok && errno == windows.ERROR_INVALID_PARAMETER {
return true, nil
}
return false, fmt.Errorf("failed to open process: %v", err)
}
defer windows.CloseHandle(hProcess)
var exitCode uint32
err = windows.GetExitCodeProcess(hProcess, &exitCode)
if err != nil {
return false, fmt.Errorf("failed to get exit code: %v", err)
}
if exitCode == STILL_ACTIVE {
return false, nil
}
return true, nil
}
// reapServers kills all ollama processes except our own
func reapServers() error {
// Get current process ID to avoid killing ourselves
currentPID := os.Getpid()
// Use wmic to find ollama processes
cmd := exec.Command("wmic", "process", "where", "name='ollama.exe'", "get", "ProcessId")
cmd.SysProcAttr = &syscall.SysProcAttr{HideWindow: true}
output, err := cmd.Output()
if err != nil {
// No ollama processes found
slog.Debug("no ollama processes found")
return nil //nolint:nilerr
}
lines := strings.Split(string(output), "\n")
var pids []string
for _, line := range lines {
line = strings.TrimSpace(line)
if line == "" || line == "ProcessId" {
continue
}
if _, err := strconv.Atoi(line); err == nil {
pids = append(pids, line)
}
}
for _, pidStr := range pids {
pid, err := strconv.Atoi(pidStr)
if err != nil {
continue
}
if pid == currentPID {
continue
}
cmd := exec.Command("taskkill", "/F", "/PID", pidStr)
if err := cmd.Run(); err != nil {
slog.Warn("failed to kill ollama process", "pid", pid, "err", err)
}
}
return nil
}

1222
app/store/database.go Normal file

File diff suppressed because it is too large

407
app/store/database_test.go Normal file
View File

@@ -0,0 +1,407 @@
//go:build windows || darwin
package store
import (
"database/sql"
"fmt"
"os"
"path/filepath"
"sort"
"strings"
"testing"
"time"
"github.com/google/go-cmp/cmp"
_ "github.com/mattn/go-sqlite3"
)
func TestSchemaMigrations(t *testing.T) {
t.Run("schema comparison after migration", func(t *testing.T) {
tmpDir := t.TempDir()
migratedDBPath := filepath.Join(tmpDir, "migrated.db")
migratedDB := loadV2Schema(t, migratedDBPath)
defer migratedDB.Close()
if err := migratedDB.migrate(); err != nil {
t.Fatalf("migration failed: %v", err)
}
// Create fresh database with current schema
freshDBPath := filepath.Join(tmpDir, "fresh.db")
freshDB, err := newDatabase(freshDBPath)
if err != nil {
t.Fatalf("failed to create fresh database: %v", err)
}
defer freshDB.Close()
// Extract tables and indexes from both databases, directly comparing their schemas won't work due to ordering
migratedSchema := schemaMap(migratedDB)
freshSchema := schemaMap(freshDB)
if !cmp.Equal(migratedSchema, freshSchema) {
t.Errorf("Schema difference found:\n%s", cmp.Diff(freshSchema, migratedSchema))
}
// Verify both databases have the same final schema version
migratedVersion, _ := migratedDB.getSchemaVersion()
freshVersion, _ := freshDB.getSchemaVersion()
if migratedVersion != freshVersion {
t.Errorf("schema version mismatch: migrated=%d, fresh=%d", migratedVersion, freshVersion)
}
})
t.Run("idempotent migrations", func(t *testing.T) {
tmpDir := t.TempDir()
dbPath := filepath.Join(tmpDir, "test.db")
db := loadV2Schema(t, dbPath)
defer db.Close()
// Run migration twice
if err := db.migrate(); err != nil {
t.Fatalf("first migration failed: %v", err)
}
if err := db.migrate(); err != nil {
t.Fatalf("second migration failed: %v", err)
}
// Verify schema version is still correct
version, err := db.getSchemaVersion()
if err != nil {
t.Fatalf("failed to get schema version: %v", err)
}
if version != currentSchemaVersion {
t.Errorf("expected schema version %d after double migration, got %d", currentSchemaVersion, version)
}
})
t.Run("init database has correct schema version", func(t *testing.T) {
tmpDir := t.TempDir()
dbPath := filepath.Join(tmpDir, "test.db")
db, err := newDatabase(dbPath)
if err != nil {
t.Fatalf("failed to create database: %v", err)
}
defer db.Close()
// Get the schema version from the newly initialized database
version, err := db.getSchemaVersion()
if err != nil {
t.Fatalf("failed to get schema version: %v", err)
}
// Verify it matches the currentSchemaVersion constant
if version != currentSchemaVersion {
t.Errorf("expected schema version %d in initialized database, got %d", currentSchemaVersion, version)
}
})
}
func TestChatDeletionWithCascade(t *testing.T) {
t.Run("chat deletion cascades to related messages", func(t *testing.T) {
tmpDir := t.TempDir()
dbPath := filepath.Join(tmpDir, "test.db")
db, err := newDatabase(dbPath)
if err != nil {
t.Fatalf("failed to create database: %v", err)
}
defer db.Close()
// Create test chat
testChatID := "test-chat-cascade-123"
testChat := Chat{
ID: testChatID,
Title: "Test Chat for Cascade Delete",
CreatedAt: time.Now(),
Messages: []Message{
{
Role: "user",
Content: "Hello, this is a test message",
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
},
{
Role: "assistant",
Content: "Hi there! This is a response.",
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
},
},
}
// Save the chat with messages
if err := db.saveChat(testChat); err != nil {
t.Fatalf("failed to save test chat: %v", err)
}
// Verify chat and messages exist
chatCount := countRows(t, db, "chats")
messageCount := countRows(t, db, "messages")
if chatCount != 1 {
t.Errorf("expected 1 chat, got %d", chatCount)
}
if messageCount != 2 {
t.Errorf("expected 2 messages, got %d", messageCount)
}
// Verify specific chat exists
var exists bool
err = db.conn.QueryRow("SELECT EXISTS(SELECT 1 FROM chats WHERE id = ?)", testChatID).Scan(&exists)
if err != nil {
t.Fatalf("failed to check chat existence: %v", err)
}
if !exists {
t.Error("test chat should exist before deletion")
}
// Verify messages exist for this chat
messageCountForChat := countRowsWithCondition(t, db, "messages", "chat_id = ?", testChatID)
if messageCountForChat != 2 {
t.Errorf("expected 2 messages for test chat, got %d", messageCountForChat)
}
// Delete the chat
if err := db.deleteChat(testChatID); err != nil {
t.Fatalf("failed to delete chat: %v", err)
}
// Verify chat is deleted
chatCountAfter := countRows(t, db, "chats")
if chatCountAfter != 0 {
t.Errorf("expected 0 chats after deletion, got %d", chatCountAfter)
}
// Verify messages are CASCADE deleted
messageCountAfter := countRows(t, db, "messages")
if messageCountAfter != 0 {
t.Errorf("expected 0 messages after CASCADE deletion, got %d", messageCountAfter)
}
// Verify specific chat no longer exists
err = db.conn.QueryRow("SELECT EXISTS(SELECT 1 FROM chats WHERE id = ?)", testChatID).Scan(&exists)
if err != nil {
t.Fatalf("failed to check chat existence after deletion: %v", err)
}
if exists {
t.Error("test chat should not exist after deletion")
}
// Verify no orphaned messages remain
orphanedCount := countRowsWithCondition(t, db, "messages", "chat_id = ?", testChatID)
if orphanedCount != 0 {
t.Errorf("expected 0 orphaned messages, got %d", orphanedCount)
}
})
t.Run("foreign keys are enabled", func(t *testing.T) {
tmpDir := t.TempDir()
dbPath := filepath.Join(tmpDir, "test.db")
db, err := newDatabase(dbPath)
if err != nil {
t.Fatalf("failed to create database: %v", err)
}
defer db.Close()
// Verify foreign keys are enabled
var foreignKeysEnabled int
err = db.conn.QueryRow("PRAGMA foreign_keys").Scan(&foreignKeysEnabled)
if err != nil {
t.Fatalf("failed to check foreign keys: %v", err)
}
if foreignKeysEnabled != 1 {
t.Errorf("expected foreign keys to be enabled (1), got %d", foreignKeysEnabled)
}
})
// This test is only relevant for v8 migrations, but we keep it here for now
// since it's a useful test to ensure that we don't introduce any new orphaned data
t.Run("cleanup orphaned data", func(t *testing.T) {
tmpDir := t.TempDir()
dbPath := filepath.Join(tmpDir, "test.db")
db, err := newDatabase(dbPath)
if err != nil {
t.Fatalf("failed to create database: %v", err)
}
defer db.Close()
// First disable foreign keys to simulate the bug from ollama/ollama#11785
_, err = db.conn.Exec("PRAGMA foreign_keys = OFF")
if err != nil {
t.Fatalf("failed to disable foreign keys: %v", err)
}
// Create a chat and message
testChatID := "orphaned-test-chat"
testMessageID := int64(999)
_, err = db.conn.Exec("INSERT INTO chats (id, title) VALUES (?, ?)", testChatID, "Orphaned Test Chat")
if err != nil {
t.Fatalf("failed to insert test chat: %v", err)
}
_, err = db.conn.Exec("INSERT INTO messages (id, chat_id, role, content) VALUES (?, ?, ?, ?)",
testMessageID, testChatID, "user", "test message")
if err != nil {
t.Fatalf("failed to insert test message: %v", err)
}
// Delete chat but keep message (simulating the bug from ollama/ollama#11785)
_, err = db.conn.Exec("DELETE FROM chats WHERE id = ?", testChatID)
if err != nil {
t.Fatalf("failed to delete chat: %v", err)
}
// Verify we have orphaned message
orphanedCount := countRowsWithCondition(t, db, "messages", "chat_id = ?", testChatID)
if orphanedCount != 1 {
t.Errorf("expected 1 orphaned message, got %d", orphanedCount)
}
// Run cleanup
if err := db.cleanupOrphanedData(); err != nil {
t.Fatalf("failed to cleanup orphaned data: %v", err)
}
// Verify orphaned message is gone
orphanedCountAfter := countRowsWithCondition(t, db, "messages", "chat_id = ?", testChatID)
if orphanedCountAfter != 0 {
t.Errorf("expected 0 orphaned messages after cleanup, got %d", orphanedCountAfter)
}
})
}
func countRows(t *testing.T, db *database, table string) int {
t.Helper()
var count int
err := db.conn.QueryRow(fmt.Sprintf("SELECT COUNT(*) FROM %s", table)).Scan(&count)
if err != nil {
t.Fatalf("failed to count rows in %s: %v", table, err)
}
return count
}
func countRowsWithCondition(t *testing.T, db *database, table, condition string, args ...interface{}) int {
t.Helper()
var count int
query := fmt.Sprintf("SELECT COUNT(*) FROM %s WHERE %s", table, condition)
err := db.conn.QueryRow(query, args...).Scan(&count)
if err != nil {
t.Fatalf("failed to count rows with condition: %v", err)
}
return count
}
// Test helpers for schema migration testing
// schemaMap returns both tables/columns and indexes (ignoring order)
func schemaMap(db *database) map[string]interface{} {
result := make(map[string]any)
result["tables"] = columnMap(db)
result["indexes"] = indexMap(db)
return result
}
// columnMap returns a map of table names to their column sets (ignoring order)
func columnMap(db *database) map[string][]string {
result := make(map[string][]string)
// Get all table names
tableQuery := `SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%' ORDER BY name`
rows, _ := db.conn.Query(tableQuery)
defer rows.Close()
for rows.Next() {
var tableName string
rows.Scan(&tableName)
// Get columns for this table
colQuery := fmt.Sprintf("PRAGMA table_info(%s)", tableName)
colRows, _ := db.conn.Query(colQuery)
var columns []string
for colRows.Next() {
var cid int
var name, dataType sql.NullString
var notNull, primaryKey int
var defaultValue sql.NullString
colRows.Scan(&cid, &name, &dataType, &notNull, &defaultValue, &primaryKey)
// Create a normalized column description
colDesc := fmt.Sprintf("%s %s", name.String, dataType.String)
if notNull == 1 {
colDesc += " NOT NULL"
}
if defaultValue.Valid && defaultValue.String != "" {
// Skip DEFAULT for schema_version as it doesn't get updated during migrations
if name.String != "schema_version" {
colDesc += " DEFAULT " + defaultValue.String
}
}
if primaryKey == 1 {
colDesc += " PRIMARY KEY"
}
columns = append(columns, colDesc)
}
colRows.Close()
// Sort columns to ignore order differences
sort.Strings(columns)
result[tableName] = columns
}
return result
}
// indexMap returns a map of index names to their definitions
func indexMap(db *database) map[string]string {
result := make(map[string]string)
// Get all indexes (excluding auto-created primary key indexes)
indexQuery := `SELECT name, sql FROM sqlite_master WHERE type='index' AND name NOT LIKE 'sqlite_%' AND sql IS NOT NULL ORDER BY name`
rows, _ := db.conn.Query(indexQuery)
defer rows.Close()
for rows.Next() {
var name, sql string
rows.Scan(&name, &sql)
// Normalize the SQL by removing extra whitespace
sql = strings.Join(strings.Fields(sql), " ")
result[name] = sql
}
return result
}
// loadV2Schema loads the version 2 schema from testdata/schema.sql
func loadV2Schema(t *testing.T, dbPath string) *database {
t.Helper()
// Read the v2 schema file
schemaFile := filepath.Join("testdata", "schema.sql")
schemaSQL, err := os.ReadFile(schemaFile)
if err != nil {
t.Fatalf("failed to read schema file: %v", err)
}
// Open database connection
conn, err := sql.Open("sqlite3", dbPath+"?_foreign_keys=on&_journal_mode=WAL&_busy_timeout=5000&_txlock=immediate")
if err != nil {
t.Fatalf("failed to open database: %v", err)
}
// Execute the v2 schema
_, err = conn.Exec(string(schemaSQL))
if err != nil {
conn.Close()
t.Fatalf("failed to execute v1 schema: %v", err)
}
return &database{conn: conn}
}

128
app/store/image.go Normal file
View File

@@ -0,0 +1,128 @@
//go:build windows || darwin
package store
import (
"crypto/sha256"
"encoding/hex"
"fmt"
"os"
"path/filepath"
"strings"
)
type Image struct {
Filename string `json:"filename"`
Path string `json:"path"`
Size int64 `json:"size,omitempty"`
MimeType string `json:"mime_type,omitempty"`
}
// Bytes loads image data from disk for a given ImageData reference
func (i *Image) Bytes() ([]byte, error) {
return ImgBytes(i.Path)
}
// ImgBytes reads image data from the specified file path
func ImgBytes(path string) ([]byte, error) {
if path == "" {
return nil, fmt.Errorf("empty image path")
}
data, err := os.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("read image file %s: %w", path, err)
}
return data, nil
}
// ImgDir returns the directory path for storing images for a specific chat
func (s *Store) ImgDir() string {
dbPath := s.DBPath
if dbPath == "" {
dbPath = defaultDBPath
}
storeDir := filepath.Dir(dbPath)
return filepath.Join(storeDir, "cache", "images")
}
// ImgToFile saves image data to disk and returns ImageData reference
func (s *Store) ImgToFile(chatID string, imageBytes []byte, filename, mimeType string) (Image, error) {
baseImageDir := s.ImgDir()
if err := os.MkdirAll(baseImageDir, 0o755); err != nil {
return Image{}, fmt.Errorf("create base image directory: %w", err)
}
// Root prevents path traversal issues
root, err := os.OpenRoot(baseImageDir)
if err != nil {
return Image{}, fmt.Errorf("open image root directory: %w", err)
}
defer root.Close()
// Create chat-specific subdirectory within the root
chatDir := sanitize(chatID)
if err := root.Mkdir(chatDir, 0o755); err != nil && !os.IsExist(err) {
return Image{}, fmt.Errorf("create chat directory: %w", err)
}
// Generate a unique filename to avoid conflicts
// Use hash of content + original filename for uniqueness
hash := sha256.Sum256(imageBytes)
hashStr := hex.EncodeToString(hash[:])[:16] // Use first 16 chars of hash
// Extract file extension from original filename or mime type
ext := filepath.Ext(filename)
if ext == "" {
switch mimeType {
case "image/jpeg":
ext = ".jpg"
case "image/png":
ext = ".png"
case "image/webp":
ext = ".webp"
default:
ext = ".img"
}
}
// Create unique filename: hash + original name + extension
baseFilename := sanitize(strings.TrimSuffix(filename, ext))
uniqueFilename := fmt.Sprintf("%s_%s%s", hashStr, baseFilename, ext)
relativePath := filepath.Join(chatDir, uniqueFilename)
file, err := root.Create(relativePath)
if err != nil {
return Image{}, fmt.Errorf("create image file: %w", err)
}
defer file.Close()
if _, err := file.Write(imageBytes); err != nil {
return Image{}, fmt.Errorf("write image data: %w", err)
}
return Image{
Filename: uniqueFilename,
Path: filepath.Join(baseImageDir, relativePath),
Size: int64(len(imageBytes)),
MimeType: mimeType,
}, nil
}
// sanitize removes unsafe characters from filenames
func sanitize(filename string) string {
// Convert to safe characters only
safe := strings.Map(func(r rune) rune {
if (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') || (r >= '0' && r <= '9') || r == '-' {
return r
}
return '_'
}, filename)
// Clean up and validate
safe = strings.Trim(safe, "_")
if safe == "" {
return "image"
}
return safe
}
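An illustrative sketch of ImgToFile (windows/darwin build; the Store literal and input bytes are hypothetical). The stored name is the first 16 hex characters of the content's SHA-256, an underscore, the sanitized base name, and an extension inferred from the filename or MIME type:

package main

import (
	"fmt"
	"path/filepath"

	"github.com/ollama/ollama/app/store"
)

func main() {
	s := &store.Store{DBPath: filepath.Join("demo", "db.sqlite")} // hypothetical location

	img, err := s.ImgToFile("chat-123", []byte("not-really-a-png"), "photo.png", "image/png")
	if err != nil {
		panic(err)
	}
	// Path ends in cache/images/chat-123/<16-hex-chars>_photo.png
	fmt.Println(img.Path, img.Size, img.MimeType)
}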

231
app/store/migration_test.go Normal file
View File

@@ -0,0 +1,231 @@
//go:build windows || darwin
package store
import (
"database/sql"
"encoding/json"
"os"
"path/filepath"
"testing"
)
func TestConfigMigration(t *testing.T) {
tmpDir := t.TempDir()
// Create a legacy config.json
legacyConfig := legacyData{
ID: "test-device-id-12345",
FirstTimeRun: true, // In old system, true meant "has completed first run"
}
configData, err := json.MarshalIndent(legacyConfig, "", " ")
if err != nil {
t.Fatal(err)
}
configPath := filepath.Join(tmpDir, "config.json")
if err := os.WriteFile(configPath, configData, 0o644); err != nil {
t.Fatal(err)
}
// Override the legacy config path for testing
oldLegacyConfigPath := legacyConfigPath
legacyConfigPath = configPath
defer func() { legacyConfigPath = oldLegacyConfigPath }()
// Create store with database in same directory
s := Store{DBPath: filepath.Join(tmpDir, "db.sqlite")}
defer s.Close()
// First access should trigger migration
id, err := s.ID()
if err != nil {
t.Fatalf("failed to get ID: %v", err)
}
if id != "test-device-id-12345" {
t.Errorf("expected migrated ID 'test-device-id-12345', got '%s'", id)
}
// Check HasCompletedFirstRun
hasCompleted, err := s.HasCompletedFirstRun()
if err != nil {
t.Fatalf("failed to get has completed first run: %v", err)
}
if !hasCompleted {
t.Error("expected has completed first run to be true after migration")
}
// Verify migration is marked as complete
migrated, err := s.db.isConfigMigrated()
if err != nil {
t.Fatalf("failed to check migration status: %v", err)
}
if !migrated {
t.Error("expected config to be marked as migrated")
}
// Create a new store instance to verify migration doesn't run again
s2 := Store{DBPath: filepath.Join(tmpDir, "db.sqlite")}
defer s2.Close()
// Delete the config file to ensure we're not reading from it
os.Remove(configPath)
// Verify data is still there
id2, err := s2.ID()
if err != nil {
t.Fatalf("failed to get ID from second store: %v", err)
}
if id2 != "test-device-id-12345" {
t.Errorf("expected persisted ID 'test-device-id-12345', got '%s'", id2)
}
}
func TestNoConfigToMigrate(t *testing.T) {
tmpDir := t.TempDir()
// Override the legacy config path for testing
oldLegacyConfigPath := legacyConfigPath
legacyConfigPath = filepath.Join(tmpDir, "config.json")
defer func() { legacyConfigPath = oldLegacyConfigPath }()
// Create store without any config.json
s := Store{DBPath: filepath.Join(tmpDir, "db.sqlite")}
defer s.Close()
// Should generate a new ID
id, err := s.ID()
if err != nil {
t.Fatalf("failed to get ID: %v", err)
}
if id == "" {
t.Error("expected auto-generated ID, got empty string")
}
// HasCompletedFirstRun should be false (default)
hasCompleted, err := s.HasCompletedFirstRun()
if err != nil {
t.Fatalf("failed to get has completed first run: %v", err)
}
if hasCompleted {
t.Error("expected has completed first run to be false by default")
}
// Migration should still be marked as complete
migrated, err := s.db.isConfigMigrated()
if err != nil {
t.Fatalf("failed to check migration status: %v", err)
}
if !migrated {
t.Error("expected config to be marked as migrated even with no config.json")
}
}
const (
v1Schema = `
CREATE TABLE IF NOT EXISTS settings (
id INTEGER PRIMARY KEY CHECK (id = 1),
device_id TEXT NOT NULL DEFAULT '',
has_completed_first_run BOOLEAN NOT NULL DEFAULT 0,
expose BOOLEAN NOT NULL DEFAULT 0,
browser BOOLEAN NOT NULL DEFAULT 0,
models TEXT NOT NULL DEFAULT '',
remote TEXT NOT NULL DEFAULT '',
agent BOOLEAN NOT NULL DEFAULT 0,
tools BOOLEAN NOT NULL DEFAULT 0,
working_dir TEXT NOT NULL DEFAULT '',
window_width INTEGER NOT NULL DEFAULT 0,
window_height INTEGER NOT NULL DEFAULT 0,
config_migrated BOOLEAN NOT NULL DEFAULT 0,
schema_version INTEGER NOT NULL DEFAULT 1
);
-- Insert default settings row if it doesn't exist
INSERT OR IGNORE INTO settings (id) VALUES (1);
CREATE TABLE IF NOT EXISTS chats (
id TEXT PRIMARY KEY,
title TEXT NOT NULL DEFAULT '',
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
chat_id TEXT NOT NULL,
role TEXT NOT NULL,
content TEXT NOT NULL DEFAULT '',
thinking TEXT NOT NULL DEFAULT '',
stream BOOLEAN NOT NULL DEFAULT 0,
model_name TEXT,
model_cloud BOOLEAN,
model_ollama_host BOOLEAN,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
thinking_time_start TIMESTAMP,
thinking_time_end TIMESTAMP,
FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_messages_chat_id ON messages(chat_id);
CREATE TABLE IF NOT EXISTS tool_calls (
id INTEGER PRIMARY KEY AUTOINCREMENT,
message_id INTEGER NOT NULL,
type TEXT NOT NULL,
function_name TEXT NOT NULL,
function_arguments TEXT NOT NULL,
function_result TEXT,
FOREIGN KEY (message_id) REFERENCES messages(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_tool_calls_message_id ON tool_calls(message_id);
`
)
func TestMigrationFromEpoc(t *testing.T) {
tmpDir := t.TempDir()
s := Store{DBPath: filepath.Join(tmpDir, "db.sqlite")}
defer s.Close()
// Open database connection
conn, err := sql.Open("sqlite3", s.DBPath+"?_foreign_keys=on&_journal_mode=WAL")
if err != nil {
t.Fatal(err)
}
// Test the connection
if err := conn.Ping(); err != nil {
conn.Close()
t.Fatal(err)
}
s.db = &database{conn: conn}
t.Logf("DB created: %s", s.DBPath)
_, err = s.db.conn.Exec(v1Schema)
if err != nil {
t.Fatal(err)
}
version, err := s.db.getSchemaVersion()
if err != nil {
t.Fatalf("failed to get schema version: %v", err)
}
if version != 1 {
t.Fatalf("expected: %d\n got: %d", 1, version)
}
t.Logf("v1 schema created")
if err := s.db.migrate(); err != nil {
t.Fatal(err)
}
t.Logf("migrations completed")
version, err = s.db.getSchemaVersion()
if err != nil {
t.Fatalf("failed to get schema version: %v", err)
}
if version != currentSchemaVersion {
t.Fatalf("expected: %d\n got: %d", currentSchemaVersion, version)
}
}
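// Illustrative sketch (hypothetical, not part of this change): the diff does
// not show the steps migrate() applies. Judging only from the column
// differences between the v1 schema above and the v2 schema below (v2 adds
// `survey` and `context_length`), a v1 -> v2 step could plausibly look like:
func migrateV1ToV2(conn *sql.DB) error {
stmts := []string{
`ALTER TABLE settings ADD COLUMN survey BOOLEAN NOT NULL DEFAULT TRUE`,
`ALTER TABLE settings ADD COLUMN context_length INTEGER NOT NULL DEFAULT 4096`,
`UPDATE settings SET schema_version = 2 WHERE id = 1`,
}
for _, stmt := range stmts {
if _, err := conn.Exec(stmt); err != nil {
return fmt.Errorf("migrate v1->v2: %w", err)
}
}
return nil
}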

app/store/schema.sql Normal file (+61 lines)

@@ -0,0 +1,61 @@
-- This is the version 2 schema for the app database, the first released schema to users.
-- Do not modify this file. It is used to test that the database schema stays in a consistent state between schema migrations.
CREATE TABLE IF NOT EXISTS settings (
id INTEGER PRIMARY KEY CHECK (id = 1),
device_id TEXT NOT NULL DEFAULT '',
has_completed_first_run BOOLEAN NOT NULL DEFAULT 0,
expose BOOLEAN NOT NULL DEFAULT 0,
survey BOOLEAN NOT NULL DEFAULT TRUE,
browser BOOLEAN NOT NULL DEFAULT 0,
models TEXT NOT NULL DEFAULT '',
remote TEXT NOT NULL DEFAULT '',
agent BOOLEAN NOT NULL DEFAULT 0,
tools BOOLEAN NOT NULL DEFAULT 0,
working_dir TEXT NOT NULL DEFAULT '',
context_length INTEGER NOT NULL DEFAULT 4096,
window_width INTEGER NOT NULL DEFAULT 0,
window_height INTEGER NOT NULL DEFAULT 0,
config_migrated BOOLEAN NOT NULL DEFAULT 0,
schema_version INTEGER NOT NULL DEFAULT 2
);
-- Insert default settings row if it doesn't exist
INSERT OR IGNORE INTO settings (id) VALUES (1);
CREATE TABLE IF NOT EXISTS chats (
id TEXT PRIMARY KEY,
title TEXT NOT NULL DEFAULT '',
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
chat_id TEXT NOT NULL,
role TEXT NOT NULL,
content TEXT NOT NULL DEFAULT '',
thinking TEXT NOT NULL DEFAULT '',
stream BOOLEAN NOT NULL DEFAULT 0,
model_name TEXT,
model_cloud BOOLEAN,
model_ollama_host BOOLEAN,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
thinking_time_start TIMESTAMP,
thinking_time_end TIMESTAMP,
FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_messages_chat_id ON messages(chat_id);
CREATE TABLE IF NOT EXISTS tool_calls (
id INTEGER PRIMARY KEY AUTOINCREMENT,
message_id INTEGER NOT NULL,
type TEXT NOT NULL,
function_name TEXT NOT NULL,
function_arguments TEXT NOT NULL,
function_result TEXT,
FOREIGN KEY (message_id) REFERENCES messages(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_tool_calls_message_id ON tool_calls(message_id);

app/store/schema_test.go Normal file (+60 lines)

@@ -0,0 +1,60 @@
//go:build windows || darwin
package store
import (
"path/filepath"
"testing"
)
func TestSchemaVersioning(t *testing.T) {
tmpDir := t.TempDir()
// Override legacy config path to avoid migration logs
oldLegacyConfigPath := legacyConfigPath
legacyConfigPath = filepath.Join(tmpDir, "config.json")
defer func() { legacyConfigPath = oldLegacyConfigPath }()
t.Run("new database has correct schema version", func(t *testing.T) {
dbPath := filepath.Join(tmpDir, "new_db.sqlite")
db, err := newDatabase(dbPath)
if err != nil {
t.Fatalf("failed to create database: %v", err)
}
defer db.Close()
// Check schema version
version, err := db.getSchemaVersion()
if err != nil {
t.Fatalf("failed to get schema version: %v", err)
}
if version != currentSchemaVersion {
t.Errorf("expected schema version %d, got %d", currentSchemaVersion, version)
}
})
t.Run("can update schema version", func(t *testing.T) {
dbPath := filepath.Join(tmpDir, "update_db.sqlite")
db, err := newDatabase(dbPath)
if err != nil {
t.Fatalf("failed to create database: %v", err)
}
defer db.Close()
// Set a different version
testVersion := 42
if err := db.setSchemaVersion(testVersion); err != nil {
t.Fatalf("failed to set schema version: %v", err)
}
// Verify it was updated
version, err := db.getSchemaVersion()
if err != nil {
t.Fatalf("failed to get schema version: %v", err)
}
if version != testVersion {
t.Errorf("expected schema version %d, got %d", testVersion, version)
}
})
}


@@ -1,97 +1,495 @@
//go:build windows || darwin
// Package store provides a simple JSON file store for the desktop application
// to save and load data such as ollama server configuration, messages,
// login information and more.
package store
import (
"encoding/json"
"errors"
"fmt"
"log/slog"
"os"
"path/filepath"
"runtime"
"sync"
"time"
"github.com/google/uuid"
"github.com/ollama/ollama/app/types/not"
)
type File struct {
Filename string `json:"filename"`
Data []byte `json:"data"`
}
type User struct {
Name string `json:"name"`
Email string `json:"email"`
Plan string `json:"plan"`
CachedAt time.Time `json:"cachedAt"`
}
type Message struct {
Role string `json:"role"`
Content string `json:"content"`
Thinking string `json:"thinking"`
Stream bool `json:"stream"`
Model string `json:"model,omitempty"`
Attachments []File `json:"attachments,omitempty"`
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
ToolCall *ToolCall `json:"tool_call,omitempty"`
ToolName string `json:"tool_name,omitempty"`
ToolResult *json.RawMessage `json:"tool_result,omitempty"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
ThinkingTimeStart *time.Time `json:"thinkingTimeStart,omitempty" ts_type:"Date | undefined" ts_transform:"__VALUE__ && new Date(__VALUE__)"`
ThinkingTimeEnd *time.Time `json:"thinkingTimeEnd,omitempty" ts_type:"Date | undefined" ts_transform:"__VALUE__ && new Date(__VALUE__)"`
}
// MessageOptions contains optional parameters for creating a Message
type MessageOptions struct {
Model string
Attachments []File
Stream bool
Thinking string
ToolCalls []ToolCall
ToolCall *ToolCall
ToolResult *json.RawMessage
ThinkingTimeStart *time.Time
ThinkingTimeEnd *time.Time
}
// NewMessage creates a new Message with the given options
func NewMessage(role, content string, opts *MessageOptions) Message {
now := time.Now()
msg := Message{
Role: role,
Content: content,
CreatedAt: now,
UpdatedAt: now,
}
if opts != nil {
msg.Model = opts.Model
msg.Attachments = opts.Attachments
msg.Stream = opts.Stream
msg.Thinking = opts.Thinking
msg.ToolCalls = opts.ToolCalls
msg.ToolCall = opts.ToolCall
msg.ToolResult = opts.ToolResult
msg.ThinkingTimeStart = opts.ThinkingTimeStart
msg.ThinkingTimeEnd = opts.ThinkingTimeEnd
}
return msg
}
type ToolCall struct {
Type string `json:"type"`
Function ToolFunction `json:"function"`
}
type ToolFunction struct {
Name string `json:"name"`
Arguments string `json:"arguments"`
Result any `json:"result,omitempty"`
}
type Model struct {
Model string `json:"model"` // Model name
Digest string `json:"digest,omitempty"` // Model digest from the registry
ModifiedAt *time.Time `json:"modified_at,omitempty"` // When the model was last modified locally
}
type Chat struct {
ID string `json:"id"`
Messages []Message `json:"messages"`
Title string `json:"title"`
CreatedAt time.Time `json:"created_at"`
BrowserState json.RawMessage `json:"browser_state,omitempty" ts_type:"BrowserStateData"`
}
// NewChat creates a new Chat with the given ID and an initialized CreatedAt timestamp
func NewChat(id string) *Chat {
return &Chat{
ID: id,
Messages: []Message{},
CreatedAt: time.Now(),
}
}
type Settings struct {
// Expose is a boolean that indicates if the ollama server should
// be exposed to the network
Expose bool
// Browser is a boolean that indicates if the ollama server should
// be exposed to browser windows (e.g. CORS set to allow all origins)
Browser bool
// Survey is a boolean that indicates if the user allows anonymous
// inference information to be shared with Ollama
Survey bool
// Models is a string that contains the models to load on startup
Models string
// TODO(parthsareen): temporary for experimentation
// Agent indicates if the app should use multi-turn tools to fulfill user requests
Agent bool
// Tools indicates if the app should use single-turn tools to fulfill user requests
Tools bool
// WorkingDir specifies the working directory for all agent operations
WorkingDir string
// ContextLength specifies the context length for the ollama server (using OLLAMA_CONTEXT_LENGTH)
ContextLength int
// AirplaneMode when true, turns off Ollama Turbo features and only uses local models
AirplaneMode bool
// TurboEnabled indicates if Ollama Turbo features are enabled
TurboEnabled bool
// Maps the gpt-oss-specific frontend name 'BrowserToolEnabled' to the db field 'websearch_enabled'
WebSearchEnabled bool
// ThinkEnabled indicates if thinking is enabled
ThinkEnabled bool
// ThinkLevel indicates the level of thinking to use for models that support multiple levels
ThinkLevel string
// SelectedModel stores the last model that the user selected
SelectedModel string
// SidebarOpen indicates if the chat sidebar is open
SidebarOpen bool
}
type Store struct {
// DBPath allows overriding the default database path (mainly for testing)
DBPath string
// dbMu protects database initialization only
dbMu sync.Mutex
db *database
}
var defaultDBPath = func() string {
switch runtime.GOOS {
case "windows":
return filepath.Join(os.Getenv("LOCALAPPDATA"), "Ollama", "db.sqlite")
case "darwin":
return filepath.Join(os.Getenv("HOME"), "Library", "Application Support", "Ollama", "db.sqlite")
default:
return filepath.Join(os.Getenv("HOME"), ".ollama", "db.sqlite")
}
}()
// legacyConfigPath is the path to the old config.json file
var legacyConfigPath = func() string {
switch runtime.GOOS {
case "windows":
return filepath.Join(os.Getenv("LOCALAPPDATA"), "Ollama", "config.json")
case "darwin":
return filepath.Join(os.Getenv("HOME"), "Library", "Application Support", "Ollama", "config.json")
default:
return filepath.Join(os.Getenv("HOME"), ".ollama", "config.json")
}
}()
// legacyData represents the old config.json structure (only fields we need to migrate)
type legacyData struct {
ID string `json:"id"`
FirstTimeRun bool `json:"first-time-run"`
}
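// Illustrative sketch (not part of this change): the config.json shape
// implied by legacyData's JSON tags, and how it decodes. Only these two
// fields are read during migration; anything else in the file is ignored.
//
// var legacy legacyData
// _ = json.Unmarshal([]byte(`{"id":"<device uuid>","first-time-run":true}`), &legacy)
// // legacy.ID == "<device uuid>", legacy.FirstTimeRun == true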
-var (
-lock sync.Mutex
-store Store
-)
-func GetID() string {
-lock.Lock()
-defer lock.Unlock()
-if store.ID == "" {
-initStore()
-}
-return store.ID
-}
-func GetFirstTimeRun() bool {
-lock.Lock()
-defer lock.Unlock()
-if store.ID == "" {
-initStore()
-}
-return store.FirstTimeRun
-}
-func SetFirstTimeRun(val bool) {
-lock.Lock()
-defer lock.Unlock()
-if store.FirstTimeRun == val {
-return
-}
-store.FirstTimeRun = val
-writeStore(getStorePath())
-}
-// lock must be held
-func initStore() {
-storeFile, err := os.Open(getStorePath())
-if err == nil {
-defer storeFile.Close()
-err = json.NewDecoder(storeFile).Decode(&store)
-if err == nil {
-slog.Debug(fmt.Sprintf("loaded existing store %s - ID: %s", getStorePath(), store.ID))
-return
-}
-} else if !errors.Is(err, os.ErrNotExist) {
-slog.Debug(fmt.Sprintf("unexpected error searching for store: %s", err))
-}
-slog.Debug("initializing new store")
-store.ID = uuid.NewString()
-writeStore(getStorePath())
-}
-func writeStore(storeFilename string) {
-ollamaDir := filepath.Dir(storeFilename)
-_, err := os.Stat(ollamaDir)
-if errors.Is(err, os.ErrNotExist) {
-if err := os.MkdirAll(ollamaDir, 0o755); err != nil {
-slog.Error(fmt.Sprintf("create ollama dir %s: %v", ollamaDir, err))
-return
-}
-}
-payload, err := json.Marshal(store)
-if err != nil {
-slog.Error(fmt.Sprintf("failed to marshal store: %s", err))
-return
-}
-fp, err := os.OpenFile(storeFilename, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o755)
-if err != nil {
-slog.Error(fmt.Sprintf("write store payload %s: %v", storeFilename, err))
-return
-}
-defer fp.Close()
-if n, err := fp.Write(payload); err != nil || n != len(payload) {
-slog.Error(fmt.Sprintf("write store payload %s: %d vs %d -- %v", storeFilename, n, len(payload), err))
-return
-}
-slog.Debug("Store contents: " + string(payload))
-slog.Info(fmt.Sprintf("wrote store: %s", storeFilename))
-}
func (s *Store) ensureDB() error {
// Fast path: check if db is already initialized
if s.db != nil {
return nil
}
// Slow path: initialize database with lock
s.dbMu.Lock()
defer s.dbMu.Unlock()
// Double-check after acquiring lock
if s.db != nil {
return nil
}
dbPath := s.DBPath
if dbPath == "" {
dbPath = defaultDBPath
}
// Ensure directory exists
if err := os.MkdirAll(filepath.Dir(dbPath), 0o755); err != nil {
return fmt.Errorf("create db directory: %w", err)
}
database, err := newDatabase(dbPath)
if err != nil {
return fmt.Errorf("open database: %w", err)
}
// Generate device ID if needed
id, err := database.getID()
if err != nil || id == "" {
// Generate new UUID for device
u, err := uuid.NewV7()
if err == nil {
database.setID(u.String())
}
}
s.db = database
// Check if we need to migrate from config.json
migrated, err := database.isConfigMigrated()
if err != nil || !migrated {
if err := s.migrateFromConfig(database); err != nil {
slog.Warn("failed to migrate from config.json", "error", err)
}
}
return nil
}
// migrateFromConfig attempts to migrate ID and FirstTimeRun from config.json
func (s *Store) migrateFromConfig(database *database) error {
configPath := legacyConfigPath
// Check if config.json exists
if _, err := os.Stat(configPath); os.IsNotExist(err) {
// No config to migrate, mark as migrated
return database.setConfigMigrated(true)
}
// Read the config file
b, err := os.ReadFile(configPath)
if err != nil {
return fmt.Errorf("read legacy config: %w", err)
}
var legacy legacyData
if err := json.Unmarshal(b, &legacy); err != nil {
// If we can't parse it, just mark as migrated and move on
slog.Warn("failed to parse legacy config.json", "error", err)
return database.setConfigMigrated(true)
}
// Migrate the ID if present
if legacy.ID != "" {
if err := database.setID(legacy.ID); err != nil {
return fmt.Errorf("migrate device ID: %w", err)
}
slog.Info("migrated device ID from config.json")
}
hasCompleted := legacy.FirstTimeRun // If old FirstTimeRun is true, it means first run was completed
if err := database.setHasCompletedFirstRun(hasCompleted); err != nil {
return fmt.Errorf("migrate first time run: %w", err)
}
slog.Info("migrated first run status from config.json", "hasCompleted", hasCompleted)
// Mark as migrated
if err := database.setConfigMigrated(true); err != nil {
return fmt.Errorf("mark config as migrated: %w", err)
}
slog.Info("successfully migrated settings from config.json")
return nil
}
func (s *Store) ID() (string, error) {
if err := s.ensureDB(); err != nil {
return "", err
}
return s.db.getID()
}
func (s *Store) HasCompletedFirstRun() (bool, error) {
if err := s.ensureDB(); err != nil {
return false, err
}
return s.db.getHasCompletedFirstRun()
}
func (s *Store) SetHasCompletedFirstRun(hasCompleted bool) error {
if err := s.ensureDB(); err != nil {
return err
}
return s.db.setHasCompletedFirstRun(hasCompleted)
}
func (s *Store) Settings() (Settings, error) {
if err := s.ensureDB(); err != nil {
return Settings{}, fmt.Errorf("load settings: %w", err)
}
settings, err := s.db.getSettings()
if err != nil {
return Settings{}, err
}
// Set default models directory if not set
if settings.Models == "" {
dir := os.Getenv("OLLAMA_MODELS")
if dir != "" {
settings.Models = dir
} else {
home, err := os.UserHomeDir()
if err == nil {
settings.Models = filepath.Join(home, ".ollama", "models")
}
}
}
return settings, nil
}
func (s *Store) SetSettings(settings Settings) error {
if err := s.ensureDB(); err != nil {
return err
}
return s.db.setSettings(settings)
}
func (s *Store) Chats() ([]Chat, error) {
if err := s.ensureDB(); err != nil {
return nil, err
}
return s.db.getAllChats()
}
func (s *Store) Chat(id string) (*Chat, error) {
return s.ChatWithOptions(id, true)
}
func (s *Store) ChatWithOptions(id string, loadAttachmentData bool) (*Chat, error) {
if err := s.ensureDB(); err != nil {
return nil, err
}
chat, err := s.db.getChatWithOptions(id, loadAttachmentData)
if err != nil {
return nil, fmt.Errorf("%w: chat %s", not.Found, id)
}
return chat, nil
}
func (s *Store) SetChat(chat Chat) error {
if err := s.ensureDB(); err != nil {
return err
}
return s.db.saveChat(chat)
}
func (s *Store) DeleteChat(id string) error {
if err := s.ensureDB(); err != nil {
return err
}
// Delete from database
if err := s.db.deleteChat(id); err != nil {
return fmt.Errorf("%w: chat %s", not.Found, id)
}
// Also delete associated images
chatImgDir := filepath.Join(s.ImgDir(), id)
if err := os.RemoveAll(chatImgDir); err != nil {
// Log error but don't fail the deletion
slog.Warn("failed to delete chat images", "chat_id", id, "error", err)
}
return nil
}
func (s *Store) WindowSize() (int, int, error) {
if err := s.ensureDB(); err != nil {
return 0, 0, err
}
return s.db.getWindowSize()
}
func (s *Store) SetWindowSize(width, height int) error {
if err := s.ensureDB(); err != nil {
return err
}
return s.db.setWindowSize(width, height)
}
func (s *Store) UpdateLastMessage(chatID string, message Message) error {
if err := s.ensureDB(); err != nil {
return err
}
return s.db.updateLastMessage(chatID, message)
}
func (s *Store) AppendMessage(chatID string, message Message) error {
if err := s.ensureDB(); err != nil {
return err
}
return s.db.appendMessage(chatID, message)
}
func (s *Store) UpdateChatBrowserState(chatID string, state json.RawMessage) error {
if err := s.ensureDB(); err != nil {
return err
}
return s.db.updateChatBrowserState(chatID, state)
}
func (s *Store) User() (*User, error) {
if err := s.ensureDB(); err != nil {
return nil, err
}
return s.db.getUser()
}
func (s *Store) SetUser(user User) error {
if err := s.ensureDB(); err != nil {
return err
}
user.CachedAt = time.Now()
return s.db.setUser(user)
}
func (s *Store) ClearUser() error {
if err := s.ensureDB(); err != nil {
return err
}
return s.db.clearUser()
}
func (s *Store) Close() error {
s.dbMu.Lock()
defer s.dbMu.Unlock()
if s.db != nil {
return s.db.Close()
}
return nil
}
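// Illustrative usage sketch (not part of this change): the store initializes
// lazily, so the first accessor opens (and if needed migrates) the database:
//
// s := &Store{} // empty DBPath falls back to defaultDBPath
// defer s.Close()
// id, err := s.ID() // triggers ensureDB: mkdir, open sqlite, config.json migration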


@@ -1,13 +0,0 @@
package store
import (
"os"
"path/filepath"
)
func getStorePath() string {
// TODO - system wide location?
home := os.Getenv("HOME")
return filepath.Join(home, "Library", "Application Support", "Ollama", "config.json")
}


@@ -1,16 +0,0 @@
package store
import (
"os"
"path/filepath"
)
func getStorePath() string {
if os.Geteuid() == 0 {
// TODO where should we store this on linux for system-wide operation?
return "/etc/ollama/config.json"
}
home := os.Getenv("HOME")
return filepath.Join(home, ".ollama", "config.json")
}

app/store/store_test.go Normal file (+192 lines)

@@ -0,0 +1,192 @@
//go:build windows || darwin
package store
import (
"path/filepath"
"testing"
)
func TestStore(t *testing.T) {
s, cleanup := setupTestStore(t)
defer cleanup()
t.Run("default id", func(t *testing.T) {
// ID should be automatically generated
id, err := s.ID()
if err != nil {
t.Fatal(err)
}
if id == "" {
t.Error("expected non-empty ID")
}
// Verify ID is persisted
id2, err := s.ID()
if err != nil {
t.Fatal(err)
}
if id != id2 {
t.Errorf("expected ID %s, got %s", id, id2)
}
})
t.Run("has completed first run", func(t *testing.T) {
// Default should be false (hasn't completed first run yet)
hasCompleted, err := s.HasCompletedFirstRun()
if err != nil {
t.Fatal(err)
}
if hasCompleted {
t.Error("expected has completed first run to be false by default")
}
if err := s.SetHasCompletedFirstRun(true); err != nil {
t.Fatal(err)
}
hasCompleted, err = s.HasCompletedFirstRun()
if err != nil {
t.Fatal(err)
}
if !hasCompleted {
t.Error("expected has completed first run to be true")
}
})
t.Run("settings", func(t *testing.T) {
sc := Settings{
Expose: true,
Browser: true,
Survey: true,
Models: "/tmp/models",
Agent: true,
Tools: false,
WorkingDir: "/tmp/work",
}
if err := s.SetSettings(sc); err != nil {
t.Fatal(err)
}
loaded, err := s.Settings()
if err != nil {
t.Fatal(err)
}
// Compare fields individually since Models might get a default
if loaded.Expose != sc.Expose || loaded.Browser != sc.Browser ||
loaded.Agent != sc.Agent || loaded.Survey != sc.Survey ||
loaded.Tools != sc.Tools || loaded.WorkingDir != sc.WorkingDir {
t.Errorf("expected %v, got %v", sc, loaded)
}
})
t.Run("window size", func(t *testing.T) {
if err := s.SetWindowSize(1024, 768); err != nil {
t.Fatal(err)
}
width, height, err := s.WindowSize()
if err != nil {
t.Fatal(err)
}
if width != 1024 || height != 768 {
t.Errorf("expected 1024x768, got %dx%d", width, height)
}
})
t.Run("create and retrieve chat", func(t *testing.T) {
chat := NewChat("test-chat-1")
chat.Title = "Test Chat"
chat.Messages = append(chat.Messages, NewMessage("user", "Hello", nil))
chat.Messages = append(chat.Messages, NewMessage("assistant", "Hi there!", &MessageOptions{
Model: "llama4",
}))
if err := s.SetChat(*chat); err != nil {
t.Fatalf("failed to save chat: %v", err)
}
retrieved, err := s.Chat("test-chat-1")
if err != nil {
t.Fatalf("failed to retrieve chat: %v", err)
}
if retrieved.ID != chat.ID {
t.Errorf("expected ID %s, got %s", chat.ID, retrieved.ID)
}
if retrieved.Title != chat.Title {
t.Errorf("expected title %s, got %s", chat.Title, retrieved.Title)
}
if len(retrieved.Messages) != 2 {
t.Fatalf("expected 2 messages, got %d", len(retrieved.Messages))
}
if retrieved.Messages[0].Content != "Hello" {
t.Errorf("expected first message 'Hello', got %s", retrieved.Messages[0].Content)
}
if retrieved.Messages[1].Content != "Hi there!" {
t.Errorf("expected second message 'Hi there!', got %s", retrieved.Messages[1].Content)
}
})
t.Run("list chats", func(t *testing.T) {
chat2 := NewChat("test-chat-2")
chat2.Title = "Another Chat"
chat2.Messages = append(chat2.Messages, NewMessage("user", "Test", nil))
if err := s.SetChat(*chat2); err != nil {
t.Fatalf("failed to save chat: %v", err)
}
chats, err := s.Chats()
if err != nil {
t.Fatalf("failed to list chats: %v", err)
}
if len(chats) != 2 {
t.Fatalf("expected 2 chats, got %d", len(chats))
}
})
t.Run("delete chat", func(t *testing.T) {
if err := s.DeleteChat("test-chat-1"); err != nil {
t.Fatalf("failed to delete chat: %v", err)
}
// Verify it's gone
_, err := s.Chat("test-chat-1")
if err == nil {
t.Error("expected error retrieving deleted chat")
}
// Verify other chat still exists
chats, err := s.Chats()
if err != nil {
t.Fatalf("failed to list chats: %v", err)
}
if len(chats) != 1 {
t.Fatalf("expected 1 chat after deletion, got %d", len(chats))
}
})
}
// setupTestStore creates a temporary store for testing
func setupTestStore(t *testing.T) (*Store, func()) {
t.Helper()
tmpDir := t.TempDir()
// Override legacy config path to ensure no migration happens
oldLegacyConfigPath := legacyConfigPath
legacyConfigPath = filepath.Join(tmpDir, "config.json")
s := &Store{DBPath: filepath.Join(tmpDir, "db.sqlite")}
cleanup := func() {
s.Close()
legacyConfigPath = oldLegacyConfigPath
}
return s, cleanup
}


@@ -1,11 +0,0 @@
package store
import (
"os"
"path/filepath"
)
func getStorePath() string {
localAppData := os.Getenv("LOCALAPPDATA")
return filepath.Join(localAppData, "Ollama", "config.json")
}

app/store/testdata/schema.sql vendored Normal file (+61 lines)

@@ -0,0 +1,61 @@
-- This is the version 2 schema for the app database, the first released schema to users.
-- Do not modify this file. It is used to test that the database schema stays in a consistent state between schema migrations.
CREATE TABLE IF NOT EXISTS settings (
id INTEGER PRIMARY KEY CHECK (id = 1),
device_id TEXT NOT NULL DEFAULT '',
has_completed_first_run BOOLEAN NOT NULL DEFAULT 0,
expose BOOLEAN NOT NULL DEFAULT 0,
survey BOOLEAN NOT NULL DEFAULT TRUE,
browser BOOLEAN NOT NULL DEFAULT 0,
models TEXT NOT NULL DEFAULT '',
remote TEXT NOT NULL DEFAULT '',
agent BOOLEAN NOT NULL DEFAULT 0,
tools BOOLEAN NOT NULL DEFAULT 0,
working_dir TEXT NOT NULL DEFAULT '',
context_length INTEGER NOT NULL DEFAULT 4096,
window_width INTEGER NOT NULL DEFAULT 0,
window_height INTEGER NOT NULL DEFAULT 0,
config_migrated BOOLEAN NOT NULL DEFAULT 0,
schema_version INTEGER NOT NULL DEFAULT 2
);
-- Insert default settings row if it doesn't exist
INSERT OR IGNORE INTO settings (id) VALUES (1);
CREATE TABLE IF NOT EXISTS chats (
id TEXT PRIMARY KEY,
title TEXT NOT NULL DEFAULT '',
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
chat_id TEXT NOT NULL,
role TEXT NOT NULL,
content TEXT NOT NULL DEFAULT '',
thinking TEXT NOT NULL DEFAULT '',
stream BOOLEAN NOT NULL DEFAULT 0,
model_name TEXT,
model_cloud BOOLEAN,
model_ollama_host BOOLEAN,
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
thinking_time_start TIMESTAMP,
thinking_time_end TIMESTAMP,
FOREIGN KEY (chat_id) REFERENCES chats(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_messages_chat_id ON messages(chat_id);
CREATE TABLE IF NOT EXISTS tool_calls (
id INTEGER PRIMARY KEY AUTOINCREMENT,
message_id INTEGER NOT NULL,
type TEXT NOT NULL,
function_name TEXT NOT NULL,
function_arguments TEXT NOT NULL,
function_result TEXT,
FOREIGN KEY (message_id) REFERENCES messages(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_tool_calls_message_id ON tool_calls(message_id);

app/tools/browser.go Normal file (+863 lines)

@@ -0,0 +1,863 @@
//go:build windows || darwin
package tools
import (
"context"
"fmt"
"net/url"
"regexp"
"strings"
"sync"
"time"
"github.com/ollama/ollama/app/ui/responses"
)
type PageType string
const (
PageTypeSearchResults PageType = "initial_results"
PageTypeWebpage PageType = "webpage"
)
// DefaultViewTokens is the default number of tokens shown to the model when calling displayPage
const DefaultViewTokens = 1024
/*
The Browser tool provides web browsing capability for gpt-oss.
The model typically uses the tool by doing a search first, then choosing to either open a page,
find a term in a page, or run another search.
The tool may also open a URL directly - especially if one is passed in.
Each action is saved into an append-only page stack `responses.BrowserStateData` to keep
track of the history of the browsing session.
Each `Execute()` for a tool returns the full current state of the browser. ui.go manages the
browser state representation between the tool, ui, and db.
A new Browser object is created per request - the state is reconstructed by ui.go.
The browser is initialized with a `responses.BrowserStateData` carrying the stitched history.
*/
// BrowserState manages the browsing session on a per-chat basis
type BrowserState struct {
mu sync.RWMutex
Data *responses.BrowserStateData
}
type Browser struct {
state *BrowserState
}
// State is only accessed in a single thread, as each chat has its own browser state
func (b *Browser) State() *responses.BrowserStateData {
b.state.mu.RLock()
defer b.state.mu.RUnlock()
return b.state.Data
}
func (b *Browser) savePage(page *responses.Page) {
b.state.Data.URLToPage[page.URL] = page
b.state.Data.PageStack = append(b.state.Data.PageStack, page.URL)
}
func (b *Browser) getPageFromStack(url string) (*responses.Page, error) {
page, ok := b.state.Data.URLToPage[url]
if !ok {
return nil, fmt.Errorf("page not found for url %s", url)
}
return page, nil
}
func NewBrowser(state *responses.BrowserStateData) *Browser {
if state == nil {
state = &responses.BrowserStateData{
PageStack: []string{},
ViewTokens: DefaultViewTokens,
URLToPage: make(map[string]*responses.Page),
}
}
b := &BrowserState{
Data: state,
}
return &Browser{
state: b,
}
}
type BrowserSearch struct {
Browser
webSearch *BrowserWebSearch
}
// NewBrowserSearch creates a new browser search instance
func NewBrowserSearch(bb *Browser) *BrowserSearch {
if bb == nil {
bb = &Browser{
state: &BrowserState{
Data: &responses.BrowserStateData{
PageStack: []string{},
ViewTokens: DefaultViewTokens,
URLToPage: make(map[string]*responses.Page),
},
},
}
}
return &BrowserSearch{
Browser: *bb,
webSearch: &BrowserWebSearch{},
}
}
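// Illustrative usage sketch (not part of this change; assumes ctx is a
// context.Context and that BrowserWebSearch can reach the ollama.com search
// API): "query" is required, "topn" is optional and falls back to 5.
//
// b := NewBrowser(nil) // nil state starts an empty session
// search := NewBrowserSearch(b)
// state, pageText, err := search.Execute(ctx, map[string]any{"query": "golang sqlite"})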
func (b *BrowserSearch) Name() string {
return "browser.search"
}
func (b *BrowserSearch) Description() string {
return "Search the web for information"
}
func (b *BrowserSearch) Prompt() string {
return ""
}
func (b *BrowserSearch) Schema() map[string]any {
return map[string]any{}
}
func (b *BrowserSearch) Execute(ctx context.Context, args map[string]any) (any, string, error) {
query, ok := args["query"].(string)
if !ok {
return nil, "", fmt.Errorf("query parameter is required")
}
topn, ok := args["topn"].(int)
if !ok {
topn = 5
}
searchArgs := map[string]any{
"queries": []any{query},
"max_results": topn,
}
result, err := b.webSearch.Execute(ctx, searchArgs)
if err != nil {
return nil, "", fmt.Errorf("search error: %w", err)
}
searchResponse, ok := result.(*WebSearchResponse)
if !ok {
return nil, "", fmt.Errorf("invalid search results format")
}
// Build main search results page that contains all search results
searchResultsPage := b.buildSearchResultsPageCollection(query, searchResponse)
b.savePage(searchResultsPage)
cursor := len(b.state.Data.PageStack) - 1
// cache result for each page
for _, queryResults := range searchResponse.Results {
for i, result := range queryResults {
resultPage := b.buildSearchResultsPage(&result, i+1)
// save to global only, do not add to visited stack
b.state.Data.URLToPage[resultPage.URL] = resultPage
}
}
page := searchResultsPage
pageText, err := b.displayPage(page, cursor, 0, -1)
if err != nil {
return nil, "", fmt.Errorf("failed to display page: %w", err)
}
return b.state.Data, pageText, nil
}
func (b *Browser) buildSearchResultsPageCollection(query string, results *WebSearchResponse) *responses.Page {
page := &responses.Page{
URL: "search_results_" + query,
Title: query,
Links: make(map[int]string),
FetchedAt: time.Now(),
}
var textBuilder strings.Builder
linkIdx := 0
// Add the header lines to match format
textBuilder.WriteString("\n") // L0: empty
textBuilder.WriteString("URL: \n") // L1: URL: (empty for search)
textBuilder.WriteString("# Search Results\n") // L2: # Search Results
textBuilder.WriteString("\n") // L3: empty
for _, queryResults := range results.Results {
for _, result := range queryResults {
domain := result.URL
if u, err := url.Parse(result.URL); err == nil && u.Host != "" {
domain = u.Host
domain = strings.TrimPrefix(domain, "www.")
}
linkFormat := fmt.Sprintf("* 【%d†%s†%s】", linkIdx, result.Title, domain)
textBuilder.WriteString(linkFormat)
numChars := min(len(result.Content.FullText), 400)
snippet := strings.TrimSpace(result.Content.FullText[:numChars])
textBuilder.WriteString(snippet)
textBuilder.WriteString("\n")
page.Links[linkIdx] = result.URL
linkIdx++
}
}
page.Text = textBuilder.String()
page.Lines = wrapLines(page.Text, 80)
return page
}
func (b *Browser) buildSearchResultsPage(result *WebSearchResult, linkIdx int) *responses.Page {
page := &responses.Page{
URL: result.URL,
Title: result.Title,
Links: make(map[int]string),
FetchedAt: time.Now(),
}
var textBuilder strings.Builder
// Format the individual result page (only used when no full text is available)
linkFormat := fmt.Sprintf("【%d†%s】", linkIdx, result.Title)
textBuilder.WriteString(linkFormat)
textBuilder.WriteString("\n")
textBuilder.WriteString(fmt.Sprintf("URL: %s\n", result.URL))
numChars := min(len(result.Content.FullText), 300)
textBuilder.WriteString(result.Content.FullText[:numChars])
textBuilder.WriteString("\n\n")
// Only store link and snippet if we won't be processing full text later
// (full text processing will handle all links consistently)
if result.Content.FullText == "" {
page.Links[linkIdx] = result.URL
}
// Use full text if available, otherwise use snippet
if result.Content.FullText != "" {
// Prepend the URL line to the full text
page.Text = fmt.Sprintf("URL: %s\n%s", result.URL, result.Content.FullText)
// Process markdown links in the full text
processedText, processedLinks := processMarkdownLinks(page.Text)
page.Text = processedText
page.Links = processedLinks
} else {
page.Text = textBuilder.String()
}
page.Lines = wrapLines(page.Text, 80)
return page
}
// getEndLoc calculates the end location for viewport based on token limits
func (b *Browser) getEndLoc(loc, numLines, totalLines int, lines []string) int {
if numLines <= 0 {
// Auto-calculate based on viewTokens
txt := b.joinLinesWithNumbers(lines[loc:])
// If text is very short, no need to truncate (at least 1 char per token)
if len(txt) > b.state.Data.ViewTokens {
// Simple heuristic: approximate token counting
// Typical token is ~4 characters, but can be up to 128 chars
maxCharsPerToken := 128
// upper bound for text to analyze
upperBound := min((b.state.Data.ViewTokens+1)*maxCharsPerToken, len(txt))
textToAnalyze := txt[:upperBound]
// Simple approximation: count tokens as ~4 chars each
// This is less accurate than tiktoken but more performant
approxTokens := len(textToAnalyze) / 4
if approxTokens > b.state.Data.ViewTokens {
// Find the character position at viewTokens
endIdx := min(b.state.Data.ViewTokens*4, len(txt))
// Count newlines up to that position to get line count
numLines = strings.Count(txt[:endIdx], "\n") + 1
} else {
numLines = totalLines
}
} else {
numLines = totalLines
}
}
return min(loc+numLines, totalLines)
}
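// Worked example of the heuristic above (not from the source): with
// ViewTokens = 1024 the budget is roughly 1024*4 = 4096 characters. For a
// page of 1,000 lines of 10 characters plus a newline (11,000 chars total),
// endIdx = min(1024*4, 11000) = 4096, so
// numLines = strings.Count(txt[:4096], "\n") + 1 = 373 lines are shown.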
// joinLinesWithNumbers creates a string with line numbers, matching Python's join_lines
func (b *Browser) joinLinesWithNumbers(lines []string) string {
var builder strings.Builder
var hadZeroLine bool
for i, line := range lines {
if i == 0 {
builder.WriteString("L0:\n")
hadZeroLine = true
}
if hadZeroLine {
builder.WriteString(fmt.Sprintf("L%d: %s\n", i+1, line))
} else {
builder.WriteString(fmt.Sprintf("L%d: %s\n", i, line))
}
}
return builder.String()
}
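// Worked example (not from the source): the synthetic "L0:" header shifts
// every real line up by one, so
//
// joinLinesWithNumbers([]string{"alpha", "beta"})
//
// returns "L0:\nL1: alpha\nL2: beta\n" - lines are cited as L1/L2, not L0/L1.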
// processMarkdownLinks finds all markdown links in the text and replaces them with the special format
// Returns the processed text and a map of link IDs to URLs
func processMarkdownLinks(text string) (string, map[int]string) {
links := make(map[int]string)
// Always start from 0 for consistent numbering across all pages
linkID := 0
// First, handle multi-line markdown links by joining them
// This regex finds markdown links that might be split across lines
multiLinePattern := regexp.MustCompile(`\[([^\]]+)\]\s*\n\s*\(([^)]+)\)`)
text = multiLinePattern.ReplaceAllStringFunc(text, func(match string) string {
// Replace newlines with spaces in the match
cleaned := strings.ReplaceAll(match, "\n", " ")
// Remove extra spaces
cleaned = regexp.MustCompile(`\s+`).ReplaceAllString(cleaned, " ")
return cleaned
})
// Now process all markdown links (including the cleaned multi-line ones)
linkPattern := regexp.MustCompile(`\[([^\]]+)\]\(([^)]+)\)`)
processedText := linkPattern.ReplaceAllStringFunc(text, func(match string) string {
matches := linkPattern.FindStringSubmatch(match)
if len(matches) != 3 {
return match
}
linkText := strings.TrimSpace(matches[1])
linkURL := strings.TrimSpace(matches[2])
// Extract domain from URL
domain := linkURL
if u, err := url.Parse(linkURL); err == nil && u.Host != "" {
domain = u.Host
// Remove www. prefix if present
domain = strings.TrimPrefix(domain, "www.")
}
// Create the formatted link
formatted := fmt.Sprintf("【%d†%s†%s】", linkID, linkText, domain)
// Store the link
links[linkID] = linkURL
linkID++
return formatted
})
return processedText, links
}
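// Worked example (not from the source): a markdown link is rewritten into
// the 【id†text†domain】 citation form and recorded by id:
//
// text, links := processMarkdownLinks("see [Ollama](https://www.ollama.com/docs) for details")
// // text == "see 【0†Ollama†ollama.com】 for details"
// // links[0] == "https://www.ollama.com/docs"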
func wrapLines(text string, width int) []string {
if width <= 0 {
width = 80
}
lines := strings.Split(text, "\n")
var wrapped []string
for _, line := range lines {
if line == "" {
// Preserve empty lines
wrapped = append(wrapped, "")
} else if len(line) <= width {
wrapped = append(wrapped, line)
} else {
// Word wrapping while preserving whitespace structure
words := strings.Fields(line)
if len(words) == 0 {
// Line with only whitespace
wrapped = append(wrapped, line)
continue
}
currentLine := ""
for _, word := range words {
// Check if adding this word would exceed width
testLine := currentLine
if testLine != "" {
testLine += " "
}
testLine += word
if len(testLine) > width && currentLine != "" {
// Current line would be too long, wrap it
wrapped = append(wrapped, currentLine)
currentLine = word
} else {
// Add word to current line
if currentLine != "" {
currentLine += " "
}
currentLine += word
}
}
// Add any remaining content
if currentLine != "" {
wrapped = append(wrapped, currentLine)
}
}
}
return wrapped
}
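// Worked example (not from the source): empty lines are preserved and long
// lines wrap on word boundaries at the given width:
//
// wrapLines("first line\n\nhello world again", 11)
// // -> []string{"first line", "", "hello world", "again"}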
// displayPage formats and returns the page display for the model
func (b *Browser) displayPage(page *responses.Page, cursor, loc, numLines int) (string, error) {
totalLines := len(page.Lines)
if loc >= totalLines {
return "", fmt.Errorf("invalid location: %d (max: %d)", loc, totalLines-1)
}
// get viewport end location
endLoc := b.getEndLoc(loc, numLines, totalLines, page.Lines)
var displayBuilder strings.Builder
displayBuilder.WriteString(fmt.Sprintf("[%d] %s", cursor, page.Title))
if page.URL != "" {
displayBuilder.WriteString(fmt.Sprintf("(%s)\n", page.URL))
} else {
displayBuilder.WriteString("\n")
}
displayBuilder.WriteString(fmt.Sprintf("**viewing lines [%d - %d] of %d**\n\n", loc, endLoc-1, totalLines-1))
// Content with line numbers
var hadZeroLine bool
for i := loc; i < endLoc; i++ {
if i == 0 {
displayBuilder.WriteString("L0:\n")
hadZeroLine = true
}
if hadZeroLine {
displayBuilder.WriteString(fmt.Sprintf("L%d: %s\n", i+1, page.Lines[i]))
} else {
displayBuilder.WriteString(fmt.Sprintf("L%d: %s\n", i, page.Lines[i]))
}
}
return displayBuilder.String(), nil
}
type BrowserOpen struct {
Browser
crawlPage *BrowserCrawler
}
func NewBrowserOpen(bb *Browser) *BrowserOpen {
if bb == nil {
bb = &Browser{
state: &BrowserState{
Data: &responses.BrowserStateData{
PageStack: []string{},
ViewTokens: DefaultViewTokens,
URLToPage: make(map[string]*responses.Page),
},
},
}
}
return &BrowserOpen{
Browser: *bb,
crawlPage: &BrowserCrawler{},
}
}
func (b *BrowserOpen) Name() string {
return "browser.open"
}
func (b *BrowserOpen) Description() string {
return "Open a link in the browser"
}
func (b *BrowserOpen) Prompt() string {
return ""
}
func (b *BrowserOpen) Schema() map[string]any {
return map[string]any{}
}
func (b *BrowserOpen) Execute(ctx context.Context, args map[string]any) (any, string, error) {
// Get cursor parameter first
cursor := -1
if c, ok := args["cursor"].(float64); ok {
cursor = int(c)
} else if c, ok := args["cursor"].(int); ok {
cursor = c
}
// Get loc parameter
loc := 0
if l, ok := args["loc"].(float64); ok {
loc = int(l)
} else if l, ok := args["loc"].(int); ok {
loc = l
}
// Get num_lines parameter
numLines := -1
if n, ok := args["num_lines"].(float64); ok {
numLines = int(n)
} else if n, ok := args["num_lines"].(int); ok {
numLines = n
}
// get page from cursor
var page *responses.Page
if cursor >= 0 {
if cursor >= len(b.state.Data.PageStack) {
return nil, "", fmt.Errorf("cursor %d is out of range (pageStack length: %d)", cursor, len(b.state.Data.PageStack))
}
var err error
page, err = b.getPageFromStack(b.state.Data.PageStack[cursor])
if err != nil {
return nil, "", fmt.Errorf("page not found for cursor %d: %w", cursor, err)
}
} else {
// get last page
if len(b.state.Data.PageStack) != 0 {
pageURL := b.state.Data.PageStack[len(b.state.Data.PageStack)-1]
var err error
page, err = b.getPageFromStack(pageURL)
if err != nil {
return nil, "", fmt.Errorf("page not found for cursor %d: %w", cursor, err)
}
}
}
// Try to get id as string (URL) first
if url, ok := args["id"].(string); ok {
// Check if we already have this page cached
if existingPage, ok := b.state.Data.URLToPage[url]; ok {
// Use cached page
b.savePage(existingPage)
// Always update cursor to point to the newly added page
cursor = len(b.state.Data.PageStack) - 1
pageText, err := b.displayPage(existingPage, cursor, loc, numLines)
if err != nil {
return nil, "", fmt.Errorf("failed to display page: %w", err)
}
return b.state.Data, pageText, nil
}
// Page not in cache, need to crawl it
if b.crawlPage == nil {
b.crawlPage = &BrowserCrawler{}
}
crawlResponse, err := b.crawlPage.Execute(ctx, map[string]any{
"urls": []any{url},
"latest": false,
})
if err != nil {
return nil, "", fmt.Errorf("failed to crawl URL %s: %w", url, err)
}
newPage, err := b.buildPageFromCrawlResult(url, crawlResponse)
if err != nil {
return nil, "", fmt.Errorf("failed to build page from crawl result: %w", err)
}
// Also covers the case where the first action is a direct open command and no cached page exists yet
b.savePage(newPage)
// Always update cursor to point to the newly added page
cursor = len(b.state.Data.PageStack) - 1
pageText, err := b.displayPage(newPage, cursor, loc, numLines)
if err != nil {
return nil, "", fmt.Errorf("failed to display page: %w", err)
}
return b.state.Data, pageText, nil
}
// Try to get id as integer (link ID from current page)
if id, ok := args["id"].(float64); ok {
if page == nil {
return nil, "", fmt.Errorf("no current page to resolve link from")
}
idInt := int(id)
pageURL, ok := page.Links[idInt]
if !ok {
return nil, "", fmt.Errorf("invalid link id %d", idInt)
}
// Check if we have the linked page cached
newPage, ok := b.state.Data.URLToPage[pageURL]
if !ok {
if b.crawlPage == nil {
b.crawlPage = &BrowserCrawler{}
}
crawlResponse, err := b.crawlPage.Execute(ctx, map[string]any{
"urls": []any{pageURL},
"latest": false,
})
if err != nil {
return nil, "", fmt.Errorf("failed to crawl URL %s: %w", pageURL, err)
}
// Create new page from crawl result
newPage, err = b.buildPageFromCrawlResult(pageURL, crawlResponse)
if err != nil {
return nil, "", fmt.Errorf("failed to build page from crawl result: %w", err)
}
}
// Add to history stack regardless of cache status
b.savePage(newPage)
// Always update cursor to point to the newly added page
cursor = len(b.state.Data.PageStack) - 1
pageText, err := b.displayPage(newPage, cursor, loc, numLines)
if err != nil {
return nil, "", fmt.Errorf("failed to display page: %w", err)
}
return b.state.Data, pageText, nil
}
// If no id provided, just display current page
if page == nil {
return nil, "", fmt.Errorf("no current page to display")
}
// Only add to PageStack without updating URLToPage
b.state.Data.PageStack = append(b.state.Data.PageStack, page.URL)
cursor = len(b.state.Data.PageStack) - 1
pageText, err := b.displayPage(page, cursor, loc, numLines)
if err != nil {
return nil, "", fmt.Errorf("failed to display page: %w", err)
}
return b.state.Data, pageText, nil
}
// buildPageFromCrawlResult creates a Page from crawl API results
func (b *Browser) buildPageFromCrawlResult(requestedURL string, crawlResponse *CrawlResponse) (*responses.Page, error) {
// Initialize page with defaults
page := &responses.Page{
URL: requestedURL,
Title: requestedURL,
Text: "",
Links: make(map[int]string),
FetchedAt: time.Now(),
}
// Process crawl results - the API returns results grouped by URL
for url, urlResults := range crawlResponse.Results {
if len(urlResults) > 0 {
// Get the first result for this URL
result := urlResults[0]
// Extract content
if result.Content.FullText != "" {
page.Text = result.Content.FullText
}
// Extract title if available
if result.Title != "" {
page.Title = result.Title
}
// Update URL to the actual URL from results
page.URL = url
// Extract links if available from extras
for i, link := range result.Extras.Links {
if link.Href != "" {
page.Links[i] = link.Href
} else if link.URL != "" {
page.Links[i] = link.URL
}
}
// Only process the first URL's results
break
}
}
// If no text was extracted, set a default message
if page.Text == "" {
page.Text = "No content could be extracted from this page."
} else {
// Prepend the URL line to match Python implementation
page.Text = fmt.Sprintf("URL: %s\n%s", page.URL, page.Text)
}
// Process markdown links in the text
processedText, processedLinks := processMarkdownLinks(page.Text)
page.Text = processedText
page.Links = processedLinks
// Wrap lines for display
page.Lines = wrapLines(page.Text, 80)
return page, nil
}
type BrowserFind struct {
Browser
}
func NewBrowserFind(bb *Browser) *BrowserFind {
return &BrowserFind{
Browser: *bb,
}
}
func (b *BrowserFind) Name() string {
return "browser.find"
}
func (b *BrowserFind) Description() string {
return "Find a term in the browser"
}
func (b *BrowserFind) Prompt() string {
return ""
}
func (b *BrowserFind) Schema() map[string]any {
return map[string]any{}
}
func (b *BrowserFind) Execute(ctx context.Context, args map[string]any) (any, string, error) {
pattern, ok := args["pattern"].(string)
if !ok {
return nil, "", fmt.Errorf("pattern parameter is required")
}
// Get cursor parameter if provided, default to current page
cursor := -1
if c, ok := args["cursor"].(float64); ok {
cursor = int(c)
}
// Get the page to search in
var page *responses.Page
if cursor == -1 {
// Use current page
if len(b.state.Data.PageStack) == 0 {
return nil, "", fmt.Errorf("no pages to search in")
}
var err error
page, err = b.getPageFromStack(b.state.Data.PageStack[len(b.state.Data.PageStack)-1])
if err != nil {
return nil, "", fmt.Errorf("page not found for cursor %d: %w", cursor, err)
}
} else {
// Use specific cursor
if cursor < 0 || cursor >= len(b.state.Data.PageStack) {
return nil, "", fmt.Errorf("cursor %d is out of range [0-%d]", cursor, len(b.state.Data.PageStack)-1)
}
var err error
page, err = b.getPageFromStack(b.state.Data.PageStack[cursor])
if err != nil {
return nil, "", fmt.Errorf("page not found for cursor %d: %w", cursor, err)
}
}
if page == nil {
return nil, "", fmt.Errorf("page not found")
}
// Create find results page
findPage := b.buildFindResultsPage(pattern, page)
// Add the find results page to state
b.savePage(findPage)
newCursor := len(b.state.Data.PageStack) - 1
pageText, err := b.displayPage(findPage, newCursor, 0, -1)
if err != nil {
return nil, "", fmt.Errorf("failed to display page: %w", err)
}
return b.state.Data, pageText, nil
}
func (b *Browser) buildFindResultsPage(pattern string, page *responses.Page) *responses.Page {
findPage := &responses.Page{
Title: fmt.Sprintf("Find results for text: `%s` in `%s`", pattern, page.Title),
Links: make(map[int]string),
FetchedAt: time.Now(),
}
findPage.URL = fmt.Sprintf("find_results_%s", pattern)
var textBuilder strings.Builder
matchIdx := 0
maxResults := 50
numShowLines := 4
patternLower := strings.ToLower(pattern)
// Search through the page lines following the reference algorithm
var resultChunks []string
lineIdx := 0
for lineIdx < len(page.Lines) {
line := page.Lines[lineIdx]
lineLower := strings.ToLower(line)
if !strings.Contains(lineLower, patternLower) {
lineIdx++
continue
}
// Build snippet context
endLine := min(lineIdx+numShowLines, len(page.Lines))
var snippetBuilder strings.Builder
for j := lineIdx; j < endLine; j++ {
snippetBuilder.WriteString(page.Lines[j])
if j < endLine-1 {
snippetBuilder.WriteString("\n")
}
}
snippet := snippetBuilder.String()
// Format the match
linkFormat := fmt.Sprintf("【%d†match at L%d】", matchIdx, lineIdx)
resultChunk := fmt.Sprintf("%s\n%s", linkFormat, snippet)
resultChunks = append(resultChunks, resultChunk)
if len(resultChunks) >= maxResults {
break
}
matchIdx++
lineIdx += numShowLines
}
// Build final display text
if len(resultChunks) > 0 {
textBuilder.WriteString(strings.Join(resultChunks, "\n\n"))
}
if matchIdx == 0 {
findPage.Text = fmt.Sprintf("No `find` results for pattern: `%s`", pattern)
} else {
findPage.Text = textBuilder.String()
}
findPage.Lines = wrapLines(findPage.Text, 80)
return findPage
}

app/tools/browser_crawl.go Normal file (+136 lines)

@@ -0,0 +1,136 @@
//go:build windows || darwin
package tools
import (
"context"
"encoding/json"
"fmt"
)
// CrawlContent represents the content of a crawled page
type CrawlContent struct {
Snippet string `json:"snippet"`
FullText string `json:"full_text"`
}
// CrawlExtras represents additional data from the crawl API
type CrawlExtras struct {
Links []CrawlLink `json:"links"`
}
// CrawlLink represents a link found on a crawled page
type CrawlLink struct {
URL string `json:"url"`
Href string `json:"href"`
Text string `json:"text"`
}
// CrawlResult represents a single crawl result
type CrawlResult struct {
Title string `json:"title"`
URL string `json:"url"`
Content CrawlContent `json:"content"`
Extras CrawlExtras `json:"extras"`
}
// CrawlResponse represents the complete response from the crawl API
type CrawlResponse struct {
Results map[string][]CrawlResult `json:"results"`
}
// BrowserCrawler tool for crawling web pages using ollama.com crawl API
type BrowserCrawler struct{}
func (g *BrowserCrawler) Name() string {
return "get_webpage"
}
func (g *BrowserCrawler) Description() string {
return "Crawl and extract text content from web pages"
}
func (g *BrowserCrawler) Prompt() string {
return `When you need to read content from web pages, use the get_webpage tool. Simply provide the URLs you want to read and I'll fetch their content for you.
For each URL, I'll extract the main text content in a readable format. If you need to discover links within those pages, set extract_links to true. If the user requires the latest information, set livecrawl to true.
Only use this tool when you need to access current web content. Make sure the URLs are valid and accessible. Do not use this tool for:
- Downloading files or media
- Accessing private/authenticated pages
- Scraping data at high volumes
Always check the returned content to ensure it's relevant before using it in your response.`
}
func (g *BrowserCrawler) Schema() map[string]any {
schemaBytes := []byte(`{
"type": "object",
"properties": {
"urls": {
"type": "array",
"items": {
"type": "string"
},
"description": "List of URLs to crawl and extract content from"
}
},
"required": ["urls"]
}`)
var schema map[string]any
if err := json.Unmarshal(schemaBytes, &schema); err != nil {
return nil
}
return schema
}
func (g *BrowserCrawler) Execute(ctx context.Context, args map[string]any) (*CrawlResponse, error) {
urlsRaw, ok := args["urls"].([]any)
if !ok {
return nil, fmt.Errorf("urls parameter is required and must be an array of strings")
}
urls := make([]string, 0, len(urlsRaw))
for _, u := range urlsRaw {
if urlStr, ok := u.(string); ok {
urls = append(urls, urlStr)
}
}
if len(urls) == 0 {
return nil, fmt.Errorf("at least one URL is required")
}
return g.performWebCrawl(ctx, urls)
}
// performWebCrawl handles the actual HTTP request to ollama.com crawl API
func (g *BrowserCrawler) performWebCrawl(ctx context.Context, urls []string) (*CrawlResponse, error) {
result := &CrawlResponse{Results: make(map[string][]CrawlResult, len(urls))}
for _, targetURL := range urls {
fetchResp, err := performWebFetch(ctx, targetURL)
if err != nil {
return nil, fmt.Errorf("web_fetch failed for %q: %w", targetURL, err)
}
links := make([]CrawlLink, 0, len(fetchResp.Links))
for _, link := range fetchResp.Links {
links = append(links, CrawlLink{URL: link, Href: link})
}
snippet := truncateString(fetchResp.Content, 400)
result.Results[targetURL] = []CrawlResult{{
Title: fetchResp.Title,
URL: targetURL,
Content: CrawlContent{
Snippet: snippet,
FullText: fetchResp.Content,
},
Extras: CrawlExtras{Links: links},
}}
}
return result, nil
}
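// Illustrative usage sketch (not part of this change; assumes ctx is a
// context.Context and that performWebFetch can reach the target URL):
//
// crawler := &BrowserCrawler{}
// resp, err := crawler.Execute(ctx, map[string]any{"urls": []any{"https://example.com"}})
// // resp.Results["https://example.com"][0].Content.FullText holds the page text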

app/tools/browser_test.go Normal file (+147 lines)

@@ -0,0 +1,147 @@
//go:build windows || darwin
package tools
import (
"strings"
"testing"
"time"
"github.com/ollama/ollama/app/ui/responses"
)
func makeTestPage(url string) *responses.Page {
return &responses.Page{
URL: url,
Title: "Title " + url,
Text: "Body for " + url,
Lines: []string{"line1", "line2", "line3"},
Links: map[int]string{0: url},
FetchedAt: time.Now(),
}
}
func TestBrowser_Scroll_AppendsOnlyPageStack(t *testing.T) {
b := NewBrowser(&responses.BrowserStateData{PageStack: []string{}, ViewTokens: 1024, URLToPage: map[string]*responses.Page{}})
p1 := makeTestPage("https://example.com/1")
b.savePage(p1)
initialStackLen := len(b.state.Data.PageStack)
initialMapLen := len(b.state.Data.URLToPage)
bo := NewBrowserOpen(b)
// Scroll without id — should push only to PageStack
_, _, err := bo.Execute(t.Context(), map[string]any{"loc": float64(1), "num_lines": float64(1)})
if err != nil {
t.Fatalf("scroll execute failed: %v", err)
}
if got, want := len(b.state.Data.PageStack), initialStackLen+1; got != want {
t.Fatalf("page stack length = %d, want %d", got, want)
}
if got, want := len(b.state.Data.URLToPage), initialMapLen; got != want {
t.Fatalf("url_to_page length changed = %d, want %d", got, want)
}
}
func TestBrowserOpen_UseCacheByURL(t *testing.T) {
b := NewBrowser(&responses.BrowserStateData{PageStack: []string{}, ViewTokens: 1024, URLToPage: map[string]*responses.Page{}})
bo := NewBrowserOpen(b)
p := makeTestPage("https://example.com/cached")
b.state.Data.URLToPage[p.URL] = p
initialStackLen := len(b.state.Data.PageStack)
initialMapLen := len(b.state.Data.URLToPage)
_, _, err := bo.Execute(t.Context(), map[string]any{"id": p.URL})
if err != nil {
t.Fatalf("open cached execute failed: %v", err)
}
if got, want := len(b.state.Data.PageStack), initialStackLen+1; got != want {
t.Fatalf("page stack length = %d, want %d", got, want)
}
if got, want := len(b.state.Data.URLToPage), initialMapLen; got != want {
t.Fatalf("url_to_page length changed = %d, want %d", got, want)
}
}
func TestDisplayPage_InvalidLoc(t *testing.T) {
b := NewBrowser(&responses.BrowserStateData{PageStack: []string{}, ViewTokens: 1024, URLToPage: map[string]*responses.Page{}})
p := makeTestPage("https://example.com/x")
// ensure lines are set
p.Lines = []string{"a", "b"}
_, err := b.displayPage(p, 0, 10, -1)
if err == nil || !strings.Contains(err.Error(), "invalid location") {
t.Fatalf("expected invalid location error, got %v", err)
}
}
func TestBrowserOpen_LinkId_UsesCacheAndAppends(t *testing.T) {
b := NewBrowser(&responses.BrowserStateData{PageStack: []string{}, ViewTokens: 1024, URLToPage: map[string]*responses.Page{}})
// Seed a main page with a link id 0 to a linked URL
main := makeTestPage("https://example.com/main")
linked := makeTestPage("https://example.com/linked")
main.Links = map[int]string{0: linked.URL}
// Save the main page (adds to PageStack and URLToPage)
b.savePage(main)
// Pre-cache the linked page so open by id avoids network
b.state.Data.URLToPage[linked.URL] = linked
initialStackLen := len(b.state.Data.PageStack)
initialMapLen := len(b.state.Data.URLToPage)
bo := NewBrowserOpen(b)
_, _, err := bo.Execute(t.Context(), map[string]any{"id": float64(0)})
if err != nil {
t.Fatalf("open by link id failed: %v", err)
}
if got, want := len(b.state.Data.PageStack), initialStackLen+1; got != want {
t.Fatalf("page stack length = %d, want %d", got, want)
}
if got, want := len(b.state.Data.URLToPage), initialMapLen; got != want {
t.Fatalf("url_to_page length changed = %d, want %d", got, want)
}
if last := b.state.Data.PageStack[len(b.state.Data.PageStack)-1]; last != linked.URL {
t.Fatalf("last page in stack = %s, want %s", last, linked.URL)
}
}
func TestWrapLines_PreserveAndWidth(t *testing.T) {
long := strings.Repeat("word ", 50)
text := "Line1\n\n" + long + "\nLine3"
lines := wrapLines(text, 40)
// Ensure empty line preserved at index 1
if lines[1] != "" {
t.Fatalf("expected preserved empty line at index 1, got %q", lines[1])
}
// All lines should be <= 40 chars
for i, l := range lines {
if len(l) > 40 {
t.Fatalf("line %d exceeds width: %d > 40", i, len(l))
}
}
}
func TestDisplayPage_FormatHeaderAndLines(t *testing.T) {
b := NewBrowser(&responses.BrowserStateData{PageStack: []string{}, ViewTokens: 1024, URLToPage: map[string]*responses.Page{}})
p := &responses.Page{
URL: "https://example.com/x",
Title: "Example",
Lines: []string{"URL: https://example.com/x", "A", "B", "C"},
}
out, err := b.displayPage(p, 3, 0, 2)
if err != nil {
t.Fatalf("displayPage failed: %v", err)
}
if !strings.HasPrefix(out, "[3] Example(") {
t.Fatalf("header not formatted as expected: %q", out)
}
if !strings.Contains(out, "L0:\n") {
t.Fatalf("missing L0 label: %q", out)
}
if !strings.Contains(out, "L1: URL: https://example.com/x\n") || !strings.Contains(out, "L2: A\n") {
t.Fatalf("missing expected line numbers/content: %q", out)
}
}


@@ -0,0 +1,143 @@
//go:build windows || darwin
package tools
import (
"context"
"encoding/json"
"fmt"
"strconv"
"time"
)
// WebSearchContent represents the content of a search result
type WebSearchContent struct {
Snippet string `json:"snippet"`
FullText string `json:"full_text"`
}
// WebSearchMetadata represents metadata for a search result
type WebSearchMetadata struct {
PublishedDate *time.Time `json:"published_date,omitempty"`
}
// WebSearchResult represents a single search result
type WebSearchResult struct {
Title string `json:"title"`
URL string `json:"url"`
Content WebSearchContent `json:"content"`
Metadata WebSearchMetadata `json:"metadata"`
}
// WebSearchResponse represents the complete response from the websearch API
type WebSearchResponse struct {
Results map[string][]WebSearchResult `json:"results"`
}
// BrowserWebSearch tool for searching the web using ollama.com search API
type BrowserWebSearch struct{}
func (w *BrowserWebSearch) Name() string {
return "gpt_oss_web_search"
}
func (w *BrowserWebSearch) Description() string {
return "Search the web for real-time information using ollama.com search API."
}
func (w *BrowserWebSearch) Prompt() string {
return `Use the gpt_oss_web_search tool to search the web.
1. Come up with a list of search queries to get comprehensive information (typically 2-3 related queries work well)
2. Use the gpt_oss_web_search tool with multiple queries to get results organized by query
3. Use the search results to provide current, up-to-date, accurate information
Today's date is ` + time.Now().Format("January 2, 2006") + `
Add "` + time.Now().Format("January 2, 2006") + `" for news queries and ` + strconv.Itoa(time.Now().Year()+1) + ` for other queries that need current information.`
}
func (w *BrowserWebSearch) Schema() map[string]any {
schemaBytes := []byte(`{
"type": "object",
"properties": {
"queries": {
"type": "array",
"items": {
"type": "string"
},
"description": "List of search queries to look up"
},
"max_results": {
"type": "integer",
"description": "Maximum number of results to return per query (default: 2) up to 5",
"default": 2
}
},
"required": ["queries"]
}`)
var schema map[string]any
if err := json.Unmarshal(schemaBytes, &schema); err != nil {
return nil
}
return schema
}
func (w *BrowserWebSearch) Execute(ctx context.Context, args map[string]any) (any, string, error) {
queriesRaw, ok := args["queries"].([]any)
if !ok {
return nil, "", fmt.Errorf("queries parameter is required and must be an array of strings")
}
queries := make([]string, 0, len(queriesRaw))
for _, q := range queriesRaw {
if query, ok := q.(string); ok {
queries = append(queries, query)
}
}
if len(queries) == 0 {
return nil, "", fmt.Errorf("at least one query is required")
}
// JSON-decoded numbers arrive as float64 in map[string]any, so an int
// assertion would always fail. Default to 2 per the schema, capped at 5.
maxResults := 2
if mr, ok := args["max_results"].(float64); ok && int(mr) > 0 {
maxResults = min(int(mr), 5)
}
result, err := w.performWebSearch(ctx, queries, maxResults)
if err != nil {
return nil, "", err
}
return result, "", nil
}
// performWebSearch handles the actual HTTP request to ollama.com search API
func (w *BrowserWebSearch) performWebSearch(ctx context.Context, queries []string, maxResults int) (*WebSearchResponse, error) {
response := &WebSearchResponse{Results: make(map[string][]WebSearchResult, len(queries))}
for _, query := range queries {
searchResp, err := performWebSearch(ctx, query, maxResults)
if err != nil {
return nil, fmt.Errorf("web_search failed for %q: %w", query, err)
}
converted := make([]WebSearchResult, 0, len(searchResp.Results))
for _, item := range searchResp.Results {
converted = append(converted, WebSearchResult{
Title: item.Title,
URL: item.URL,
Content: WebSearchContent{
Snippet: truncateString(item.Content, 400),
FullText: item.Content,
},
Metadata: WebSearchMetadata{},
})
}
response.Results[query] = converted
}
return response, nil
}
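// Note: truncateString below cuts on a byte boundary, so a multi-byte UTF-8
// rune straddling the limit can be split. For the snippet previews it feeds
// (see Snippet above) that is generally acceptable.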
func truncateString(input string, limit int) string {
if limit <= 0 || len(input) <= limit {
return input
}
return input[:limit]
}
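As a reading aid, a hedged sketch of the per-query grouping performWebSearch builds. The struct and field names come from this file; the data is invented for illustration.
// Standalone sketch (hypothetical data): results keyed by originating query.
resp := &WebSearchResponse{Results: map[string][]WebSearchResult{
    "ollama": {{
        Title: "Ollama",
        URL:   "https://ollama.com",
        Content: WebSearchContent{
            Snippet:  "Run LLMs locally.",
            FullText: "Run LLMs locally.",
        },
    }},
}}
out, _ := json.MarshalIndent(resp, "", "  ")
fmt.Println(string(out))
// {
//   "results": {
//     "ollama": [ ... ]
//   }
// }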

122
app/tools/tools.go Normal file

@@ -0,0 +1,122 @@
//go:build windows || darwin
package tools
import (
"context"
"encoding/json"
"fmt"
)
// Tool defines the interface that all tools must implement
type Tool interface {
// Name returns the unique identifier for the tool
Name() string
// Description returns a human-readable description of what the tool does
Description() string
// Schema returns the JSON schema for the tool's parameters
Schema() map[string]any
// Execute runs the tool with the given arguments and returns result to store in db, and a string result for the model
Execute(ctx context.Context, args map[string]any) (any, string, error)
// Prompt returns a prompt for the tool
Prompt() string
}
// Registry manages the available tools and their execution
type Registry struct {
tools map[string]Tool
workingDir string // Working directory for all tool operations
}
// NewRegistry creates a new tool registry with no tools
func NewRegistry() *Registry {
return &Registry{
tools: make(map[string]Tool),
}
}
// Register adds a tool to the registry
func (r *Registry) Register(tool Tool) {
r.tools[tool.Name()] = tool
}
// Get retrieves a tool by name
func (r *Registry) Get(name string) (Tool, bool) {
tool, exists := r.tools[name]
return tool, exists
}
// List returns all available tools
func (r *Registry) List() []Tool {
tools := make([]Tool, 0, len(r.tools))
for _, tool := range r.tools {
tools = append(tools, tool)
}
return tools
}
// SetWorkingDir sets the working directory for all tool operations
func (r *Registry) SetWorkingDir(dir string) {
r.workingDir = dir
}
// Execute runs a tool with the given name and arguments
func (r *Registry) Execute(ctx context.Context, name string, args map[string]any) (any, string, error) {
tool, ok := r.tools[name]
if !ok {
return nil, "", fmt.Errorf("unknown tool: %s", name)
}
result, text, err := tool.Execute(ctx, args)
if err != nil {
return nil, "", err
}
return result, text, nil
}
// ToolCall represents a request to execute a tool
type ToolCall struct {
ID string `json:"id"`
Type string `json:"type"`
Function ToolFunction `json:"function"`
}
// ToolFunction represents the function call details
type ToolFunction struct {
Name string `json:"name"`
Arguments json.RawMessage `json:"arguments"`
}
// ToolResult represents the result of a tool execution
type ToolResult struct {
ToolCallID string `json:"tool_call_id"`
Content any `json:"content"`
Error string `json:"error,omitempty"`
}
// AvailableTools returns all tools as schema maps suitable for API calls
func (r *Registry) AvailableTools() []map[string]any {
schemas := make([]map[string]any, 0, len(r.tools))
for _, tool := range r.tools {
schema := map[string]any{
"name": tool.Name(),
"description": tool.Description(),
"schema": tool.Schema(),
}
schemas = append(schemas, schema)
}
return schemas
}
// ToolNames returns a list of all tool names
func (r *Registry) ToolNames() []string {
names := make([]string, 0, len(r.tools))
for name := range r.tools {
names = append(names, name)
}
return names
}
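To make the dispatch path concrete, a hypothetical wiring sketch (not part of this diff) that registers the two tools defined later in the diff and routes one call through Registry.Execute. The query value is invented; max_results is passed as float64 because JSON-decoded numbers land in map[string]any as float64.
func exampleDispatch(ctx context.Context) error {
    reg := NewRegistry()
    reg.Register(&WebSearch{}) // defined in web_search.go below
    reg.Register(&WebFetch{})  // defined in web_fetch.go below
    result, text, err := reg.Execute(ctx, "web_search", map[string]any{
        "query":       "ollama release notes", // hypothetical query
        "max_results": float64(3),
    })
    if err != nil {
        return err
    }
    _ = result // value to store (a *SearchResponse here)
    _ = text   // string surfaced to the model (empty for web_search)
    return nil
}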

128
app/tools/web_fetch.go Normal file

@@ -0,0 +1,128 @@
//go:build windows || darwin
package tools
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/ollama/ollama/auth"
)
type WebFetch struct{}
type FetchRequest struct {
URL string `json:"url"`
}
type FetchResponse struct {
Title string `json:"title"`
Content string `json:"content"`
Links []string `json:"links"`
}
func (w *WebFetch) Name() string {
return "web_fetch"
}
func (w *WebFetch) Description() string {
return "Crawl and extract text content from web pages"
}
func (w *WebFetch) Schema() map[string]any {
schemaBytes := []byte(`{
"type": "object",
"properties": {
"url": {
"type": "string",
"description": "URL to crawl and extract content from"
}
},
"required": ["url"]
}`)
var schema map[string]any
if err := json.Unmarshal(schemaBytes, &schema); err != nil {
return nil
}
return schema
}
func (w *WebFetch) Prompt() string {
return ""
}
func (w *WebFetch) Execute(ctx context.Context, args map[string]any) (any, string, error) {
urlRaw, ok := args["url"]
if !ok {
return nil, "", fmt.Errorf("url parameter is required")
}
urlStr, ok := urlRaw.(string)
if !ok || strings.TrimSpace(urlStr) == "" {
return nil, "", fmt.Errorf("url must be a non-empty string")
}
result, err := performWebFetch(ctx, urlStr)
if err != nil {
return nil, "", err
}
return result, "", nil
}
func performWebFetch(ctx context.Context, targetURL string) (*FetchResponse, error) {
reqBody := FetchRequest{URL: targetURL}
jsonBody, err := json.Marshal(reqBody)
if err != nil {
return nil, fmt.Errorf("failed to marshal request body: %w", err)
}
crawlURL, err := url.Parse("https://ollama.com/api/web_fetch")
if err != nil {
return nil, fmt.Errorf("failed to parse fetch URL: %w", err)
}
query := crawlURL.Query()
query.Add("ts", strconv.FormatInt(time.Now().Unix(), 10))
crawlURL.RawQuery = query.Encode()
data := fmt.Appendf(nil, "%s,%s", http.MethodPost, crawlURL.RequestURI())
signature, err := auth.Sign(ctx, data)
if err != nil {
return nil, fmt.Errorf("failed to sign request: %w", err)
}
req, err := http.NewRequestWithContext(ctx, http.MethodPost, crawlURL.String(), bytes.NewBuffer(jsonBody))
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
if signature != "" {
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", signature))
}
client := &http.Client{Timeout: 30 * time.Second}
resp, err := client.Do(req)
if err != nil {
return nil, fmt.Errorf("failed to execute fetch request: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("fetch API error (status %d)", resp.StatusCode)
}
var result FetchResponse
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return nil, fmt.Errorf("failed to decode response: %w", err)
}
return &result, nil
}
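The request is authenticated by signing the string "METHOD,request-URI" and sending the result as a bearer token. A minimal sketch of reconstructing that string for a fixed timestamp (the timestamp is invented; auth.Sign's internals are out of scope here):
u, _ := url.Parse("https://ollama.com/api/web_fetch")
q := u.Query()
q.Add("ts", "1700000000") // hypothetical fixed Unix timestamp
u.RawQuery = q.Encode()
payload := fmt.Appendf(nil, "%s,%s", http.MethodPost, u.RequestURI())
fmt.Println(string(payload)) // POST,/api/web_fetch?ts=1700000000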

145
app/tools/web_search.go Normal file

@@ -0,0 +1,145 @@
//go:build windows || darwin
package tools
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/ollama/ollama/auth"
)
type WebSearch struct{}
type SearchRequest struct {
Query string `json:"query"`
MaxResults int `json:"max_results,omitempty"`
}
type SearchResult struct {
Title string `json:"title"`
URL string `json:"url"`
Content string `json:"content"`
}
type SearchResponse struct {
Results []SearchResult `json:"results"`
}
func (w *WebSearch) Name() string {
return "web_search"
}
func (w *WebSearch) Description() string {
return "Search the web for real-time information using ollama.com web search API."
}
func (w *WebSearch) Prompt() string {
return ""
}
func (w *WebSearch) Schema() map[string]any {
schemaBytes := []byte(`{
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "The search query to execute"
},
"max_results": {
"type": "integer",
"description": "Maximum number of search results to return",
"default": 3
}
},
"required": ["query"]
}`)
var schema map[string]any
if err := json.Unmarshal(schemaBytes, &schema); err != nil {
return nil
}
return schema
}
func (w *WebSearch) Execute(ctx context.Context, args map[string]any) (any, string, error) {
rawQuery, ok := args["query"]
if !ok {
return nil, "", fmt.Errorf("query parameter is required")
}
queryStr, ok := rawQuery.(string)
if !ok || strings.TrimSpace(queryStr) == "" {
return nil, "", fmt.Errorf("query must be a non-empty string")
}
maxResults := 3 // default matches the schema above
if v, ok := args["max_results"].(float64); ok && int(v) > 0 {
maxResults = int(v)
}
result, err := performWebSearch(ctx, queryStr, maxResults)
if err != nil {
return nil, "", err
}
return result, "", nil
}
func performWebSearch(ctx context.Context, query string, maxResults int) (*SearchResponse, error) {
reqBody := SearchRequest{Query: query, MaxResults: maxResults}
jsonBody, err := json.Marshal(reqBody)
if err != nil {
return nil, fmt.Errorf("failed to marshal request body: %w", err)
}
searchURL, err := url.Parse("https://ollama.com/api/web_search")
if err != nil {
return nil, fmt.Errorf("failed to parse search URL: %w", err)
}
q := searchURL.Query()
q.Add("ts", strconv.FormatInt(time.Now().Unix(), 10))
searchURL.RawQuery = q.Encode()
data := fmt.Appendf(nil, "%s,%s", http.MethodPost, searchURL.RequestURI())
signature, err := auth.Sign(ctx, data)
if err != nil {
return nil, fmt.Errorf("failed to sign request: %w", err)
}
req, err := http.NewRequestWithContext(ctx, http.MethodPost, searchURL.String(), bytes.NewBuffer(jsonBody))
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
if signature != "" {
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", signature))
}
client := &http.Client{Timeout: 10 * time.Second}
resp, err := client.Do(req)
if err != nil {
return nil, fmt.Errorf("failed to execute search request: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("search API error (status %d)", resp.StatusCode)
}
var result SearchResponse
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return nil, fmt.Errorf("failed to decode response: %w", err)
}
return &result, nil
}
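For reference, the response shape web_search decodes, exercised against a hand-written payload; the JSON keys mirror the struct tags above and the values are invented.
raw := `{"results":[{"title":"Ollama","url":"https://ollama.com","content":"Run LLMs locally."}]}`
var sr SearchResponse
if err := json.Unmarshal([]byte(raw), &sr); err != nil {
    panic(err)
}
fmt.Println(sr.Results[0].Title) // Ollama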


@@ -1,24 +0,0 @@
package commontray
var (
Title = "Ollama"
ToolTip = "Ollama"
UpdateIconName = "tray_upgrade"
IconName = "tray"
)
type Callbacks struct {
Quit chan struct{}
Update chan struct{}
DoFirstUse chan struct{}
ShowLogs chan struct{}
}
type OllamaTray interface {
GetCallbacks() Callbacks
Run()
UpdateAvailable(ver string) error
DisplayFirstUseNotification() error
Quit()
}


@@ -1,28 +0,0 @@
package tray
import (
"fmt"
"runtime"
"github.com/ollama/ollama/app/assets"
"github.com/ollama/ollama/app/tray/commontray"
)
func NewTray() (commontray.OllamaTray, error) {
extension := ".png"
if runtime.GOOS == "windows" {
extension = ".ico"
}
iconName := commontray.UpdateIconName + extension
updateIcon, err := assets.GetIcon(iconName)
if err != nil {
return nil, fmt.Errorf("failed to load icon %s: %w", iconName, err)
}
iconName = commontray.IconName + extension
icon, err := assets.GetIcon(iconName)
if err != nil {
return nil, fmt.Errorf("failed to load icon %s: %w", iconName, err)
}
return InitPlatformTray(icon, updateIcon)
}


@@ -1,13 +0,0 @@
//go:build !windows
package tray
import (
"errors"
"github.com/ollama/ollama/app/tray/commontray"
)
func InitPlatformTray(icon, updateIcon []byte) (commontray.OllamaTray, error) {
return nil, errors.New("not implemented")
}

Some files were not shown because too many files have changed in this diff.