Originally, llamaServer used the old memory estimates and could be paired with
either the old or the new engine, while ollamaServer was used only with the new
estimates and the new engine. Because these implementations did not map
directly to an engine, engine-specific code ended up in common code paths.
Now that the new estimates are always used for the new engine, there is
a direct mapping between server type and engine. This separates most of the
engine-specific code into the appropriate implementation, making things
easier to understand.
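For illustration, here is a minimal sketch of the mapping described above, using hypothetical identifiers (llmServer, newServer, useNewEngine) rather than the codebase's actual ones: the server implementation is chosen directly from the engine, so engine-specific behavior lives inside each implementation instead of in shared code paths.

```go
// Hypothetical sketch only; llmServer, newServer, and useNewEngine are
// illustrative names, not the exact identifiers in the codebase.
package llm

type llmServer interface {
	Load(modelPath string) error
	Close() error
}

// llamaServer wraps the old (llama.cpp) engine; ollamaServer wraps the new engine.
type llamaServer struct{}
type ollamaServer struct{}

func (s *llamaServer) Load(modelPath string) error  { return nil }
func (s *llamaServer) Close() error                 { return nil }
func (s *ollamaServer) Load(modelPath string) error { return nil }
func (s *ollamaServer) Close() error                { return nil }

// newServer picks the implementation from the engine alone, so engine-specific
// behavior stays inside each implementation rather than in common code paths.
func newServer(useNewEngine bool) llmServer {
	if useNewEngine {
		return &ollamaServer{}
	}
	return &llamaServer{}
}
```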
Currently, both the old and new engines have code to calculate how much memory
a model requires and to lay out its layers onto GPUs. This change reuses the
new engine's layout code for the old engine as well, bringing them closer
together. The old engine continues to use its current method of estimating
required memory.
This reduces maintenance effort and improves consistency, as new
features only need to be implemented in one place. The newer code
is also more accurate, especially with multiple GPUs.
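As a rough illustration of what the shared layout step does, here is a minimal sketch assuming a hypothetical gpu type, a uniform per-layer size, and a simple greedy strategy; the real code's data structures and placement logic differ, but the idea is to place layers on GPUs according to available memory.

```go
// Illustrative only: gpu, layoutLayers, and the greedy strategy are assumptions.
package main

import "fmt"

type gpu struct {
	ID   string
	Free uint64 // free VRAM in bytes
}

// layoutLayers assigns each layer (all assumed to be layerSize bytes) to the
// GPU with the most remaining free memory; layers that fit nowhere stay on CPU.
func layoutLayers(gpus []gpu, numLayers int, layerSize uint64) map[string][]int {
	assign := map[string][]int{}
	for layer := 0; layer < numLayers; layer++ {
		best := -1
		for i := range gpus {
			if gpus[i].Free >= layerSize && (best < 0 || gpus[i].Free > gpus[best].Free) {
				best = i
			}
		}
		if best < 0 {
			assign["cpu"] = append(assign["cpu"], layer)
			continue
		}
		gpus[best].Free -= layerSize
		assign[gpus[best].ID] = append(assign[gpus[best].ID], layer)
	}
	return assign
}

func main() {
	gpus := []gpu{{"GPU-0", 8 << 30}, {"GPU-1", 4 << 30}}
	fmt.Println(layoutLayers(gpus, 10, 1<<30))
}
```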
We used to control which devices llama.cpp saw using CUDA_VISIBLE_DEVICES or
similar. This ensured that the layers offloaded to a device were actually the
ones intended, which is particularly important because we might reorder
devices based on free memory or performance.
When we started explicitly scheduling layers, this logic went away, but the
llamarunner had no way to set the correct order of devices. As a result, the
correct number of layers would be assigned to a device, but not necessarily
the layers that were expected. This change sets up the devices correctly
based on the offload information.
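A hedged sketch of the idea, with hypothetical names (orderDevices, offloadOrder): the runner reorders its device list to match the scheduler's offload plan, so each layer lands on the device that was intended.

```go
// Illustrative only: orderDevices and offloadOrder are hypothetical names.
package main

import (
	"fmt"
	"sort"
)

// orderDevices reorders the runner's device IDs to match the scheduler's
// offload plan, so the layers assigned to position N really run on the
// device the scheduler intended. Devices not in the plan sort last.
func orderDevices(devices, offloadOrder []string) []string {
	rank := make(map[string]int, len(offloadOrder))
	for i, id := range offloadOrder {
		rank[id] = i
	}
	sort.SliceStable(devices, func(i, j int) bool {
		ri, ok := rank[devices[i]]
		if !ok {
			ri = len(offloadOrder)
		}
		rj, ok := rank[devices[j]]
		if !ok {
			rj = len(offloadOrder)
		}
		return ri < rj
	})
	return devices
}

func main() {
	// The scheduler decided GPU-1 (more free VRAM) should come first.
	fmt.Println(orderDevices([]string{"GPU-0", "GPU-1"}, []string{"GPU-1", "GPU-0"}))
}
```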
Adds logprobs support to Ollama's API, including Ollama's OpenAI-compatible
API. When the new 'logprobs' boolean parameter is set, Ollama returns the log
probability of each generated token. An integer 'top_logprobs' parameter (up
to 20) can also be specified; when set, the API also returns that many of the
most likely tokens at each token position.
Co-authored-by: Baptiste Jamin <baptiste@crisp.chat>
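A hedged example of exercising the new parameters against Ollama's native /api/generate endpoint; the model name and the response handling are illustrative, and only the 'logprobs' and 'top_logprobs' fields come from this change.

```go
// Sketch of a request using the new logprobs parameters.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model":        "llama3.2", // hypothetical model name
		"prompt":       "Why is the sky blue?",
		"stream":       false,
		"logprobs":     true, // return the log probability of each generated token
		"top_logprobs": 5,    // also return the 5 most likely tokens per position (max 20)
	})

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // response includes per-token log probabilities
}
```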
* doc: re-add login autostart faq
This appears to have been accidentally dropped during the doc migration.
* docs: GPU updates lost on the doc update
* review comments: improve Windows login disable instructions
Co-authored-by: A-Akhil <akhilrahul70@gmail.com>
This PR introduces a new ollama embed command that allows users to generate embeddings directly from the command line.
* Added ollama embed MODEL [TEXT...] command for generating text embeddings
* Supports both direct text arguments and stdin piping for scripted workflows
* Outputs embeddings as JSON arrays, one per line (see the sketch after this list for scripted use)
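A sketch of the scripted workflow mentioned above. It assumes the command treats stdin as one input text per line and that each output line decodes as a JSON array of float64; the model name is hypothetical.

```go
// Illustrative sketch: pipes two texts into `ollama embed` and decodes the
// one-JSON-array-per-line output.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("ollama", "embed", "nomic-embed-text") // hypothetical model
	cmd.Stdin = strings.NewReader("first document\nsecond document\n")

	out, err := cmd.Output()
	if err != nil {
		panic(err)
	}

	// One JSON array per line, one line per input text.
	dec := json.NewDecoder(bytes.NewReader(out))
	for {
		var embedding []float64
		if err := dec.Decode(&embedding); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("embedding with %d dimensions\n", len(embedding))
	}
}
```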
The scheduler updates free VRAM based on the currently loaded models. This was
mutating the persisted list of GPUs, and when coupled with the non-refreshing
logic for Metal, it led to stale low VRAM reporting after unload. The fix is
to make sure GPU discovery always returns a copy, so the scheduler's GPU list
is in fact ephemeral and doesn't leak any temporary adjustments back into the
persistent list.
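A minimal sketch of the fix, assuming a hypothetical GpuInfo type and a cached deviceList; what matters is that discovery hands out a copy, so the scheduler's adjustments to free memory stay ephemeral.

```go
// Illustrative only: GpuInfo, deviceList, and GetGPUInfo are stand-in names.
package discover

import "sync"

type GpuInfo struct {
	ID         string
	FreeMemory uint64
}

var (
	mu         sync.Mutex
	deviceList []GpuInfo // persisted discovery results
)

// GetGPUInfo returns an ephemeral copy of the persisted device list. Callers
// (such as the scheduler subtracting VRAM for loaded models) can mutate the
// returned slice freely without leaking changes back into deviceList.
func GetGPUInfo() []GpuInfo {
	mu.Lock()
	defer mu.Unlock()
	out := make([]GpuInfo, len(deviceList))
	copy(out, deviceList)
	return out
}
```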
The behavior change in 0.12.4 is most likely the root cause of hangs some
users are seeing. This reverts to the 0.12.3 code, with some added trace
logging.
* discovery: only retry AMD GPUs
CUDA and Vulkan don't crash on unsupported devices, so a retry isn't necessary.
This also refactors the code to move the Library-specific logic into the ml
package (a sketch of the retry policy follows this list).
* review comments
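A hedged sketch of the library-specific policy now living in the ml package, with hypothetical names: only AMD backends warrant a retry, because probing unsupported devices there can crash.

```go
// Illustrative only: shouldRetryDiscovery and the library strings are assumptions.
package ml

func shouldRetryDiscovery(library string) bool {
	switch library {
	case "rocm": // AMD: probing unsupported devices may crash, so retry without them
		return true
	default: // CUDA, Vulkan, Metal, CPU: probes fail gracefully, no retry needed
		return false
	}
}
```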
* PDH free memory skeleton
* Add PDH printing
* Add LUID support for Vulkan
* wire luid from ggml-vulkan to mem-dxgi-pdh file
* Fix to ggml-impl
* Continue skeleton
* Implemented ggml_dxgi_pdh_get_device_memory
* fix comments
* Fix: change value from GB to bytes
* add ifdefs to only support windows and not linux
* modify error codes
* Finished ggml_dxgi_pdh_init() function
* completed ggml_dxgi_pdh_release()
* Formatting changes, add static to functions
* fix build errors
* fix go build error
* fix luid - now should match between dxgi and vulkan
* Fix the free memory reporting (was using copy by value, changed to reference)
* keep only dxgi1_2.h
* Modifications based on PR feedback
* fix merge conflicts (2) and fix desc1.description printout
* move dxgi + pdh api calls to before the vendor specific library calls
* change from 3 samples to 1 sample for PDH
* modify when old_mode is set
* add fix for building on macOS
* fix release and returns for other vendors
* add patch file
* app: add code for macOS and Windows apps under 'app'
* app: add readme
* app: windows and linux only for now
* ci: fix ui CI validation
Co-authored-by: jmorganca <jmorganca@gmail.com>
The initial implementation of qwen3-vl:235b exceeded the maximum graph
size, which is based on the number of tensors. Although this was later fixed
through the use of the mrope operation, we are close to the limit in
some cases. This updates our code to track the current llama.cpp usage of GGML.
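For context, GGML graphs are allocated with a fixed node budget, and llama.cpp derives that budget from the model's tensor count. The sketch below uses made-up constants purely to illustrate the scaling, not the exact values used by llama.cpp or Ollama.

```go
// Illustrative sizing rule only; the constants are assumptions.
package main

import "fmt"

// maxGraphNodes returns a node budget that grows with the tensor count but
// never drops below a fixed floor, mirroring the kind of rule llama.cpp uses.
func maxGraphNodes(numTensors int) int {
	const floor = 8192
	if n := 5 * numTensors; n > floor {
		return n
	}
	return floor
}

func main() {
	// A very large multimodal model has far more tensors, so its graph budget
	// must scale with the tensor count or graph construction exceeds the limit.
	fmt.Println(maxGraphNodes(1500), maxGraphNodes(12000))
}
```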