mirrors/ollama-for-amd
Mirror of https://github.com/likelovewant/ollama-for-amd.git, synced 2025-12-23 07:03:57 +00:00
ollama-for-amd/model/models (at commit f810ec741c5563a77d05719a1459b885aa641b27)
Latest commit: Michael Yang, 5994e8e8fd, "embedding gemma model (#12181): ollama: add embeddings", 2025-09-04 09:09:07 -07:00
Name      | Last commit                                                                                   | Date
gemma2/   | ml: Panic rather than return error on tensor allocation failure                               | 2025-05-22 14:38:09 -07:00
gemma3/   | embedding gemma model (#12181)                                                                | 2025-09-04 09:09:07 -07:00
gemma3n/  | Increase performance for Gemma3n models on NVGPUs by enabling CUDA Graph execution (#11525)   | 2025-07-29 12:37:06 -07:00
gptoss/   | update vendored llama.cpp and ggml (#11823)                                                   | 2025-08-14 14:42:58 -07:00
llama/    | Only load supported models on new engine (#11362)                                             | 2025-07-11 12:21:54 -07:00
llama4/   | perf: build graph for next batch async to keep GPU busy (#11863)                              | 2025-08-29 14:20:28 -07:00
mistral3/ | perf: build graph for next batch async to keep GPU busy (#11863)                              | 2025-08-29 14:20:28 -07:00
mllama/   | perf: build graph for next batch async to keep GPU busy (#11863)                              | 2025-08-29 14:20:28 -07:00
qwen2/    | Only load supported models on new engine (#11362)                                             | 2025-07-11 12:21:54 -07:00
qwen3/    | use nn.Linear in place of ml.Tensor (#11049)                                                  | 2025-06-11 12:10:15 -07:00
qwen25vl/ | perf: build graph for next batch async to keep GPU busy (#11863)                              | 2025-08-29 14:20:28 -07:00
models.go | gpt-oss (#11672)                                                                              | 2025-08-05 12:21:16 -07:00