mirrors / ollama-for-amd
mirror of https://github.com/likelovewant/ollama-for-amd.git, synced 2025-12-24 15:38:27 +00:00
ollama-for-amd / ml / backend / ggml / ggml
at commit 9c5bf342bc34f94de9aa4a171d726e6b341a91e6

History
Daniel Hiltgen  0cc90a8186  harden uncaught exception registration (#12120)  2025-09-02 09:43:55 -07:00
..
cmake           update vendored llama.cpp and ggml (#11823)                   2025-08-14 14:42:58 -07:00
include         ggml: Avoid allocating CUDA primary context on unused GPUs    2025-08-27 16:24:18 -07:00
src             harden uncaught exception registration (#12120)               2025-09-02 09:43:55 -07:00
.rsync-filter   update vendored llama.cpp and ggml (#11823)                   2025-08-14 14:42:58 -07:00
LICENSE         next build (#8539)                                            2025-01-29 15:03:38 -08:00