Mirror of https://github.com/likelovewant/ollama-for-amd.git (synced 2025-12-22 14:53:56 +00:00)
ollama-for-amd/llm @ 9fc3bba9cf79d92d44f75e532310034585d23bc8
Latest commit: 86279f4ae3 "unbound max num gpu layers (#591)" by Bruce MacDonald, co-authored by Michael Yang <mxyng@pm.me>, 2023-09-25 18:36:46 -04:00
llama.cpp   silence warm up log                    2023-09-21 14:53:33 -07:00
falcon.go   fix: add falcon.go                     2023-09-13 14:47:37 -07:00
ggml.go     unbound max num gpu layers (#591)      2023-09-25 18:36:46 -04:00
gguf.go     unbound max num gpu layers (#591)      2023-09-25 18:36:46 -04:00
llama.go    unbound max num gpu layers (#591)      2023-09-25 18:36:46 -04:00
llm.go      unbound max num gpu layers (#591)      2023-09-25 18:36:46 -04:00
utils.go    partial decode ggml bin for more info  2023-08-10 09:23:10 -07:00