Mirror of https://github.com/likelovewant/ollama-for-amd.git (synced 2025-12-25 07:58:01 +00:00)
Update GGML to b6646 (#12245)
Notable EOLs with this change:
- macOS v12 and v13 are no longer supported (v14+ required)
- AMD gfx900 and gfx906 are no longer supported
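Since this update drops AMD gfx900 and gfx906, it may be worth checking which gfx target your GPU reports before upgrading. Below is a minimal sketch (not part of this commit) that shells out to ROCm's `rocminfo` tool, assuming it is installed and on PATH, and flags the targets dropped here:

```go
package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	// Query ROCm for installed GPU agents; assumes rocminfo is on PATH.
	out, err := exec.Command("rocminfo").Output()
	if err != nil {
		fmt.Println("could not run rocminfo:", err)
		return
	}

	// rocminfo prints the gfx target of each agent (e.g. "Name: gfx906").
	dropped := map[string]bool{"gfx900": true, "gfx906": true}
	seen := map[string]bool{}
	for _, t := range regexp.MustCompile(`gfx[0-9a-f]+`).FindAllString(string(out), -1) {
		if seen[t] {
			continue // rocminfo repeats the target per agent; report each once
		}
		seen[t] = true
		if dropped[t] {
			fmt.Printf("%s: support dropped as of GGML b6646\n", t)
		} else {
			fmt.Printf("%s: detected\n", t)
		}
	}
}
```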
Changed files:
- llama/llama.cpp/src/llama-kv-cache.cpp (vendored, new file, 2020 lines)
File diff suppressed because it is too large