Mirror of https://github.com/likelovewant/ollama-for-amd.git, synced 2025-12-23 07:03:57 +00:00
Update GGML to b6646 (#12245)
Notable EOLs with this change:
- macOS v12 and v13 are no longer supported (v14+ required)
- AMD gfx900 and gfx906 are no longer supported
llama/llama.cpp/vendor/miniaudio/miniaudio.h (vendored): 6357
File diff suppressed because it is too large