Similar to the llama engine, quantizing the KV cache requires flash attention to be enabled through the Ollama server.
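As a minimal sketch of how this could look in practice, assuming the standard Ollama server environment variables (`OLLAMA_FLASH_ATTENTION` and `OLLAMA_KV_CACHE_TYPE`) also apply to this build:

```shell
# Enable flash attention on the Ollama server (required before the KV cache can be quantized).
export OLLAMA_FLASH_ATTENTION=1

# Choose a quantized KV cache type; f16 is the unquantized default, q8_0 and q4_0 are quantized options.
export OLLAMA_KV_CACHE_TYPE=q8_0

# Restart the server so the settings take effect.
ollama serve
```

If the server is managed as a system service, the same variables would instead be set in the service environment rather than an interactive shell.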