mirrors/ollama-for-amd
Mirror of https://github.com/likelovewant/ollama-for-amd.git, synced 2025-12-22 14:53:56 +00:00
ollama-for-amd/llama/patches/0036-ggml-cuda-skip-large-batches.patch
(viewed at revision cb485b20194e0fb28709b6b83ffdc3726282e9a7)
Last commit: 0796d79d19 by Michael Yang, 2025-11-18 16:11:37 -08:00
cuda: skip large batches
CUDA panics on batches larger than 1024, so skip those and fall back to the CPU.
File size: 1.1 KiB
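
The patch body itself is not shown on this page, but the commit message describes the approach: decline batches above the CUDA limit so execution falls back to the CPU backend instead of crashing. Below is a minimal, hypothetical C++ sketch of that kind of guard; the constant, function, and names are illustrative assumptions, not the contents of the actual patch.

#include <cstdint>
#include <cstdio>

// Hypothetical sketch only -- not the actual patch. The commit message says
// CUDA panics on batches larger than 1024, so a guard like this would make
// the backend decline such batches and let them fall back to the CPU path.
constexpr int64_t kMaxCudaBatch = 1024;  // limit quoted in the commit message

// Returns true if the CUDA backend should handle a batch of this size.
static bool cuda_batch_supported(int64_t n_batch) {
    return n_batch <= kMaxCudaBatch;
}

int main() {
    const int64_t batches[] = {512, 1024, 2048};
    for (int64_t n : batches) {
        std::printf("batch %lld -> %s\n", static_cast<long long>(n),
                    cuda_batch_supported(n) ? "CUDA" : "CPU fallback");
    }
    return 0;
}

In practice such a check would live in the backend's op-support or scheduling logic; the standalone program above only demonstrates the size threshold and the resulting CUDA-versus-CPU routing decision.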