mirrors/ollama-for-amd
Mirror of https://github.com/likelovewant/ollama-for-amd.git (last synced 2025-12-23 15:08:27 +00:00)
File: ollama-for-amd/llama/patches/0035-vulkan-Fix-crash-when-FP16-mul_mat-accumulation-is-n.patch
At commit: dba62ff3a572af4af845711c2091b70606b06af4
Latest commit: 0796d79d19 by Michael Yang, 2025-11-18 16:11:37 -08:00
cuda: skip large batches
CUDA panics on batches larger than 1024, so skip those batches and fall back to the CPU.
File size: 3.2 KiB
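The commit message describes a simple guard: batches the CUDA path cannot handle are not offloaded and run on the CPU instead. The sketch below illustrates that idea only; it is not ollama's actual code, and the names selectBackend and maxCUDABatch are hypothetical.

```go
package main

import "fmt"

// maxCUDABatch is the assumed limit from the commit message: CUDA panics on
// batches larger than 1024.
const maxCUDABatch = 1024

type backend string

const (
	backendCUDA backend = "cuda"
	backendCPU  backend = "cpu"
)

// selectBackend (hypothetical) picks CUDA for batches it can handle and falls
// back to the CPU for anything larger.
func selectBackend(batchSize int, cudaAvailable bool) backend {
	if cudaAvailable && batchSize <= maxCUDABatch {
		return backendCUDA
	}
	return backendCPU
}

func main() {
	fmt.Println(selectBackend(512, true))  // cuda
	fmt.Println(selectBackend(2048, true)) // cpu: too large for the CUDA path
}
```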