mirrors/ollama-for-amd
mirror of https://github.com/likelovewant/ollama-for-amd.git synced 2025-12-21 22:33:56 +00:00
Files at commit 8c4022b06b4ff593439ed8ef29e8362f3844172c
Path: ollama-for-amd/llm
Latest commit: Purinda Gunasekara be61a81758 "main-gpu argument is not getting passed to llamacpp, fixed." (#1192), 2023-11-20 10:52:52 -05:00
File          Last updated                 Last commit message
llama.cpp     2023-11-19 23:20:26 -05:00   enable cpu instructions on intel macs
falcon.go     2023-10-02 19:56:51 -07:00   starcoder
ggml.go       2023-10-23 09:35:49 -07:00   ggufv3
gguf.go       2023-11-08 17:55:46 -08:00   instead of static number of parameters for each model family, get the real number from the tensors (#1022)
llama.go      2023-11-20 10:52:52 -05:00   main-gpu argument is not getting passed to llamacpp, fixed. (#1192)
llm.go        2023-11-09 16:44:02 -08:00   JSON mode: add `"format"` as an api parameter (#1051)
starcoder.go  2023-10-02 19:56:51 -07:00   starcoder
utils.go      2023-08-10 09:23:10 -07:00   partial decode ggml bin for more info