Mirror of https://github.com/likelovewant/ollama-for-amd.git, synced 2025-12-23 23:18:26 +00:00.
ollama-for-amd/llm at commit 4bd9b97d7b73a81f24cc33dd40352cb5e343c599
Latest commit: likelovewant 4bd9b97d7b, "Merge branch 'ollama:main' into main", 2024-05-07 14:47:50 +08:00
Name | Last commit message | Last commit date
ext_server | omit prompt and generate settings from final response | 2024-05-03 17:00:02 -07:00
generate | Update gen_windows.ps1 | 2024-05-06 19:43:34 +08:00
llama.cpp @ 952d03dbea | update llama.cpp commit to 952d03d | 2024-04-30 17:31:20 -04:00
patches | Fix llava models not working after first request (#4164) | 2024-05-05 20:50:31 -07:00
ggla.go | refactor tensor query | 2024-04-10 11:37:20 -07:00
ggml.go | fix: mixtral graph | 2024-04-22 17:19:44 -07:00
gguf.go | fixes for gguf (#3863) | 2024-04-23 20:57:20 -07:00
llm_darwin_amd64.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00
llm_darwin_arm64.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00
llm_linux.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00
llm_windows.go | Move nested payloads to installer and zip file on windows | 2024-04-23 16:14:47 -07:00
llm.go | Add import declaration for windows,arm64 to llm.go | 2024-04-26 23:23:53 -06:00
memory.go | Centralize server config handling | 2024-05-05 16:49:50 -07:00
payload.go | Move nested payloads to installer and zip file on windows | 2024-04-23 16:14:47 -07:00
server.go | Use our libraries first | 2024-05-06 14:23:29 -07:00
status.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00