Add cgo implementation for llama.cpp

Run the server.cpp directly inside the Go runtime via cgo
while retaining the LLM Go abstractions.
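
For illustration, a minimal cgo sketch of the approach described above: a Go wrapper that calls into a C shim compiled from llama.cpp's server code, so the existing Go LLM interface keeps its shape while the server runs in-process. The header name ext_server.h and the function server_completion are assumptions made for this sketch, not the bindings actually added in this commit.

package llm

/*
#cgo CXXFLAGS: -std=c++11
#cgo LDFLAGS: -lstdc++
#include <stdlib.h>
// "ext_server.h" is a hypothetical C shim over llama.cpp's server.cpp;
// it is assumed to declare: char *server_completion(const char *prompt);
#include "ext_server.h"
*/
import "C"

import "unsafe"

// llamaServer runs the llama.cpp server in-process and exposes it behind
// the existing Go abstraction instead of talking to a child process.
type llamaServer struct{}

// Predict passes a prompt to the embedded server and copies back the
// C string result, freeing the C allocations on the way out.
func (s *llamaServer) Predict(prompt string) (string, error) {
	cPrompt := C.CString(prompt)
	defer C.free(unsafe.Pointer(cPrompt))

	cResp := C.server_completion(cPrompt) // assumed shim entry point
	defer C.free(unsafe.Pointer(cResp))

	return C.GoString(cResp), nil
}

A real binding would also need streaming output and error propagation; this only shows the shape of the cgo call.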
Daniel Hiltgen
2023-11-13 17:20:34 -08:00
parent 5e7fd6906f
commit d4cd695759
27 changed files with 1189 additions and 765 deletions

.gitignore (3 changes)

@@ -8,4 +8,5 @@ ollama
 ggml-metal.metal
 .cache
 *.exe
-.idea
+.idea
+test_data