Ollama
Ollama is a tool for running large language models. It's designed to be easy to use and fast.
Note: this project is a work in progress. Certain models that can be run with ollama are intended for research and/or non-commercial use only.
Install
Using pip:
pip install ollama
Using Docker:
docker run ollama/ollama
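The bare docker run above starts the server with no port mapping or persistent storage. A more typical invocation is sketched below; it assumes the default API port 11434 and the /root/.ollama model directory from the current Ollama Docker instructions, which may not match this snapshot:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama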
Quickstart
To run a model, use ollama run:
ollama run orca-mini-3b
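You can also pass a prompt as an argument for a one-off response. This is a sketch of current ollama CLI behaviour; older builds may only offer the interactive prompt that ollama run opens:
ollama run orca-mini-3b "Why is the sky blue?"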
You can also run models directly from Hugging Face:
ollama run huggingface.co/TheBloke/orca_mini_3B-GGML
Or run a downloaded model file directly:
ollama run ~/Downloads/orca-mini-13b.ggmlv3.q4_0.bin
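To call a model from scripts, the running server also exposes an HTTP API. The example below is a sketch that assumes the /api/generate endpoint and default port 11434 documented for current Ollama releases; the API in this snapshot may differ:
curl http://localhost:11434/api/generate -d '{
  "model": "orca-mini-3b",
  "prompt": "Why is the sky blue?"
}'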
Documentation