---
title: n8n
---
## Install
Install [n8n](https://docs.n8n.io/choose-n8n/).
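If you just want a quick local instance to test with, n8n can typically be started with `npx` or Docker. Treat the commands below as a sketch and defer to the n8n documentation for your platform:

```bash
# Option 1: run n8n directly with Node.js (a recent Node LTS is recommended)
npx n8n

# Option 2: run n8n in Docker, persisting data in a named volume
docker run -it --rm --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```

Either way, the n8n editor should be reachable at `http://localhost:5678`.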
## Using Ollama Locally
1. In the top right corner, click the dropdown and select **Create Credential**
<div style={{ display: 'flex', justifyContent: 'center' }}>
<img
src="/images/n8n-credential-creation.png"
alt="Create an n8n credential"
width="75%"
/>
</div>
2. Under **Add new credential**, select **Ollama**
<div style={{ display: 'flex', justifyContent: 'center' }}>
<img
src="/images/n8n-ollama-form.png"
alt="Select Ollama under Credential"
width="75%"
/>
</div>
3. Confirm the **Base URL** is set to `http://localhost:11434` if n8n is running locally (outside Docker), or to `http://host.docker.internal:11434` if n8n is running in Docker, then click **Save**
<Note>
In environments that don't use Docker Desktop (i.e., Linux server installations), `host.docker.internal` is not added automatically.
Run n8n in Docker with `--add-host=host.docker.internal:host-gateway`,
or add the following to your Docker Compose file:
```yaml
extra_hosts:
  - "host.docker.internal:host-gateway"
```
</Note>
You should see a `Connection tested successfully` message.
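If the connection test fails instead, it's worth confirming that Ollama is reachable at the Base URL you entered. A minimal check from the machine running n8n, assuming the default local address:

```bash
# Lists the models your local Ollama server knows about;
# an error here means n8n won't be able to reach Ollama either
curl http://localhost:11434/api/tags
```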
4. When creating a new workflow, select **Add a first step**, then choose an **Ollama** node
<div style={{ display: 'flex', justifyContent: 'center' }}>
<img
src="/images/n8n-chat-node.png"
alt="Add a first step with Ollama node"
width="75%"
/>
</div>
5. Select your model of choice (e.g. `qwen3-coder`)
<div style={{ display: 'flex', justifyContent: 'center' }}>
<img
src="/images/n8n-models.png"
alt="Select an Ollama model"
width="75%"
/>
</div>
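The model dropdown is populated from the models available on your Ollama server, so a model has to be pulled before it shows up. A quick sketch using the Ollama CLI (the model name is just an example):

```bash
# Download the model so it appears in n8n's model list
ollama pull qwen3-coder

# Verify it's available locally
ollama list
```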
## Connecting to ollama.com
1. Create an [API key](https://ollama.com/settings/keys) on **ollama.com**
2. In n8n, click **Create Credential** and select **Ollama**
3. Set the **Base URL** to `https://ollama.com`
4. Enter your **API Key** and click **Save**
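
To confirm the key works outside of n8n, you can call the API directly. This is a hedged sketch that assumes **ollama.com** accepts the standard Ollama chat endpoint with Bearer-token authentication; adjust the model name to one available on your account:

```bash
export OLLAMA_API_KEY="your-api-key"   # placeholder: use the key created above

curl https://ollama.com/api/chat \
  -H "Authorization: Bearer $OLLAMA_API_KEY" \
  -d '{
    "model": "gpt-oss:120b",
    "messages": [{ "role": "user", "content": "Hello" }],
    "stream": false
  }'
```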