
Learn how to run Llama 3 entirely on your own hardware. By cutting cloud providers and external API keys out of the loop, you keep full control over your data, avoid recurring per-token costs, and keep sensitive prompts private.
Why go local? Using cloud-based AI carries the risk of sensitive prompts being used for training or exposed via data breaches. With Ollama + OpenClaw, your data never leaves your server.
| Prompt | Input / Selection |
|---|---|
| Configure model/auth? | Yes |
| Select AI Provider | Ollama (or Custom) |
| Model Name | llama3-agent |
| Base URL | http://127.0.0.1:11434 |
| API Key | (leave blank; a local server needs no key) |
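Once the wizard finishes, you can sanity-check the endpoint directly. Here is a minimal sketch using only the Python standard library; the base URL and model name come from the table above, while the helper function names are illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434"  # Base URL from the config table
MODEL = "llama3-agent"                 # Model name from the config table


def build_generate_request(prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": MODEL,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }


def generate(prompt: str) -> str:
    """POST the prompt to the local server; no API key header is needed."""
    body = json.dumps(build_generate_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the Ollama service running, `generate("Say hello in one word.")` returns the model's reply as a string; if the call fails, the service is not listening on the configured base URL.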
Llama 3 is now integrated with a 16k-token context window. If you run into memory issues, try restarting the Ollama service.
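A 16k context window is larger than Ollama's default, so one way to get it is a custom Modelfile. This is a sketch under two assumptions: `llama3-agent` matches the model name entered in the wizard, and your hardware has enough RAM/VRAM for the larger context:

```
# Modelfile: derive llama3-agent from the base llama3 model
FROM llama3
# Raise the context window to 16k tokens
PARAMETER num_ctx 16384
```

Build the model with `ollama create llama3-agent -f Modelfile`, then reference `llama3-agent` as the model name in the wizard. Note that a larger context window increases memory use, which is the most common source of the memory issues mentioned above.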
Next Step: Security