
Local Llama 3 Integration & Data Sovereignty

Learn how to run Llama 3 entirely on your own hardware. By eliminating cloud providers and external API keys, you keep full control of your data, avoid recurring API costs, and ensure that prompts and responses never leave your network.

Why go local? Using cloud-based AI carries the risk of sensitive prompts being used for training or exposed via data breaches. With Ollama + OpenClaw, your data never leaves your server.

Step 1

Install Ollama Engine

Install Ollama directly using the official script:
Server Terminal
curl -fsSL https://ollama.com/install.sh | sh
Download the base model:
Server Terminal
ollama pull llama3:8b
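To confirm the download completed, you can list the locally installed models and optionally run a quick smoke test. Both `ollama list` and `ollama run` are standard Ollama CLI commands, but they require the Ollama service to be running:

```shell
# Verify the model was downloaded; llama3:8b should appear in the list
ollama list

# Optional smoke test: loads the model and generates a short reply
ollama run llama3:8b "Say hello in one sentence."
```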
Step 2

Increase Context Limit (Mandatory)

OpenClaw requires at least 16,000 tokens of context. Since Llama 3 defaults to 8,192, we must create a customized version:
1. Create Modelfile
nano Modelfile
FROM llama3:8b
PARAMETER num_ctx 16384
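If you prefer not to open an editor, the same Modelfile can be written non-interactively with a heredoc:

```shell
# Write the Modelfile in one step (equivalent to the nano edit above)
cat > Modelfile <<'EOF'
FROM llama3:8b
PARAMETER num_ctx 16384
EOF
```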
2. Generate the model
Server Terminal
ollama create llama3-agent -f Modelfile
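To check that the custom model picked up the parameter, `ollama show` prints a model's configuration; the exact output layout varies by Ollama version, but the raised context setting should be listed:

```shell
# Inspect the new model; num_ctx 16384 should appear in the parameters
ollama show llama3-agent
```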
Step 3

Register Agent in OpenClaw

Add a new agent and use the following interactive settings:
Server Terminal
openclaw agents add main
Prompt                     Input / Selection
Configure model/auth?      Yes
Select AI Provider         Ollama (or Custom)
Model Name                 llama3-agent
Base URL                   http://127.0.0.1:11434
API Key                    (leave blank)
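Before registering the agent, you can sanity-check the Base URL. Ollama's HTTP API exposes a `/api/tags` endpoint that returns the installed models as JSON, so a plain curl against it confirms the service is reachable on that address and port:

```shell
# Should return JSON listing llama3-agent among the installed models
curl -s http://127.0.0.1:11434/api/tags
```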
Step 4

Launch Gateway

Restart the service to load the new configuration:
Gateway Force Start
openclaw gateway --force
Open the TUI for your first chat test:
Chat Interface
openclaw tui

Setup Complete

Llama 3 is now integrated with a 16k-token context window. If you run into memory issues, restarting the Ollama service unloads cached models and frees memory.
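On Linux, the official install script registers Ollama as a systemd service, so restarting it is a single command (assuming the default service name `ollama`):

```shell
# Restart the Ollama service to unload models and free memory
sudo systemctl restart ollama

# Confirm the service came back up
systemctl status ollama --no-pager
```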

Next Step: Security