How to Run DeepSeek R1 Locally with OpenClaw
Learn how to run the powerful DeepSeek R1 model locally on your machine using OpenClaw and Ollama. Privacy, speed, and zero cost.
Quick Answer
You can run DeepSeek R1 locally with OpenClaw by installing Ollama, pulling the `deepseek-r1` model, and configuring OpenClaw to use the Ollama provider. This gives you a powerful, private AI agent with no API costs.
DeepSeek R1 has taken the AI world by storm. Its performance rivals top-tier proprietary models like GPT-4o and Claude 3.5 Sonnet, yet it’s open weights and can run on your own hardware.
When you combine DeepSeek R1 with OpenClaw, you get the holy grail of personal AI: a highly intelligent, fully autonomous agent that runs entirely on your machine, with zero API costs and total privacy.
Here’s how to set it up.
Prerequisites
- Hardware: A Mac with Apple Silicon (M1/M2/M3) or a PC with an NVIDIA GPU (8GB+ VRAM recommended for 7B/8B models).
- Software: OpenClaw installed (see the installation guide).
Step 1: Install Ollama
Ollama is the easiest way to run local LLMs. If you haven’t already:
- Download Ollama from ollama.com.
- Install and run it.
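Once installed, you can confirm the Ollama server is up before moving on. It listens on port 11434 by default and exposes a `/api/tags` endpoint listing installed models. This small helper is illustrative, not part of OpenClaw:

```python
# Check whether the local Ollama server is reachable (default port 11434).
import urllib.request
import urllib.error

def ollama_is_running(base_url: str = "http://localhost:11434") -> bool:
    """Return True if the Ollama server answers on /api/tags."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if ollama_is_running():
    print("Ollama is up")
else:
    print("Ollama is not reachable - is the app running?")
```

If this prints the failure message, start the Ollama app (or run `ollama serve`) before continuing.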
Step 2: Pull DeepSeek R1
Open your terminal and pull the DeepSeek model. The “distill” versions are great for most consumer hardware:
# For 8GB RAM/VRAM (Fast, good reasoning)
ollama pull deepseek-r1:7b
# For 16GB+ RAM/VRAM (Better reasoning)
ollama pull deepseek-r1:14b
# For 32GB+ RAM/VRAM (Excellent reasoning)
ollama pull deepseek-r1:32b
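The RAM tiers above follow a rough rule of thumb: a 4-bit quantized model needs on the order of 0.5–0.75 GB of memory per billion parameters, plus headroom for the context window. The 0.6 figure below is an assumption for illustration, not an official number:

```python
# Back-of-the-envelope memory estimate for 4-bit quantized models.
# 0.6 GB per billion parameters is a rough assumption, not an official figure.
def approx_memory_gb(billions_of_params: float, gb_per_billion: float = 0.6) -> float:
    return round(billions_of_params * gb_per_billion, 1)

for size in (7, 14, 32):
    print(f"deepseek-r1:{size}b -> ~{approx_memory_gb(size)} GB")
```

The estimates line up with the tiers above: the 7B fits comfortably in 8 GB, the 14B wants 16 GB, and the 32B needs a 32 GB machine once context overhead is included.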
Step 3: Configure OpenClaw
Run the OpenClaw configuration wizard:
openclaw config
Select Ollama as your provider, and enter the model name you just pulled (e.g., `deepseek-r1:7b`).
Or, edit your config file directly (~/.openclaw/config.json):
{
  "llm": {
    "provider": "ollama",
    "model": "deepseek-r1:7b",
    "baseUrl": "http://localhost:11434"
  }
}
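If you edit the file by hand, it’s worth sanity-checking that it parses before restarting OpenClaw. A minimal check, assuming the schema shown above (the inline string stands in for the file contents):

```python
import json

# Stand-in for the contents of ~/.openclaw/config.json (schema as shown above).
config_text = '''
{
  "llm": {
    "provider": "ollama",
    "model": "deepseek-r1:7b",
    "baseUrl": "http://localhost:11434"
  }
}
'''

config = json.loads(config_text)  # raises json.JSONDecodeError on malformed JSON
assert config["llm"]["provider"] == "ollama"
print(f"Using {config['llm']['model']} via {config['llm']['baseUrl']}")
```

A stray trailing comma or missing quote will raise `json.JSONDecodeError` here instead of silently breaking OpenClaw at startup.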
Why DeepSeek + OpenClaw?
1. Cost
Running OpenClaw with DeepSeek is free. You don’t pay per token. You can leave your agent running 24/7 to monitor your email or Discord without worrying about a surprise bill.
2. Privacy
DeepSeek R1 running via Ollama never sends your data to the cloud. OpenClaw processes everything locally. Your calendar, emails, and files stay on your disk.
3. “Reasoning” Capabilities
DeepSeek R1 is a “reasoning” model (like OpenAI o1). It “thinks” before it answers, making it exceptionally good at complex tasks like:
- Code Audit: Ask OpenClaw to review a local git repo.
- Data Extraction: Have it parse messy PDFs or websites.
- Planning: Ask it to plan a travel itinerary based on your calendar constraints.
Performance Tips
- Context Window: DeepSeek supports large contexts. OpenClaw automatically manages context, but larger windows require more RAM.
- System Prompt: OpenClaw’s default system prompt works well, but for DeepSeek, you might want to encourage “chain of thought” by adding “Let’s think step by step” to your custom instructions.
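For context size specifically, Ollama lets you raise the window per request via the `num_ctx` option. OpenClaw manages this for you, but the raw API shape is useful to know; here is an example payload for Ollama’s `/api/generate` endpoint:

```python
import json

# Example request payload for Ollama's /api/generate endpoint.
# num_ctx raises the context window; larger values need more RAM.
payload = {
    "model": "deepseek-r1:7b",
    "prompt": "Let's think step by step. Summarize this repo's README.",
    "options": {"num_ctx": 8192},
    "stream": False,
}
print(json.dumps(payload, indent=2))
```

POSTing this to `http://localhost:11434/api/generate` runs the model with an 8K context instead of the default.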
Conclusion
The combination of OpenClaw’s tools (browser, file system, apps) and DeepSeek’s intelligence is powerful. You have a free, private employee that works tirelessly on your machine.
Ready to try?
npm i -g openclaw
openclaw onboard