It is the preferred way. The quality (/precision) is a bit lower with one of the smaller distilled DeepSeek-R1 model versions, but it is completely safe.
Several ways, but the simplest: install Ollama (https://ollama.com/) and pick a model that fits. A proper GPU and plenty of VRAM help.
https://ollama.com/library/deepseek-r1
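A minimal sketch of that route, assuming Linux (macOS/Windows installers are at https://ollama.com/download); the 8b tag is one of the distilled DeepSeek-R1 variants from the library page above, chosen here as an example:

```shell
# Install the ollama CLI and background service
curl -fsSL https://ollama.com/install.sh | sh

# Pull a distilled R1 model that fits your VRAM
# (the library also offers 1.5b, 7b, 14b, 32b and 70b tags)
ollama pull deepseek-r1:8b

# Chat interactively in the terminal
ollama run deepseek-r1:8b
```

Smaller tags run on modest GPUs or even CPU-only, at the cost of quality.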
Comments
https://chat.deepseek.com/downloads/DeepSeek%20Privacy%20Policy.html
For inference you can use the default CLI, or a client such as https://chatboxai.app/en, https://github.com/gluonfield/enchanted, or - what many like - https://openwebui.com, etc.
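All of these clients talk to Ollama's local HTTP API (default port 11434). A minimal sketch of calling it directly from Python; the helper names are mine, and it assumes a model such as deepseek-r1:8b is already pulled:

```python
import json
import urllib.request

# Ollama's non-streaming text-generation endpoint on the default port
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

def generate(model: str, prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the response text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running Ollama server):
#   print(generate("deepseek-r1:8b", "Why is the sky blue?"))
```

The GUI clients listed above are essentially convenience wrappers around this same API.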
Some config for a home network:
https://github.com/ollama/ollama/issues/703
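The issue above comes down to setting OLLAMA_HOST so the server listens on the network instead of only on localhost. A sketch for a systemd-based Linux install; binding to 0.0.0.0 on a trusted home LAN is my assumption here, not something to do on a public network:

```shell
# Ollama binds to 127.0.0.1:11434 by default. To reach it from other
# machines on the LAN, override the service environment:
sudo systemctl edit ollama.service
# ...and add in the override file:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
sudo systemctl restart ollama

# Or, for a one-off foreground run:
OLLAMA_HOST=0.0.0.0:11434 ollama serve
```

Clients on other machines then point at http://<server-ip>:11434.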
To use the model given above, try something like:
ollama run https://hf.co/bartowski/deepseek-r1-qwen-2.5-32B-ablated-GGUF:Q5_K_M