Could one theoretically install DeepSeek on a home server, thereby having a completely private LLM?
Oh, the possibilities (if so). The future seems interesting for those who've invested in a smart home.
Comments
There are several ways, but the simplest: install Ollama (https://ollama.com/) and pick a model that fits your hardware. A proper GPU and lots of VRAM help.
https://ollama.com/library/deepseek-r1
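For reference, a minimal sketch of the install-and-run flow on Linux (the one-line installer is from ollama.com, and the deepseek-r1 tags are from the library page above; pick a tag sized to your VRAM):

    # Install Ollama (Linux; macOS/Windows installers are on ollama.com)
    curl -fsSL https://ollama.com/install.sh | sh

    # Pull and chat with a distilled DeepSeek-R1 model; smaller tags
    # (1.5b, 7b, 8b, 14b) fit consumer GPUs, 32b/70b need serious VRAM
    ollama run deepseek-r1:14b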
https://chat.deepseek.com/downloads/DeepSeek%20Privacy%20Policy.html
For a front end you can use the default CLI, or https://chatboxai.app/en, or https://github.com/gluonfield/enchanted, or (what many like) https://openwebui.com, etc.
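All of those front ends just talk to Ollama's local HTTP API (port 11434 by default), which you can also hit directly. A quick smoke test, assuming the deepseek-r1:14b tag from the earlier example:

    # One-shot generation against the local Ollama server
    curl http://localhost:11434/api/generate -d '{
      "model": "deepseek-r1:14b",
      "prompt": "Why is the sky blue?",
      "stream": false
    }'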
Some config for a home network:
https://github.com/ollama/ollama/issues/703
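In short, that issue comes down to binding Ollama to all interfaces instead of just loopback. A sketch for a systemd-based Linux install (the 192.168.1.50 address is a placeholder for your server's LAN IP):

    # By default Ollama only listens on 127.0.0.1:11434.
    # To reach it from other machines on your LAN, set OLLAMA_HOST:
    sudo systemctl edit ollama.service
    # ...and add under [Service]:
    #   Environment="OLLAMA_HOST=0.0.0.0:11434"
    sudo systemctl restart ollama

    # Then, from another device on the network, list the installed models:
    curl http://192.168.1.50:11434/api/tags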
To run that particular model, try something like:
    ollama run https://hf.co/bartowski/deepseek-r1-qwen-2.5-32B-ablated-GGUF:Q5_K_M
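For rough sizing: a Q5_K_M quant of a 32B model is on the order of 23 GB, so it really wants a 24 GB card; anything less and layers spill into system RAM and inference slows down considerably. The smaller distilled tags are the safer bet on typical home hardware.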
I wonder how long until someone creates an open-source way to speak with it by voice, or with any other open-source LLM.