I benchmarked DeepSeek R1 14B (GGUF, Q6_K) on my laptop with llama.cpp:

3.7 t/s with --numa distribute
2.5 t/s without --numa

CPU: Intel Core i5-1340P, all threads maxed out
RAM: 2x16 GB DDR5, ~7 GB in use
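For reference, the two runs above can be reproduced with something like the following. This is a sketch, not the exact commands from the post: the model filename, prompt, thread count, and token count are my assumptions.

```shell
# NUMA-aware run (the post's 3.7 t/s case).
# Model filename, -t, -n, and the prompt are assumptions, not from the post.
./llama-cli -m DeepSeek-R1-Distill-Qwen-14B-Q6_K.gguf \
    -t 16 -n 256 -p "Explain NUMA in one paragraph." \
    --numa distribute

# Baseline (the post's 2.5 t/s case): identical command without --numa.
./llama-cli -m DeepSeek-R1-Distill-Qwen-14B-Q6_K.gguf \
    -t 16 -n 256 -p "Explain NUMA in one paragraph."
```

llama.cpp prints eval timings (tokens per second) at the end of each run, so the two invocations can be compared directly.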

Going to try Q5_K and Q4_K next.

#buildinpublic
