It wasn't that crazy to get WSL2 GPU acceleration sorted out for local vision-model fine-tuning and GPU-accelerated local LLM chat with a quantized Llama. Pretty sure I'm never buying a Mac again.
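For anyone attempting the same setup, a minimal sanity check that WSL2 actually exposes the GPU is just asking PyTorch about it. This is a sketch under assumptions: the NVIDIA driver installed on the Windows side, a CUDA-enabled torch build inside WSL2, and the helper name is mine.

```python
def gpu_status() -> str:
    """Report whether a CUDA device is visible from inside WSL2."""
    try:
        import torch  # assumes a CUDA build of PyTorch is installed
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        # Name of the first visible device, e.g. an RTX card passed through
        return f"CUDA OK: {torch.cuda.get_device_name(0)}"
    return "no CUDA device visible"

print(gpu_status())
```

If this prints "no CUDA device visible" inside WSL2, the usual culprit is a CPU-only torch wheel rather than a driver problem, since WSL2 picks up the Windows-side NVIDIA driver automatically.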
