Still using Copilot; I’ve only briefly toyed with local models. Need to give them a proper try at some point. Local models are getting better though, especially the smaller ones. I can run models requiring up to 48 GB of GPU memory, but inference becomes quite slow, so the smaller they are, the faster they run :)