I think the fallacy of LLM code assistants is that it feels like you're going fast and they're doing a lot of the work. But then you have to validate their output and correct their course, and you end up with far more work, still typing it all out anyway.
It do be addictive though.
Comments
Sure, it takes a ton of time going back and forth fixing errors, but most of the better models are good enough to figure it out eventually.
But most of my hobbies are too esoteric for them to help with.
I want a sandbox where I can let the agent run a particular command, generate test files, and fix the code until all tests pass.
This should be great for a parser-style library, where tests are simple input/output pairs.
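The loop itself is simple to sketch. Here's a minimal Python version, assuming a hypothetical `ask_model` function standing in for whatever LLM API you'd use; the actual sandboxing (container, jail, whatever) is left out.

```python
import subprocess

MAX_ITERATIONS = 5

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in: wire up your LLM API of choice here.
    raise NotImplementedError

def fix_until_green(source_path: str, test_command: list[str]) -> bool:
    """Run the test command and let the model patch the file until tests pass."""
    for _ in range(MAX_ITERATIONS):
        result = subprocess.run(test_command, capture_output=True, text=True)
        if result.returncode == 0:
            return True  # all tests passing
        with open(source_path) as f:
            source = f.read()
        # Feed the failing output back and ask for a corrected file.
        patched = ask_model(
            f"Tests failed:\n{result.stdout}\n{result.stderr}\n"
            f"Current code:\n{source}\n"
            "Return the full fixed file."
        )
        with open(source_path, "w") as f:
            f.write(patched)
    return False

# e.g. fix_until_green("parser.py", ["pytest", "tests/"])
```

The nice part for a parser library is that the test command can be dumb: feed in fixture files, diff the output against expected files, and the model never needs to understand the harness, only the failure text.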