The fact that AI dev tools still haven't been integrated well with language servers in IDEs after several years is the strongest indicator that this tech has a long way to go, and that we're overestimating its short-term impact while underestimating its long-term impact.
And Agent mode does check for errors after the code is generated
Asking as a compiler dev, trying to understand how my roadmap should change…
LLM generates code -> syntax and semantic checks -> feedback fed to LLM -> repeat until exit condition met
Write code -> LLM analyzes changes, suggests where to update -> user-LLM interaction kicks off
LLM "tests" code by running it -> runs in debug mode with symbols on -> inspects live symbol values, feeds them back to LLM -> loops to emit more code (E&C, edit-and-continue), repeat
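The first loop is easy to sketch. Here's a minimal, hypothetical version in Python, where `llm_generate` is a stand-in stub for a model call and the "semantic check" is just the compiler frontend (`ast.parse`); a real setup would wire in a language server or the compiler's diagnostics instead:

```python
import ast
from typing import Optional


def llm_generate(prompt: str, feedback: Optional[str] = None) -> str:
    # Stand-in for a real LLM call: simulates one broken attempt,
    # then a corrected one once it sees compiler feedback.
    if feedback is None:
        return "def add(a, b) return a + b"  # missing colon
    return "def add(a, b):\n    return a + b"


def check_code(code: str) -> Optional[str]:
    """Syntax/semantic check; returns None if the code is OK."""
    try:
        ast.parse(code)
        return None
    except SyntaxError as e:
        return f"SyntaxError: {e.msg} (line {e.lineno})"


def generate_with_repair(prompt: str, max_rounds: int = 3) -> str:
    """LLM generates -> checks run -> feedback fed back -> repeat."""
    feedback = None
    for _ in range(max_rounds):
        code = llm_generate(prompt, feedback)
        feedback = check_code(code)
        if feedback is None:  # exit condition met
            return code
    raise RuntimeError(f"gave up after {max_rounds} rounds: {feedback}")
```

The interesting roadmap question is what `check_code` exposes: batch-style diagnostics are enough for this loop, but the second and third workflows want incremental analysis and debugger hooks, which is where compiler internals start to matter.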
I see potential downsides to the source code no longer being the medium of shared “understanding” between human and LLM.
But yeah, a lot of this could be done via tools as well. I'd argue that also has a long way to go!