Vibe Coders are the equivalent of those artists who paint by pouring a bucket of unmixed paint onto a spinning canvas. The randomness between the artist's intent and the result limits the work's utility (and possibly the degree to which it can be _their_ expression).
It's not reproducible, and it's not really editable. They can spin the canvas again and get something similar. In programming I want unit tests... I need to know this will always, consistently, produce the correct result. I want to edit it, optimise it, and use those tests to verify that it still behaves correctly.
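To make that concrete, here's a minimal sketch of the contract I mean (the `slugify` helper is hypothetical, not from any real codebase): once behaviour is pinned down by tests, the implementation can be regenerated, edited, or optimised, and the tests verify nothing changed.

```python
import re

# Hypothetical helper, purely illustrative: behaviour pinned down by
# tests, implementation free to change underneath.
def slugify(title: str) -> str:
    """Lowercase the title and collapse runs of non-alphanumerics into '-'."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# pytest-style tests: regenerate or hand-optimise slugify() all you
# like; these must keep passing, consistently, on every run.
def test_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_idempotent():
    # Running it twice must not change the result.
    assert slugify(slugify("Hello, World!")) == slugify("Hello, World!")

def test_degenerate_input():
    assert slugify("---") == ""
```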
I still can't see any context where a vibe coder can do the thing I'd hire an actual coder to do.
They can do things where you can't afford an actual coder but want to look like you can, like personal websites, but that means MUCH lower salaries.
To add to this train of thought, I am under the impression that AI will bring customizations, personalization and integrations to a whole new level. What was viewed as both risky and costly will become BAU. Applications and solutions will become a starting point, rather than a final product.
I think I'm most concerned about the degradation of quality, and especially the stuff that more experienced programmers will have to tear down and fix completely. I'm still not entirely convinced that generated code saves that much time; rather, it makes code review harder and moves bugs to runtime.
Legacy software engineer might become a trendy thing.
In the initial turbulent phase of not-so-well-thought-through AI software, wide areas of the industry may create software without preserving the mental model behind it, as was the case with "all the engineers who knew the system left".
I feel like that's the more concerning part, when it's *subtly* wrong, and more likely to be missed during code review. And there's no thought process to refer to for how the codebase evolved, if a single code generation call produces "one big commit" (or has that gotten better?)
Nope. And even if you get it to provide the thought process, while it might be helpful to the generator as it has more time to think, it has nothing to do with the actual thought process that produced the code.
It's more of a justification than a thought process, I believe - acting (generating) first and coming up with a "matching thought process" later.
I think that some implementations do a bit of internal back-and-forth, making the LLM talk to itself, kinda? But that's a sorry excuse for a thought process.
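For what it's worth, my rough mental model of that loop looks something like this (a sketch only; `ask` is a stand-in for whatever completion API, not a real library call):

```python
from typing import Callable

# Rough sketch of a generate -> critique -> revise loop. `ask` is a
# stand-in for any chat-completion call; no real API is assumed here.
def generate_with_self_critique(ask: Callable[[str], str],
                                task: str, rounds: int = 2) -> str:
    draft = ask(f"Write code for this task:\n{task}")
    for _ in range(rounds):
        critique = ask(f"List bugs or flaws in this code:\n{draft}")
        if "no issues" in critique.lower():
            break  # the model claims it's clean; stop iterating
        draft = ask(
            f"Rewrite the code to address these issues:\n{critique}\n\n"
            f"Original code:\n{draft}"
        )
    # Note: the critique text is typically thrown away, so the "thought
    # process" never lands anywhere a reviewer could consult it.
    return draft
```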
The general trend with LLMs is that they're good at creating *convincing* output, not strictly correct output. Lower barriers to entry are usually good, but I'm still concerned about whether this unverified generated code will end up in critical places just because it looks convincing.
On the subject of not knowing the language syntax, I'm reminded of an anecdote about a company somewhere that tried to introduce "prompt engineers" for concept art. When asked to make a small edit, add or remove a character, etc., the best they could do was generate a completely new and different image.
Thanks for this perspective; it’s refreshing to see a more nuanced take here.
I think this highlighted (for me) some contradictions I struggled with: not all software needs to be the highest-quality artisanal stuff, and there are too many engineers who look down on others…
We can all coexist and work on solving our own problems and build software that helps others. Sure, salaries for the “top end” of developers will be even more difficult to achieve, but with more people getting into the market maybe that’s ok.
The existence of cars for non-pros does not destroy the careers of race car drivers. Nor do car manuals destroy the careers of mechanics.
I like the take, though not sure I buy it, yet.
Most code I get from LLMs is good looking but wrong in ways that are unlikely to be spotted at first glance.
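A hypothetical example of the shape these bugs tend to take (not from any real codebase), the kind that reads fine and even passes a quick happy-path check:

```python
# Plausible-but-wrong code: it looks clean and survives a casual review.
def drop_expired(sessions: list[dict]) -> list[dict]:
    """Remove expired sessions in place."""
    for s in sessions:
        if s["expired"]:
            sessions.remove(s)  # bug: mutating the list while iterating over it
    return sessions

# Works when expired sessions are isolated...
print(drop_expired([{"expired": True}, {"expired": False}]))
# -> [{'expired': False}]

# ...but two adjacent expired entries make the iterator skip the second:
print(drop_expired([{"expired": True}, {"expired": True}]))
# -> [{'expired': True}]  (an expired session quietly survives)
```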