I do, and I know that LLMs have a tendency to create new and potentially dangerous things that look innocuous to both humans and human-made algorithms. There is always a chance that something approved by experienced humans and even the best code-checking algorithms is going to be vulnerable or poisoned by the LLM. Not to mention LLMs have somewhat predictable outputs (not predictable by humans, but by other algorithms), meaning that while this may be unproblematic for a time, these patterns may be picked up by nefarious parties and exploited, making every vibe-coded shit program vulnerable.
You are absolutely right, that happens. Something becoming more common is "slopsquatting", where attackers publish real, malicious packages under the library names LLMs tend to hallucinate, so the made-up dependency resolves and installs without anyone noticing.
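To make that concrete, here's a minimal sketch of one way to guard against slopsquatting before `pip install` ever runs: refuse any dependency a human hasn't explicitly vetted. The file names (`allowlist.txt`, `requirements.txt`) and the script itself are just an illustration under those assumptions, not an established tool.

```python
# Hypothetical slopsquatting guard: only allow packages a human has already vetted.
# "allowlist.txt" and "requirements.txt" are placeholder file names for illustration.
import re
import sys
from pathlib import Path

def read_package_names(path: str) -> set[str]:
    """Return bare package names from a file with one requirement per line."""
    names = set()
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Drop extras and version specifiers, e.g. "requests[socks]>=2.31" -> "requests"
        names.add(re.split(r"[\[<>=!~; ]", line, maxsplit=1)[0].lower())
    return names

def main() -> int:
    vetted = read_package_names("allowlist.txt")        # maintained by humans
    requested = read_package_names("requirements.txt")  # whatever the LLM asked for
    unvetted = sorted(requested - vetted)
    if unvetted:
        print("Refusing to install packages nobody has vetted:")
        for name in unvetted:
            print(f"  - {name}")
        return 1
    print("Every requested package is on the allowlist.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The point isn't this particular script; it's that the decision about what gets installed stays with a human-maintained list instead of with whatever the model happened to autocomplete.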
Again, code reviews by what? Humans and human-created algorithms? Sure, that'll work for a time, but the progress so far is exponential, meaning the latest LLMs (or whatever newer form of AI chatbot replaces them) will produce highly problematic code that neither humans nor the best code-checking algorithms can catch.
And sure, LLMs are human-made too, but for how long? This process introduces an absurd myriad of problems. There's an existing security idea, security via obsolescence, that is actually relevant here: use known-good security standards instead of relying on the slop machine to half-ass both your job and your company's security. The more vibe coding happens, the more obscure the old standards become, and the more secure they get by extension.
Comments
That's why code reviews exist.