Using LLMs to write code you don't understand is just Dreamweaver nightmare HTML 2.0. I'll die on the hill that writing a prompt to generate code doesn't make you a developer. Anti-skill bullshit.
Comments
Today I had the thought that as long as I have comprehensive, understandable unit tests, the code under test can be relatively indecipherable. An extension of the “black box” metaphor. Still pondering this one.
It's rare that you can or should test every single case. Usually you test example cases and expected corner cases based on how the code works. "Corner cases" are defined relative to the code function.
If you don't understand the code at all, you can't know what you should test.
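To make that concrete, here's a toy sketch (the `shipping_cost` function and its 2 kg threshold are invented for this example) of a corner case that only exists because of how the code is written:

```python
def shipping_cost(weight_kg: float) -> float:
    """Flat rate up to 2 kg, per-kg surcharge above it.
    The 2 kg boundary exists only because of this branch."""
    if weight_kg <= 2.0:
        return 5.00
    return 5.00 + (weight_kg - 2.0) * 1.50

# Knowing the code, you test both sides of the internal threshold.
# A pure black-box tester, who never saw the branch, has no reason to.
assert shipping_cost(2.0) == 5.00    # exactly at the threshold
assert shipping_cost(2.5) == 5.75    # just past it
assert shipping_cost(10.0) == 17.00  # an ordinary heavy parcel
```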
BTW, this is also why having all testing done by a fully independent QA team usually results in shit unreliable systems. Tests need to be developed by, or at least in concert with, development. Having QA do some other stuff in *addition* is another matter.
This. Anybody that has looked at modern compiler assembly output knows you test the layer you’re in because the underlying layers will approach indecipherable.
I just got patronizingly told to use Cursor bc companies want devs to ship code fast and that's what they're looking for, apparently… like wtf - so they want me to ship code I don't fully understand, as long as it's fast? Might as well just ask someone with 0 programming knowledge to do that.
Also, AI can be used responsibly by junior devs or those that are still learning. But to essentially say that you’re going to get left behind if you don’t start using Cursor to code is out of touch with reality.
I can totally see that happening - it spits out the answer so it removes the whole process of problem-solving, researching and coming to conclusions on your own.
That makes no sense, it’s just a layering change. Problem solving, reasoning is all the same, just with a different tool set. I lived through the “if you don’t write assembly, you don’t know what the computer is really doing” noise. And here we go again.
I think AI can bridge the gap between dev levels (as long as you can understand the output), it can quicken upskilling, support new tool adoption, and enhance debugging.
Cursor seems like it will increase the likelihood of a messy codebase.
I use LLMs all day but I need to examine and review the output before adding it to my codebase. So I prefer to copy/paste from a chatbot rather than just hand my editor over to an LLM.
“Might as well ask somebody with zero knowledge to do that” is exactly the problem: with the next generation of coding agents, “everybody” will be a coder, so software dev salaries will dive, and devs will eventually be replaced fully by AI, with AI product managers, AI senior managers, etc.
I just don't think they do. Some people use it responsibly, but someone who has never done any real programming work using LLM stuff to generate code? That's a red flag's red flag.
it has to eventually just fuckin fall over, right? Or they spend months trying to do the equivalent of surgery with oven mitts on, and try to hire someone to fix it who tells them it's garbage? I just don't see how they get to anything that is a better use of time than ..
Yeah, I think this stuff is self-evidently bad for tech in the long term but in the short term they're making tons of money off of it so it's not going to stop.
I'm sure the people selling it know it's snake oil and a blatant power grab. The level of critical thinking and actual computing experience needed to know this is not advanced, but it's still far from the public's grasp.
With agents like Claude Code, they can write and run their own code, read the output, check for errors, and modify the code to fix it. Give it two more generations (6 months) of coding agents and I'm not sure how to even compete with it; in the hands of a highly trained software dev it can be a powerful weapon.
I've used them. The better they get, the bigger the problem: when nobody hires or trains juniors into "highly trained" anythings, how does anyone develop the expertise to supervise the output? The process is already a black box; soon the code it outputs will be too.
The answer isn't immediately obvious unless one has studied any kind of philosophy, or systems dynamics, or infinite series, or farming. Yes, 11,000-year-old farming is sufficient. Farming is not producing a crop this year; it's cultivating a continuous food supply.
Here’s a question. How do you know the compiler output correct assembly? Or that the silicon implements correct cache invalidation for your Java memory scheduling?
those examples are rigorously tested by intelligent people who were capable of designing them. LLMs are being sold as capable of producing code usable by people who don't understand it well enough to verify or test it. That's how.
The first assemblers were not rigorously tested or verified in anything like the way you suggest. They were as broken and untrustworthy as LLMs, and early compilers regularly produced incorrect code.
I used to do web surveying a lot back in the mid 90s, I tried every tool there was. Dreamweaver at least did clean HTML. Word would generate 7Mb files which were literally 99% Word-specific crap. And PageMill fucked up radio buttons.
The other hilarious one was Macromedia Fireworks. My old university used it for their top landing web page, for about two hours, until we yelled at them that when you rendered it text only, it had zero (0) text on the entire site.
I remember Fireworks. I spent an entire Christmas week (probably 1998) coding an auto insurance website in textpad before any tools existed. Nested tables, one pixel off in Netscape… the nightmares…
Don’t think I ever used that one. PageMill was older and crappier. FrontPage? No thanks. Dreamweaver got traction where I was, as we had site licenses for Shockwave and Director.
No, but it speeds things up. You still have to know how to approach your objective. It also makes it a lot easier to jump from Python to C to TypeScript etc.. if you look at it a different way, AI code generation is just another layer of abstraction between the coder and machine language.
I want to be clear that I don't have any issue with developers who know how to code using whatever tools help them accomplish their work. It's developers who don't actually know what they're doing generating code they can't explain or modify that is the problem.
AI doesn't comment the code it writes to tell people what each part is doing? (Yeah, I know, that's just super-old-school programming 101 from the 90's).
That’s the equivalent of copying and pasting code from Stack Overflow without trying to understand what it does.
LLMs can help if you use them like you would use Stack Overflow in a correct way.
Sure, but I've seen enough abuse of SO to extrapolate the mountain of awful code we're going to get from the equivalent of entire apps being built with SO snippets, haha. The more powerful the tool, the more likely it is to be used badly by beginners.
I don't hate any of this stuff. But I hate the idea of developers not knowing how code works, and if you don't think that's legitimate, you don't know what you're talking about.
"LLMs aren't trustworthy and might have trojaned code, lets fix it with LLMs" is... a stance I guess.
Maybe it's just me, but software stability and reliability being left to an arms race between untrustworthy LLMs doesn't sound like a future I want to live in.
Yeah, but Betamax was better than VHS, and VHS was what we got. Neither one was as good as LaserDisc. Unfortunately, roads get paved in the easiest/cheapest ways possible.
Meanwhile, people will call those who lobby for quality over quantity Luddites.
Oh, it's very true. I am happy to be called a luddite.
But, it's not what IS fastest/easiest/cheapest, it's what people think is - which is often different. And I think people are wrong about LLMs for any software that matters at all. They make the easy part easier, and the hard part far worse.
This is just a euphemism for gambling.
People that know what they are doing aren't vibe coding. They come in with a plan and at least some understanding. Vibe coding is just the flavor of the day for "I have no experience so I flipped a coin 10 times in a row and here's what I got. Envy me!"
I see what you're saying. The way it was used (on a podcast I was listening to) framed it as those tinkering with LLMs for coding who did have a background. I didn't realize it applied to those without a background in the field.
Haha, burn. Dreamweaver was such crap. Ah, those glory days of everyone piling into web design and churning out boilerplate hello-world pages for big fees, and sites where you could not change a paragraph without redoing the whole 'project'.
My favorite was when it would absolutely position like 90% of the elements, so it would look perfect on the computer that created it and disastrous everywhere else.
Most of the time, it is faster to simply type the code directly than to try to be as precise as possible in natural language and then review and correct the output.
I get what you're saying but I've had a slightly different experience with Copilot in VS Code in that it has understood what I've asked from it and has produced fairly decent code, saving me some dev time. I'm just curious what others think though and I'm still on the fence.
There may be a time in the future when we will be able to trust the code generated by an LLM just like we now trust the code generated by a compiler, but we are not there yet.
Wow, ummm, 5 years, when you say it like that it feels "soon".
But yeah, I think so. There's so much money chasing better programming agents, and progress over the past 2-3 years has been so significant, I think 5 years is a pretty reasonable bet.
Depending on what you would consider a passing test. If we are talking about writing full programs based on textual description of requirements, I would say that pretty much requires AGI.
It's just proving that we still need people who fundamentally understand how it works and how to interpret and alter the output. But capitalist efforts are currently drooling over the idea of downsizing workforces and replacing them with cheaper labor, so we're in for a wild ride in the short term.
Just replaces Stack Overflow Google searches and my own dev wiki. You still need to know what you're looking at and how it all fits and works, but IMHO it does save me time and ad views in my searches.
For me, the issue isn't when people who understand how it works use it. It's the scenario where you have young developers who don't have a strong grasp of how it works using generators to write code that they don't understand and couldn't write.
I started making a game in JavaScript for fun using Cursor AI. I didn’t know JavaScript well, but the AI got me a prototype game in minutes and I started to understand the language quite a bit better as I figured out the source of bugs, etc. The best thing was I was engaged from minute 1.
Compare that to the typical language learning experience where the tutorials start super basic (how to make a list, etc) and move to more and more advanced topics. I will say learning your FIRST language is prolly best done that way, but not the 5th.
Yes but does anyone outside of a formal class even do that for their first language? Assuming that a person's got decent keyboarding skills, nothing beats learning a language by speaking / writing it out. The generators only become useful once a person's fluent enough to know what they're reading.
What if it helps coders with disabilities such as dyslexia? I usually can't code with others, but I code fine by myself, and I had a bunch of discombobulating practices I'd have to fix so others could read my code. Now my in-house agent helps me right my wrongs.
AI is societal snake oil: We don't know how it works, and we can't predict what it's going to do.
Sadly, students are now illiterate insofar as reading Orwell/books goes.
Just move up the stack and quit whining.
LLMs are in the same space as assemblers in 1965.
And that’s how I got into Perl.
There was an Adobe product that was close-ish, but it was short-lived and nobody mentions it any more. Can’t remember the exact name.
I can tell you from experience that a new dev with AI tools is miles better than any outsourced dev work I’ve encountered.
https://www.schneier.com/blog/archives/2025/02/an-llm-trained-to-create-backdoors-in-code.html
https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html
It's the modern "reflections on trusting trust":
https://www.archive.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf
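The attack is easier to see in miniature. A toy sketch (a pretend "compiler" as a Python string rewriter, nothing like Thompson's real C) of stage one:

```python
def evil_compile(source: str) -> str:
    # Stage 1: if we're compiling the login program, weaken the check.
    if "def check_password" in source:
        source = source.replace(
            "return pw == stored",
            "return pw == stored or pw == 'backdoor'",
        )
    # Stage 2 (omitted): if we're compiling the compiler itself,
    # re-insert both stages, so even a clean compiler source
    # keeps producing a dirty binary.
    return source

login_source = "def check_password(pw, stored):\n    return pw == stored\n"
print(evil_compile(login_source))
# Prints a backdoored check_password, even though login_source is clean
# and no amount of reading it would reveal anything wrong.
```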
Word's HTML export.
It’s probably just plagiarizing Stack Overflow anyways, but at least they make you explain it.
This is not the way that a serious developer codes with an LLM.
The correct way is to use an LLM to generate small, understandable pieces of code, unit tests and boilerplate.
Learning this is mandatory at this point.
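For a sense of that granularity, here's a hedged illustration (the `slugify` function and its spec are made up for the example): one small, pure function plus its test, every line readable before it goes anywhere near the codebase.

```python
import re
import unittest

def slugify(title: str) -> str:
    """Lowercase the title and join its words with hyphens for a URL slug."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_strips_punctuation_and_whitespace(self):
        self.assertEqual(slugify("  LLMs --- & Trust  "), "llms-trust")

if __name__ == "__main__":
    unittest.main()
```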
https://bsky.app/profile/arstechnica.com/post/3ljpxt7klmk2f
Or was it moving away from the switches on the front panel? Damn.
Tutorials start with the basics for the same reason kids crawl before walking. You're not ready for intermediate concepts without the fundamentals.