That’s a wild backdoor…
Reposted from
Janelle Shane
1. LLM-generated code tries to run code from online software packages. Which is normal but
2. The packages don’t exist. Which would normally cause an error but
3. Nefarious people have made malware under the package names that LLMs make up most often. So
4. Now the LLM code points to malware.
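If you're wondering how step 3 even works: pip will happily install whatever happens to be registered under a name, so the only real defense is vetting the name before it ever reaches your install command. Here's a rough sketch of that kind of check in Python, using the public PyPI JSON API; the 90-day "too new to trust" threshold and the function name are just illustrative choices, not an established rule.

```python
import json
import sys
import urllib.error
import urllib.request
from datetime import datetime, timezone

# Illustrative threshold: treat very recently created packages as suspect.
MIN_AGE_DAYS = 90

def check_package(name: str) -> str:
    """Classify a package name as 'missing', 'suspect', or 'ok' using PyPI metadata."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return "missing"   # the name the LLM suggested doesn't exist on PyPI
        raise

    # Find the earliest upload time across all releases of the package.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        return "suspect"       # name is registered but no files were ever uploaded

    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    return "suspect" if age_days < MIN_AGE_DAYS else "ok"

if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        print(pkg, check_package(pkg))
```

A 404 there is exactly the "package doesn't exist" case from step 2, and a very young package sitting under a plausible-sounding name is the step 3 scenario.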
Comments
Developers have, for many years now, just installed random packages from pip/composer/npm, etc. And... run them, with full access to the disk, given the lack of any containerisation and the lack of any IT hygiene.
Don't people check their code?
Don't people read things before they run it?
AI for coding has its ethical problems, but the fact that folks don't check their build before running it... first-order negligence.
I'm building up some simple tools at work. I'm using AI to build out some of the functional logic in my field, but I treat everything from MS Copilot as volatile. I'm researching modules before adding them, and making sure I understand the loops before I run anything.
I built paranoia into my workflow to make sure I'm checking before running.
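Concretely, the "check before running" step can be as simple as diffing whatever got requested against a list you've already vetted by hand. A minimal sketch of that idea; requirements.txt and approved-packages.txt are just placeholder file names for whatever you actually use:

```python
from pathlib import Path

# Placeholder file names: the list an LLM (or a teammate) produced,
# and the allowlist of packages you've already vetted by hand.
REQUESTED = Path("requirements.txt")
APPROVED = Path("approved-packages.txt")

def package_names(path: Path) -> set[str]:
    """Extract bare package names, ignoring comments, blanks, and version pins."""
    names = set()
    for line in path.read_text().splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        # Strip version specifiers and extras: 'requests[socks]>=2.0' -> 'requests'
        for sep in ("[", "=", "<", ">", "~", "!", ";", " "):
            line = line.split(sep, 1)[0]
        names.add(line.lower())
    return names

unvetted = package_names(REQUESTED) - package_names(APPROVED)
if unvetted:
    print("Review these before installing anything:")
    for name in sorted(unvetted):
        print("  -", name)
else:
    print("All requested packages are already on the approved list.")
```

It won't catch a typosquatted-but-real malicious package on its own, but it does force the "research the module first" step before anything new gets installed.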
I think I need to make sure that everything I make has distinct names.