It would be wildly foolish to rely on any sort of AI to make decisions in tense international situations.
But using an LLM to do so is terrifyingly stupid. It's a category error to think that a machine designed to generate plausible text would be good at reasoning about existential risk.
Comments
LLMs with zero interpretive ability (or understanding of physics) are going to find so many more wrong turns … And their work will be opaque and mysterious, so people will bow to the black box.
https://bsky.app/profile/johnmashey.bsky.social/post/3lo2oom32ic2x
So basically we'll all be able to mind-read your government.
(The answer unfortunately is yes, but the limbo bar is getting pretty close to the floor)