It would be wildly foolish to rely on any sort of AI to make decisions in tense international situations.

But using an LLM to do so is terrifyingly stupid. It's a category error to think that a machine designed to generate plausible text would be good at reasoning about existential risk.
