The problem is that in the military world the environment is highly fluid, with novel situations arising all the time, some involving complex mixtures of practical and ethical judgment. So many edge cases. So how do you test and evaluate in advance?
Comments
The problem with many hallucinations is that when they happen, even in the most advanced chatbots, they happen unpredictably and catastrophically. Sure, humans can also fail catastrophically, but the solution for a human operator in combat should not be to bullshit their way out of it.
But isn't all failure individually unpredictable? If you (or they) could predict it, they wouldn't do it. And again, whether human operators in combat do or do not bullshit their way out of it is an empirical question.