With just about every piece of technology up until now, when it isn't working correctly, that failure is apparent to the user in some way.

The reason LLMs and today's versions of "AI" scare me is that users believe they're working correctly even when they're putting out nonsense.
