Techno-determinism is baked into corporate AI educational materials. For example, Google's course suggests hallucinations are solvable with more data, more regularization, and more human-given context and constraints. That's faulty logic. Hallucinations are inherent to these models; improving those four areas will only make them harder to recognize.