Only this time it's at massive scale, with tremendous environmental impact, and with people who present themselves as geniuses claiming that the machine will actually think. At least with Eliza, we techies knew it was software.
Yeah. I understand that LLMs operate differently and all that, but they're fundamentally still "system 1" pattern matchers, even if they manage to identify and generate higher-level abstractions than just sentences and words.
But they don't do "system 2" (logic and reasoning), or even long-term memory, prioritization, or any of the other higher-level activities that go into reasoning. System 1 is very powerful, but on its own it isn't sufficient to serve as part of an actual control system.
When it comes to using one as a therapist, it's fine as an exercise in self-reflection, but even human therapists don't really know how to produce consistent results. Trusting a machine that has none of the mechanisms that give humans insight into each other has … downsides.