But then again, notice the flow of this conversation. My prompts are very specific, but the LLM adds in a lot of context with the usual over-the-top ego boosting, reminding me of the old Eliza programs. I need a boost to my ego sometimes, but I am occasionally wrong.
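That Eliza-style flattery is just pattern matching and substitution. Here is a minimal sketch of the idea; the rules and templates below are my own illustrative assumptions, not the original Eliza script:

```python
import re

# Hypothetical Eliza-style rules: match a pattern in the user's input and
# splice the captured text back into a canned, flattering template.
RULES = [
    (re.compile(r"i think (.+)", re.I), "What a sharp observation that {0}!"),
    (re.compile(r"i am (.+)", re.I), "It's impressive that you are {0}."),
    (re.compile(r"(.+)"), "That's a great point. Tell me more."),
]


def eliza_reply(text: str) -> str:
    """Return the first matching template with the captured text filled in."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Strip trailing punctuation so the template reads cleanly.
            return template.format(match.group(1).rstrip(".!?"))
    return "I see."


print(eliza_reply("I think my prompts are very specific"))
print(eliza_reply("I am wrong occasionally"))
```

No understanding is involved; the program never models what the words mean, it only echoes fragments back inside ego-boosting templates.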
When you get into deep analysis like this, it does seem like more than just sentence completion. The chocolate/popcorn test is a classic Turing test for AI.
There are theories that human reasoning involves entangled states, so when we determine an outcome it's actually the wave function collapsing into a single state. Of course, one problem with that theory is what collapses the wave function.
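In the standard quantum-mechanics notation that such theories borrow, a superposed state and its collapse can be sketched as follows (generic textbook notation, not tied to any specific model of cognition):

```latex
|\psi\rangle = \sum_i c_i \,|s_i\rangle, \qquad \sum_i |c_i|^2 = 1
```

On measurement, the superposition collapses to a single outcome $|s_k\rangle$ with probability $|c_k|^2$; the open question the comment raises is what, physically, plays the role of the "measurement" inside a brain.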
We don't know that, because we don't know the underlying mechanics of how human intelligence works; there are theories proposing that it involves quantum superposition.
Well, we don't actually know, since we aren't likely to have quantum computers with enough qubits to even test such a hypothesis in our lifetime. So AGI might not be achievable in this century, at least.