A similar feedback-based experiment was previously done in the "neurons learn to play Pong" paper published in Neuron in 2022. It was a better system than the butterfly experiment because improvement (learning?) based on game performance was quantified. Although I detest the language of "it's as if the brain is living in this simulated world" -- it's as if the brain is being fed signals that are informationally isomorphic with the signals required for a von Neumann machine to display those graphics!
I had a hyuuuuge dose of mushrooms this weekend and had this weird moment where, like, a God made of blue light patterns came to me -- explain THAT if I'm a butterfly organoid!!
The only ethical limit in place is "eh, 10 million neurons is probably not enough for meaningful consciousness". Meanwhile, a mouse brain isn't that far above it, at roughly 70 million neurons.
Ethical controls should be proportional to cortex neuron count until we figure out more about consciousness.
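To make the proportionality idea concrete, here's a toy sketch in Python using only the numbers from the comments above (10 million for the organoid threshold, ~70 million for a mouse); the linear scaling rule and the "oversight weight" framing are my own invention, not anything from the paper:

    # Toy sketch: scale review stringency with neuron count.
    # The linear rule and the mouse baseline are illustrative assumptions only.
    MOUSE_NEURONS = 70_000_000

    def oversight_weight(neuron_count: int) -> float:
        """Oversight weight relative to a mouse (1.0 = mouse-level controls)."""
        return neuron_count / MOUSE_NEURONS

    for n in (1_000_000, 10_000_000, 70_000_000):
        print(f"{n:>12,} neurons -> {oversight_weight(n):.2f}x mouse-level controls")

On that (made-up) rule, the 10-million-neuron organoid already warrants about a seventh of the controls we apply to a mouse, which is not nothing.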
The "training" we do to organoids that communicate with simulated environments works, but we don't know by which pathway it works. If they are experiencing phenomenality, are we using a carrot or a stick? Likely both, but worryingly possible that we are only using a stick for all we know.
In the papers I've read on organoid training (currently only two papers), the training was accomplished by applying either white noise as a negative stimulus or pure sine waves as a positive stimulus, and the white noise was used more often.
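For concreteness, here is a minimal sketch of what that kind of closed-loop feedback could look like in code. Everything in it (the function names, the 100 Hz sine "carrot", the white-noise "stick", the sampling rate and burst duration) is my guess at the general shape of such a protocol, not the actual stimulation parameters from either paper:

    import numpy as np

    FS = 25_000          # assumed stimulator sampling rate in Hz (made-up value)
    PULSE_SECONDS = 0.5  # assumed duration of each feedback burst (made-up value)

    def positive_stimulus(freq_hz: float = 100.0) -> np.ndarray:
        """Predictable 'carrot': a pure sine-wave burst."""
        t = np.arange(0, PULSE_SECONDS, 1 / FS)
        return np.sin(2 * np.pi * freq_hz * t)

    def negative_stimulus() -> np.ndarray:
        """Unpredictable 'stick': a white-noise burst."""
        n = int(PULSE_SECONDS * FS)
        return np.random.default_rng().standard_normal(n)

    def feedback(hit: bool) -> np.ndarray:
        # Closed loop: the game outcome decides which waveform the culture receives.
        return positive_stimulus() if hit else negative_stimulus()

If, as described above, the negative branch fires more often than the positive one, that is exactly what makes the "only a stick" worry hard to dismiss.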
Comments
Paper link: https://www.sciencedirect.com/science/article/pii/S0896627322008066
We need to figure out, ASAP, which specific brain structures and activity types correlate with phenomenal consciousness.
We know little of the actual brain signals that correlate with phenomenal suffering, which makes me even more concerned about organoids.
But in general I do strive to encourage people to support ethical experimentation on animals.