"AlphaEvolve enhanced the efficiency of Google's data centers, chip design and AI training processes — including training the large language models underlying AlphaEvolve itself."
"AlphaEvolve has helped to improve the design of the company’s next generation of tensor processing units — computing chips developed specially for AI"
Pretty sure that's how you get Skynet. Letting AI be involved in its own improvement means we know even less of what goes on inside the black box.
Yeeep. The hallucination engine can be a good thing, but it isn't meaningfully leverageable outside of axiomatic contexts that can be deterministically verified.
Seems like they basically said "this thing is great at hallucinating stuff that sounds plausible, let's test thousands of hallucinations to see if there's something viable there" and it did work better than trying stuff at random.
It matches what is publicly demonstrated by projects where similar setups compete in math competitions with known answers, differing only in scope and scale.
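(For context: those competition setups generally have the model emit proofs in a machine-checkable language like Lean, where the kernel is the deterministic verifier. A toy illustration, not anything from the actual systems:)

```lean
-- The Lean kernel either accepts this proof term or rejects it outright;
-- there is no judgment call involved, which is what makes the filtering work.
theorem two_add_two : 2 + 2 = 4 := rfl
```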
Definitely sounds feasible, if ludicrously expensive, to spin the wheel of delusions until you can verify cohesive answers.
That's actually not too different from what's involved in our own truth testing / derived heuristics. (Not to draw a parallel between LLMs and human consciousness at all; the parallel is in the epistemic process.) We don't have any particular access to the truth in a given case. It's hard work.
I am highly skeptical of any claim of LLMs being able to evaluate the truth-value of a prompt, or even of their own output. Transformers do not perform epistemic processes, after all.
Yeah. You should be. That's the trick: automatic proof verifiers are old tech that works on axiomatic principles. The LLM generates novel output in a format the verifier recognizes, then the verifier tests it for cohesiveness to weed out the noise.
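Roughly, a toy version of that loop looks like this. Everything here is a stand-in (the "LLM" is just a random expression generator and all the names are made up), but the shape is the same: sample plausible candidates, then let a deterministic check weed out the noise.

```python
import random

OPS = ["+", "-", "*"]

def propose() -> str:
    # Stand-in for the LLM: sample a plausible-looking one-op expression.
    return f"x {random.choice(OPS)} {random.randint(1, 9)}"

def verify(candidate: str, spec) -> bool:
    # Deterministic, axiomatic check: the candidate must satisfy every
    # (input, output) pair in the spec, or it's rejected as noise.
    try:
        fn = eval(f"lambda x: {candidate}")
        return all(fn(x) == y for x, y in spec)
    except Exception:
        return False  # malformed candidates are just more noise

spec = [(2, 6), (5, 15)]     # looking for some f with f(2)=6, f(5)=15
for _ in range(10_000):      # spin the wheel of delusions
    c = propose()
    if verify(c, spec):
        print("verified:", c)  # e.g. "verified: x * 3"
        break
```

The point of the sketch: the generator can be arbitrarily unreliable, because nothing unverified ever gets through. The expensive part is just how many spins of the wheel you need.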
No thoughts of any value. Just that it'll probably happen on some timescale and then who knows what comes next? I've read books arguing various sides, but nothing fully convincing.
recursive self improvement goes brrr
it is very much not the classic "AI improving its own code and ascending to godhood" scenario
What are your feelings towards ASI? Sorry if it’s in your books, I’ve only followed the comics