It'll have to understand the cited source in the wider context of other sources, as well as the quiet, unstated context of the prompt and the LLM's output. So it'll have to do all the intelligent work that LLMs can't do.
That makes it the actual artificial intelligence, rather than the LLM.
You can train an LLM by making it provide sources and then penalizing it when it misrepresents them. It won't be 100% accurate, but this kind of pattern matching is well within its capabilities. You can test this yourself and see that it does much better than it used to.
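A minimal sketch of the reward-shaping idea being described, assuming a hypothetical `supports()` check (here a crude word-overlap stand-in for a real entailment or verification model) and toy data. This illustrates the shape of the penalty, not any lab's actual training code:

```python
# Sketch: reward shaping that penalizes misrepresented sources.
# `supports` is a crude stand-in for a learned verification model.

def supports(claim: str, source_text: str) -> bool:
    """Very rough proxy: does the source share most of the claim's words?"""
    claim_words = set(claim.lower().split())
    source_words = set(source_text.lower().split())
    return len(claim_words & source_words) / max(len(claim_words), 1) > 0.7

def citation_reward(claims_with_sources: list[tuple[str, str]]) -> float:
    """+1 for each claim its cited source supports, -2 for each it doesn't.

    The asymmetric penalty makes misrepresenting a source costlier than
    the credit for citing one correctly, which is the training signal
    described above.
    """
    reward = 0.0
    for claim, source_text in claims_with_sources:
        reward += 1.0 if supports(claim, source_text) else -2.0
    return reward

# Toy example: one faithful citation, one misrepresented one.
pairs = [
    ("the study found 40% of participants improved",
     "In our trial, 40% of participants improved on the primary measure."),
    ("the study found the drug cures all cases",
     "In our trial, 40% of participants improved on the primary measure."),
]
print(citation_reward(pairs))  # 1.0 - 2.0 = -1.0
```

Whether this scales to the contextual understanding the parent comment describes is exactly the point under dispute; the mechanism itself, though, is straightforward.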
But your solution is "well, just fact-check the LLM's output," which, in other words, means not using the LLM to begin with and instead just continuing to look stuff up the regular way, through actual sources.
Comments
"That makes it the actual artificial intelligence, rather than the LLM" has been the entirety of this discussion for multiple hours now, and I get the feeling this will continue to be the case for yet more hours to come. And it's being presented as this profound solution that every AI skeptic is just too stupid to accept.
*See Harry Frankfurt for the definition of bullshit.
In my experience, the links work and are valid.
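For anyone who wants to spot-check that themselves, here is a small sketch (standard library only, URLs are placeholders rather than real citations) that takes a list of links an LLM cited and reports which ones actually resolve:

```python
# Sketch: spot-check whether cited links resolve at all.
# This only tests that a URL answers; it says nothing about whether
# the page actually supports the claim it was cited for.
import urllib.request
import urllib.error

def link_status(url: str, timeout: float = 5.0) -> str:
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "link-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return f"OK ({resp.status})"
    except urllib.error.HTTPError as e:
        return f"HTTP error ({e.code})"
    except urllib.error.URLError as e:
        return f"unreachable ({e.reason})"

cited_urls = [  # placeholder examples, not real citations
    "https://example.com/",
    "https://example.com/this-page-does-not-exist",
]
for url in cited_urls:
    print(url, "->", link_status(url))
```

A real check would have to go further and compare the page's content against the claim it was cited for, which is precisely the hard part this whole thread is arguing about.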