Feels to me the seminal papers it finds are just totally weird or at best sort of relevant but way too far off. E.g. asking about large language models and getting LDA and PageRank suggested as seminal papers!
For example, here I even specify the context of the Binary Independence Model as information retrieval, when really you shouldn't need to, yet it still gives me "code switching" as a seminal paper, which is sociolinguistics??
The main issue is that even if I ignore this guided task and use the main search, it will still randomly decide to forgo the usual RAG answer (which tends to be okay) and instead try to find "seminal papers"... and there's no way to even get a clue why the results were surfaced.
I also find Web of Science RA confusing as heck. Besides the RAG search there seem to be a ton of other tools/workflows available: "seminal papers", topic maps, co-citation maps (not the same thing), topic over markness models, top authors, all with different visualizations. It's really a mess.
While it sounds nice to be able to ask for papers on topic X by authors from affiliation Y with citations > Z, it also means you can accidentally trigger a workflow you didn't expect. I saw that in a demo where one query gets you a normal RAG answer while another, slightly different one, gets the top-author workflow.
WOS RA competitor Scopus AI has similar ideas of having specific workflows, but they are more restrained, offering "concept maps", "topic experts", and "emerging themes". The topic experts feature, like WOS top authors, can be hit or miss, but generally I find it less confusing since they are always offered.