aarontay.bsky.social
I'm a librarian + blogger from Singapore Management University. Social media, bibliometrics, analytics, academic discovery tech.
590 posts 2,845 followers 306 following

In the academic education context, everyone talks about Microsoft Copilot, ChatGPT; there are courses, etc. But it seems people are confused about how they relate (or don't relate) to academic search engines that are "AI enhanced", e.g. Scopus AI, Primo Research Assistant, scite Assistant, Consensus, Elicit, etc.

youtu.be/byy19WPLPBQ?... another interesting one. Suggests even contextual embeddings from BERT still carry the "bag of words" assumption, as the attention mechanism doesn't take order into consideration, while positional embeddings are fixed and don't vary by input.
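The point about attention ignoring order can be seen directly: without positional embeddings, self-attention is permutation-equivariant, i.e. it treats the input as a bag of tokens. A minimal numpy sketch (toy single head with identity Q/K/V projections, not BERT itself):

```python
import numpy as np

def self_attention(X):
    """Toy single-head self-attention with identity Q/K/V projections."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ X

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))   # 4 "tokens", 8-dim embeddings, no positions
perm = [2, 0, 3, 1]           # shuffle the token order

out = self_attention(X)
out_perm = self_attention(X[perm])

# Each token's output is identical whatever the order: shuffling the
# input just shuffles the output rows the same way.
assert np.allclose(out[perm], out_perm)
```

Adding a positional embedding that depends on each token's index is exactly what breaks this symmetry.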

[Watched] Word Analogies Don't Hold in General youtu.be/u6EmngzBUEU?si… - very interesting. I've always read people saying the vector math for word2vec embeddings, King - Man + Woman = Queen, is a bit of a myth & this explains why. (1)
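One common reason the analogy is called a myth: implementations quietly exclude the query words themselves from the candidate set, otherwise the nearest neighbour of king - man + woman is often just "king". A toy illustration with hand-made vectors (NOT real word2vec embeddings):

```python
import numpy as np

# Hand-crafted toy vectors, chosen only to illustrate the exclusion trick.
vocab = {
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([1.0, 0.0, 0.3]),
    "king":  np.array([0.5, 1.0, 0.2]),
    "queen": np.array([0.5, 0.85, 0.5]),
}

def nearest(v, exclude=()):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vocab if w not in exclude),
               key=lambda w: cos(vocab[w], v))

target = vocab["king"] - vocab["man"] + vocab["woman"]
print(nearest(target))                                    # "king" itself
print(nearest(target, exclude={"king", "man", "woman"}))  # "queen"
```

Because man and woman point in similar directions, the offset barely moves the vector away from king; only excluding the input words makes "queen" pop out.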

Our fourth and final keynote speaker at the Summer Conference will be @aarontay.bsky.social, Head of Data Services at Singapore Management University Libraries. AI-Assisted Literature Review Tools for Undergraduates: Restrict, Curate, or Embrace? Our early bird rate ends today: bit.ly/3VFUx4f

Wild guess: Elicit Research Reports would be in the top-left quadrant (hand-crafted, deep). Maybe similar for other academic versions, Undermind, Ai2 ScholarQA, etc., but with varying amounts of depth...

The Differences between Deep Research, Deep Research, and Deep Research leehanchung.github.io/blogs/2025/0... - interesting attempt to tease out differences between deep research implementations

After playing more, it feels like the normal RAG-generated answers, while imperfect, are not the major source of crazy results. It seems when you use the main search it will sometimes decide to use one of the other "guided tasks", which don't work well. E.g. the "seminal paper" it finds is often horrible (1)

Really nice to read a technique paper you read 2 years ago and now understand pretty much all of it rather than just the gist

I was talking to a researcher and he told me he runs his questions through FIVE free LLMs: Gemini, ChatGPT, Microsoft Copilot, DeepSeek, Grok, etc. Kinda like the advice to read laterally? Though I wonder if their errors are correlated

I've been insisting on distinguishing "AI tools" like ResearchRabbit, Connected Papers, etc. from scite Assistant, SciSpace, Scopus AI that use retrieval augmented generation (RAG) to generate answers, not just because they fundamentally work differently but because RAG or "answer engines" pose far more serious q(1)
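For readers unfamiliar with the distinction, RAG-style "answer engines" follow a retrieve-then-generate loop: embed the query, fetch the closest passages, then have an LLM answer from that context. A toy sketch (the character-count "embedding" and the prompt assembly are illustrative stand-ins, not any product's actual pipeline):

```python
import numpy as np

def embed(text):
    # Toy "embedding": normalized letter-frequency vector. Real RAG
    # systems use a trained text-embedding model instead.
    v = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1
    return v / (np.linalg.norm(v) or 1.0)

def retrieve(query, corpus, k=2):
    # Rank passages by cosine similarity to the query embedding.
    q = embed(query)
    return sorted(corpus, key=lambda d: -(embed(d) @ q))[:k]

def answer(query, corpus):
    passages = retrieve(query, corpus)
    # A real answer engine would send this prompt to an LLM; here we
    # return it to show how the answer gets grounded in retrieved text.
    return "Context:\n" + "\n".join(passages) + f"\n\nQuestion: {query}"

corpus = [
    "Scite Assistant surfaces supporting and contrasting citations.",
    "ResearchRabbit maps citation networks visually.",
    "Retrieval augmented generation grounds answers in retrieved documents.",
]
print(answer("How does retrieval augmented generation work?", corpus))
```

Tools like ResearchRabbit stop at the retrieval/mapping step; the generation step is what raises the harder questions about grounding and hallucination.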

The students in this study seem pretty well educated in the use of GAI. Even in parts where the author implies the students are making errors/misconceptions, I'm not quite sure that's the case...