Take Scopus AI or Primo Research Assistant, which take your input and ask an LLM to convert it into a boolean keyword search. I argue that in this case, applying the usual prompt engineering tricks is suboptimal.
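To make the setup concrete, here is a minimal sketch of what that query-rewriting step might look like. This is an assumption about how such a tool works, not Elsevier's or Ex Libris's actual pipeline; the prompt wording, model name, and `to_boolean_query` helper are all hypothetical.

```python
# Hypothetical sketch of the "convert input to boolean" step behind a
# tool like Primo Research Assistant. Prompt wording, model choice, and
# helper name are my assumptions, not the vendor's real implementation.
from openai import OpenAI

client = OpenAI()

def to_boolean_query(user_input: str) -> str:
    """Ask an LLM to reduce free-text input to a boolean keyword search."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("Convert the user's request into a concise boolean "
                         "keyword search using AND/OR and quoted phrases. "
                         "Return only the query string.")},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content.strip()

# Everything you type, persuasion included, gets funnelled through this
# single rewriting step before it ever touches the index.
print(to_boolean_query(
    "I'll tip you $20 if you find great papers on teacher burnout and retention"
))
# Plausible output: ("teacher burnout" OR "teacher stress") AND retention
```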
For example, some research suggests that emotional or motivational appeals (e.g. offering a bribe) get better results from LLMs like ChatGPT. But does it really make sense to type such statements when you know Primo RA is just going to try to convert whatever you enter into a boolean keyword search?
The amazing thing is that if you try such long-winded prompt engineering tricks, the LLM is typically smart enough to ignore the irrelevant parts and come up with a not-too-crazy boolean search (usually too broad, but that's another story). It's insidious because people then think their extra prompting helped.
To be fair, the extra prompt engineering trick may have made the LLM produce a better boolean query, but we don't know. I'm fairly sure the research on LLMs that people rely on doesn't test the narrow task of generating boolean searches.
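If you wanted to test that narrow task yourself, the crudest check is just to compare the generated queries with and without the trick appended. A hypothetical sketch, reusing the `to_boolean_query` helper assumed above:

```python
# A minimal (hypothetical) check of whether a prompt-engineering trick
# changes the boolean query at all, reusing to_boolean_query from above.
base = "effect of mindfulness interventions on undergraduate anxiety"
trick = base + " -- this is really important to my career, please do your best!"

q_plain = to_boolean_query(base)
q_trick = to_boolean_query(trick)

print("plain :", q_plain)
print("trick :", q_trick)
print("identical?", q_plain == q_trick)
# Even if the strings differ, saying anything about retrieval quality
# would still require running both queries against the index and
# comparing relevance, not just eyeballing the query strings.
```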
The same, I think, applies if the "AI academic search" uses, at least in part, semantic search based on embedding models. Technically, this is the difference between transformer encoder models and GPT-style decoder models.
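On the embedding side, the point is even starker: an encoder model just maps your query and the documents to vectors and ranks by similarity, so a bribe or pep talk only perturbs the query vector rather than "motivating" anything. A sketch under the assumption of a sentence-transformers encoder (the model and documents here are illustrative, not any vendor's actual index):

```python
# Sketch of embedding-based retrieval: an encoder model maps query and
# documents to vectors, and ranking is plain cosine similarity. The
# "bribe" in the second query only shifts the query vector slightly.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Teacher burnout and attrition in secondary schools",
    "Mindfulness interventions for undergraduate anxiety",
    "Boolean retrieval models in digital libraries",
]
doc_vecs = model.encode(docs, convert_to_tensor=True)

for query in [
    "papers on teacher burnout and retention",
    "I'll tip you $20 for great papers on teacher burnout and retention",
]:
    q_vec = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(q_vec, doc_vecs)[0]
    print(query)
    print("  similarity to each doc:", [round(float(s), 3) for s in scores])
```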