Gee, I wonder if there is a reserve army of the academically unemployed somewhere, just full of people who would be happy to work at the Chronicle, answering questions that anyone had about job prospects, faculty governance, campus political issues, and the like.
feel free to speak with some Library / Information Science folks.
I work with a plethora of sources like this that would benefit from such an approach. Bibliometrics and LLMs make sense when crunching large amounts of text.
Maybe so, but (1) I think the burden is on the implementer to demonstrate that the power usage/environmental impact is responsible, and (2) you can understand an instinctual mistrust of a tool that happens to be existentially threatening our industry, no?
(3) hallucination is generally not due to external content in the training data; it is intrinsic to LLMs as a whole. Unless Chron is bundled with some unprecedented fact-checking tool, it will inevitably give some inaccurate info.
UGH! Computer summaries and hallucinations instead of having a nuanced discussion with an experienced human who has lived life longer than you, where both humans can ask questions and exchange information and perhaps establish an ongoing connection across generations.
I'm wondering what possible value this would have vs. regular old search, unless they think introducing hallucinations into the interpretation of their archives is a value. Even without hallucinations, why would you not want readers to actually read?
This has been my response to most "AI search" functions from both the provider and user side. Regular search is better at both driving traffic and giving me (the user) access to what I'm looking for.
Indeed, and when testing these things in areas where I'm knowledgeable, the ways it often gets things wrong, not even in a "hallucination" way, are important. I often see summaries that are like 5% off in ways I'd never notice if I hadn't already read the original source.
The tests I've read about (a lot of the summary tools I've seen are paywalled) point out that the AI compresses more than it summarizes, which invariably leads to slippage in accuracy. It's a worrying trend that is bad enough for art and potentially dangerous in other areas.
If somehow it were just a dumb gimmicky AI chatbot that made things up about their archive, that would be annoying and wasteful, but I guess also fine. It's really concerning as a red flag indicating a management group that has no idea what they are doing.
Their job listings are increasingly difficult to search, they’re all too happy to give column inches to fascists like Bauerlein and Rufo, and now this.
This is appalling. What message does it send for the Chronicle to deploy a tool that puts a thin technological veneer on plagiarism and then aim it at their own work?
Comments
do I have that right?
are people not familiar with how OCR, indexing etc has developed over the recent decades?
what is your preferred interface for interacting with 130k articles? plain #boolean ?
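For what it's worth, plain boolean retrieval over an inverted index is easy enough to sketch; the corpus, ids, and query terms below are invented purely for illustration:

```python
# Minimal sketch of boolean AND retrieval over a small article corpus.
# The articles here are made-up stand-ins, not real Chronicle content.
from collections import defaultdict

def build_index(docs):
    """Map each lowercase token to the set of doc ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def boolean_and(index, *terms):
    """Return the ids of docs containing ALL of the given terms."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

docs = {
    1: "faculty governance and shared governance debates",
    2: "adjunct hiring trends in higher education",
    3: "faculty hiring freezes across higher education",
}
index = build_index(docs)
print(boolean_and(index, "faculty", "hiring"))  # -> {3}
```

At 130k articles this still fits comfortably in memory; the hard part is ranking and phrasing, not the boolean machinery itself.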
https://www.businesswire.com/news/home/20241203262292/en/The-Chronicle-of-Higher-Education-Introduces-Chron-an-AI-Powered-Higher-Ed-Research-Assistant/
you are purposefully misrepresenting the issue, it's both unbecoming and fucking dumb.
Chron isn't what you want it to be, so don't pretend it is.
Fucking sucks to be you I guess.
AI in education? smells like bullshit
information processing, retrieval, and analysis: tools evolve.
of course not
so why should the Chron care?