No hype, just honest discussion. What role, if any, should AI/LLMs have in (1) doing science research and (2) helping us write about our research? 🧪
Comments
Given the hilarious ecological cost, probably none. But 1) until hallucinations are dealt with, you're going to spend just as much time QA'ing the LLM as you would doing the research yourself, and 2) it'll save you time, but I wouldn't put any original research into ChatGPT; I don't trust them.
They should be let loose. All of science is peer-reviewed at some stage, so no weird flights of fancy are going to get put out there for general consumption. The recent discovery of new proteins (sorry, can't remember the field), and that guy who faked room-temperature superconductivity, support my thesis.
The philosophy of science is indeed but a social construct and a largely harmful one at that. Science mercifully has the strength to ignore it and do productive work. Which benefits me. And you.
AI can add value to science, since science is about uncovering previously unknown relationships. Philosophy of science is different: it is indeed a social construct based largely on opinions, and as such AI hasn't got much to add.
1. It should act as a catalyst for research, like a calculator: getting things done faster.
2. It should aid communication, especially in breaking down language barriers.
Research relies on creativity, which AI can't replace, but it can handle tedious tasks, letting us focus on creative problem-solving work.
For (2), do you feel this way because you view natural language as the ultimate source of "Truth" on which the validity of research rests?
What about viewing data and math as the ultimate sources of truth? What if an AI-generated sentence accurately expresses the truth of underlying datasets?
My view on (2), which by the way is pretty much my most firmly held opinion regarding science, rests on the fact that science is a human endeavour. We need to understand, ourselves, what we are doing. The moment we outsource our writing and thinking to generative AI - and I have already seen this happen in some students - we become lazy and start to accept output that 'reads well' but that we don't QUITE understand. And so science dies, and humans become something a little less than they were. It is an abomination.
My views on generative AI 'art' are, if anything, even more forceful.
This puts a finger on what I see as the divide between useful and potentially harmful applications of AI. Some AI uses in science are perfectly OK. Let me give you a concrete example. My research group works a lot with tomographic data of fossils, and manually segmenting these data /1
I doubt any AI can 'replace scientists' any time soon. Even something as advanced as OpenAI's o1 model.
AI doesn't have physical form and can't visualize the 3D world in its head. It does not excel at extremely long-term planning or prospection of what's scientifically possible.
Science writer, former science writing instructor, and anthropologist here. Why on earth would you use an LLM to write an introduction section for your new paper? Can't you do it yourself? I don't understand the point of your question.
Based on the flubs seen so far in papers submitted to peer review, AI-generated graphics should absolutely not be used.
AI could be an incredible research tool if it could properly vet and furnish references.
Computer vision and expert systems can also make research more efficient, but you have to keep an eye on them.
1. Research methods should probably be conceived by humans, but AI can assist with analysis of large datasets that would be unfeasible with traditional methods.
2. LLMs could serve as an editor when writing papers; however, all ideas presented must be the author's original thoughts.
So only humans can truly interrogate research, develop opinions and put the humanity into writing - giving it authority and integrity.
In science, this is paramount.
2) Absolutely none whatsoever. Under any circumstances.
Though I don't 'quite' understand how this phone runs machine code to render the screen. We built abstractions on top of that.
Will AI 'steal' science, or abstract some current methodologies away?
Agree we should throw all the AI-generated slop and 'art' out the window, haha.
The fact it can do this implies we haven't fully internalized the level of utility these things have for our work.
To ignore this ability is to keep our heads in the sand.
2. None.
I think it's imperative to recognize the environmental damage done by normalizing the use of today's generative models as a commonplace utility.
As with ANYTHING, due diligence is required by writers and readers alike. Peer review exists for a reason.