Ok, LLMs-we-call-AI do smart-looking things all the time, and many kinds of AI, including LLMs, can be used to solve problems. Their quantitative reasoning needs a lot of work, though. I agree that as long as LLMs struggle with interpreting images, ratios, and tables, this would be a tough sell.
I like the multidimensional keyword sub-setting features with large text databases, even if the output is often garbled, and they are extremely handy for translating text from one language to another.
Very weird thing to like. And regarding translations, I’m not sure LLMs are inherently better than specialized solutions. Also, languages/cultures with implicit context exist, and machine translation (no matter what kind) is notoriously rubbish there.
In the modern scientific funding model, governments (as ~a proxy for the public) decide priorities, and scientists evaluate quality relative to those priorities. We grade ourselves as a community of experts.
In philanthropy, the funder decides priorities, and often also decides quality. This is part of why philanthropy famously leads to science that goes in weird or bad directions. It’s also an entry point for bias: a smaller number of people make the decision.
Scientists who make cancer drugs can’t make the AI/ML tools to screen grant proposals to develop new cancer drugs. So ultimately, the person who controls how proposals are assessed is someone completely outside the ecosystem: often an invisible person who wrote closed-source code.
And, if you’ve seen Grok answering questions about just about anything by talking about white genocide, you see where the problem comes in. Different coders will design algorithms with different (but never really *no*) biases and blind spots. But those coders can never be accountable to scientists.
(1) Letting AI into peer review of any kind puts priorities and rubrics *not* into the hands of a computer, but into the hands of two invisible, unaccountable people:
(1) the person submitting the prompts (only partial control over priorities); and
(2) the person who designed the algorithm (full control over assessment, partial control over priorities in the process); a sketch of how those two levels of control combine follows below.
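To make that split concrete, here is a minimal, hypothetical sketch of what an LLM-based proposal screener could look like. The names (`score_proposal`, `call_llm`) and the rubric text are invented for illustration, not taken from any real tool; the point is only that the reviewer supplies one fragment of the prompt, while the designer’s hard-coded rubric and choice of model ride along with every call.

```python
# Hypothetical sketch of an LLM-based grant screener, illustrating who
# controls what. All names and the rubric text are invented for this example.

# (2) The designer's rubric: hard-coded, shipped in (often closed-source)
#     code, and invisible to the scientists whose proposals it grades.
DESIGNER_RUBRIC = (
    "Score the proposal from 1 to 5. Reward translational impact and "
    "industry partnerships; treat incremental basic science as low value."
)

def call_llm(prompt: str) -> str:
    """Stand-in for the vendor's model call; not inspectable from outside."""
    raise NotImplementedError("vendor-controlled model goes here")

def score_proposal(proposal_text: str, reviewer_notes: str) -> str:
    # (1) The person submitting the prompts controls only reviewer_notes;
    #     the designer's rubric is prepended to every request regardless.
    prompt = (
        f"{DESIGNER_RUBRIC}\n\n"
        f"Reviewer notes: {reviewer_notes}\n\n"
        f"Proposal:\n{proposal_text}"
    )
    return call_llm(prompt)
```

Swap in a different rubric or a different model and every proposal is graded differently, yet nothing visible to the reviewer changes: that is the unaccountable control described above.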
An AI wouldn’t be able to tell how nonsensical a proposal for gene therapy targeting the nuclei of diseased mature red blood cells would be.
Unless you know biology, that sounds pretty cool. (Mature red blood cells have no nuclei, so there is nothing there to target.)