no human alive could possibly ever know such a thing, that requires the very smart programming of the almighty Artificial Intelligence coded by the Biggest Brains
it costs zero money to simply reread and proofread your paper before submitting, but what do I know, I lived in the before times with no "AI", that was *checks notes* pre-2022
I'm finding 29 results since 2020 with "certainly here is a" and without "chatgpt," "chat gpt," or "generative ai." Many appear to be unpublished and at least eight are legit uses of the phrase. The possibly non-legit ones have a total of one (1) citation, so we're not at the tipping point yet.
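The filter described here (flag the telltale phrase, skip anything that mentions the chatbot by name and so is probably quoting it legitimately) can be sketched as a simple predicate. This is just an illustration of the logic; the function name and phrase lists mirror the comment, not any actual search tool:

```python
def smells_like_pasted_llm_output(text: str) -> bool:
    """Mirror the search filter above: the telltale phrase is present,
    and none of the terms that would mark a legit mention appear."""
    t = text.lower()
    telltale = "certainly here is a"
    exclusions = ("chatgpt", "chat gpt", "generative ai")
    return telltale in t and not any(term in t for term in exclusions)

# Flagged: phrase present, no tool named anywhere in the text
smells_like_pasted_llm_output(
    "Certainly here is a possible introduction for your topic"
)  # True
```

A paper that quotes the phrase while explicitly discussing ChatGPT would fall through the exclusion list and not be flagged, which is why the comment still had to vet the 29 hits by hand.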
Every time I ask it to find citations it fails in spectacular ways. It often points me in the right direction, but directly extracting text from an LLM output is immensely stupid.
Lol, but here's a thing: I was going through an English-second-language speaker's paper. The literature review was horribly tortuous. My offspring put it through the AI they use for work. Several iterations later, it spat out a version that did help me make productive suggestions. The questions/instructions you give it are so important.
It’s clearly a tool that has some good use cases, as long as you’re prepared to treat its results as something to be evaluated carefully before incorporating it into your work.
Just pasting its answer suggests that this was the extent of the literature review in this (hopefully undergrad) paper.
I'm more than a little scared that this really only catches the papers whose authors didn't even bother to read their own work before submission.
I guess this is beneficial because it reinforces applied skepticism when reading science papers?
It’s a little understandable that hungover undergrads aren’t reading their outputs before they submit them, but submitting to a journal while too lazy to do a cleanup edit is something else.
Place the divergence point when you like (2020, 2016, 2001, 1980, the Reformation) but it’s hard to fight the feeling that we’re living in a dud timeline.
A coworker of mine, at a large corporation I won't name, tried submitting an abstract for a scientific paper that they had run through ChatGPT. I already knew they didn't write the way it read, and Copyleaks detected it, but I know they'll try it again once I'm not looking. Hell world.
Part of that integration is knowing how LLMs work so that we can SPOT this sort of lazy work and teach students to not just cut and paste the output window.
Reminds me of how a grad school friend blew up while watching a TV interview of a techbro former classmate who said he was too cool to finish undergrad, & this guy had not only purchased his senior thesis from an online mill but *left the payment receipt tucked inside* when he turned it in. 🙃 🙃
Not related, but the font used on the title page of the paper where that first instance appears is absolutely awesome -- I've never seen a Cyrillic font transliterated into English characters before.
Students: How do you know I didn't write this myself?!
Teachers, having seen their crappy writing all semester long: *long stare*
That profs are doing it? *sigh*
eleventeen fingers :: picture AI
Oh plagiarism, how do I spell thy name?
people don't even try to look legit, they're too lazy to even cheat properly
The garbage started long before September 1st, 1993. Today is September 11152, 1993, in case you didn't know.
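For anyone checking the arithmetic: the "Eternal September" date is just the running day count since September 1, 1993, with that first day counted as day 1. A minimal sketch (the function name is mine, and the example date is inferred from the number in the comment):

```python
from datetime import date

def eternal_september(today: date) -> str:
    """Express a date in 'Eternal September' form: every day since
    September 1, 1993 is just another day of that same September."""
    day = (today - date(1993, 9, 1)).days + 1
    return f"September {day}, 1993"

eternal_september(date(2024, 3, 13))  # "September 11152, 1993"
```

Day 11152 lands on March 13, 2024, which is presumably when the comment was posted.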
OMG
https://en.wikipedia.org/wiki/Ge_with_stroke
https://www.scimagojr.com/journalsearch.php?q=21100788797&tip=sid&clean=0