this post was submitted on 23 May 2024
1051 points (99.1% liked)

Name & shame. :) (mander.xyz)
submitted 5 months ago* (last edited 5 months ago) by [email protected] to c/[email protected]
 
[–] [email protected] 191 points 5 months ago (2 children)

Dude. They couldn't even proofread the easy way out they took.

[–] [email protected] 27 points 5 months ago (2 children)

This is what baffles me about these papers. Assuming the authors are actually real people, these AI-generated mistakes in publications should be pretty easy to catch and edit.

It does make you wonder how many people are successfully putting AI-generated garbage out there if they're careful enough to remove obviously AI-generated sentences.

[–] [email protected] 1 points 5 months ago

I've heard the word "delve" has suddenly become a lot more popular in some fields

[–] [email protected] 7 points 5 months ago* (last edited 5 months ago) (1 children)

I definitely utilize AI to assist me in writing papers/essays, but never to just write the whole thing.

Mainly use it for structuring or rewording sections to flow better or sound more professional, and always go back to proofread and ensure that any information stays correct.

Basically, I provide any data/research and get a rough layout down, and then use AI to speed up the refining process.

EDIT: I should note that I am not writing scientific papers using this method, and doing so is probably a bad idea.

[–] [email protected] 5 points 5 months ago* (last edited 5 months ago) (1 children)

There are perfectly ethical ways to use it, even for papers, and your example fits. It's been a great help for my ADHD ass to get some structure into my writing.

https://www.oneusefulthing.org/p/my-class-required-ai-heres-what-ive

[–] [email protected] 5 points 5 months ago

Yeah, same. I’m good at getting my info together and putting my main points down, but structuring everything in a way that flows well just isn’t my strong suit, and I struggle to sit there for long periods of time writing something I could just explain in a few short points, especially if there’s an expectation for a certain length.

AI tools help me to get all that done whilst still keeping any core information my own.

[–] [email protected] 106 points 5 months ago (1 children)

This almost makes me think they're trying to fully automate their publishing process. So, no editor in that case.

Editors are expensive.

[–] [email protected] 19 points 5 months ago (1 children)

If they really wanted to do it, they could just run a local language model trained to proofread stuff like this. It would be way better.
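Something like this, as a rough sketch (the checkpoint name and its "grammar:" prompt prefix are just assumptions to show the idea, not a recommendation):

```python
# Sketch only: run a small local text2text model as a proofreading pass.
# The checkpoint and its "grammar:" prefix are assumptions; any locally
# hosted grammar-correction model would work the same way.
from transformers import pipeline

corrector = pipeline(
    "text2text-generation",
    model="vennify/t5-base-grammar-correction",  # assumed checkpoint
)

draft = "The results shows that the sample were contaminated."
fixed = corrector("grammar: " + draft, max_new_tokens=64)[0]["generated_text"]
print(fixed)  # a cleaned-up version of the sentence
```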

[–] [email protected] 12 points 5 months ago (1 children)

This is exactly the line of thinking that led to papers like this being generated.

[–] [email protected] 1 points 5 months ago (2 children)

I don't think so. They're using AI from a 3rd party. If they trained their own specialized version, things would be better.

[–] [email protected] 1 points 5 months ago

That's not necessarily true. General-purpose 3rd party models (chatgpt, llama3-70b, etc.) perform surprisingly well at very specific tasks. While training or fine-tuning your own specialized model should indeed give you better results, the crazy amount of computational resources and specialized manpower needed to accomplish it makes it infeasible and impractical in many applications. If you can get away with an occasional "as an AI model...", you're better off using existing models.
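Rough sketch of what I mean by "just use an existing model" (the model name and prompt wording are only assumptions for illustration):

```python
# Sketch of the "use an existing general-purpose model" approach.
# Model name and prompt wording are assumptions, not from the thread.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed general-purpose model
    messages=[
        {"role": "system", "content": "Proofread the text. Fix grammar only; do not change the meaning."},
        {"role": "user", "content": "The results shows that the sample were contaminated."},
    ],
)
print(resp.choices[0].message.content)
```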

[–] [email protected] 11 points 5 months ago (1 children)

Here is a better idea: have some academic integrity and actually do the work instead of using incompetent machine learning to flood the industry with inaccurate trash papers whose only real impact is getting in the way of real research.

[–] [email protected] 3 points 5 months ago (2 children)

There is nothing wrong with using AI to proofread a paper. It's just a grammar checker but better.

[–] [email protected] 4 points 5 months ago (1 children)

You can literally use tools to check grammar perfectly without using AI. What an LLM does is predict what word comes next in a sequence, and if the AI is wrong, as it often is, then you've just attempted to publish a paper with hallucinations, wasting the time and effort of so many people because you're greedy and lazy.
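That "predicts what word comes next" behaviour is easy to see for yourself; here's a toy sketch with GPT-2 (the model choice and prompt are just for illustration):

```python
# Toy illustration of next-token prediction with GPT-2 (small, runs locally).
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The experiment was repeated three", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]          # scores for the next token
top = torch.topk(logits, k=3).indices
print([tok.decode([int(t)]) for t in top])     # the model's most likely continuations
```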

[–] [email protected] 2 points 5 months ago (1 children)

AI does better at checking grammar and clarity of message. It's just a fact. I've made the comparison myself, running an essay through a grammar checker vs. AI, and the AI corrected it and made it much better.

[–] [email protected] -3 points 5 months ago (1 children)

AI doesn't do anything better than a human being. Human beings are the training data, so an AI that mimics them at 98% is still less accurate than the humans. If you suck so much at writing papers, then you're just below average as a human being who writes papers, and using tools will never remedy that without introspection and a desire to improve.

[–] [email protected] 2 points 5 months ago

You said that "you can literally use tools to check grammar perfectly." I responded to that claim. There was no mention of humans. You seem to be projecting.

[–] [email protected] 4 points 5 months ago* (last edited 5 months ago) (1 children)

Proofreading involves more than just checking grammar, and AIs aren't perfect. I would never put my name on something to get published publicly like this without reading it through at least once myself.

[–] [email protected] 4 points 5 months ago

I entirely agree. You should read through something you'll publish.