this post was submitted on 26 Mar 2025
1585 points (99.7% liked)

Science Memes

(page 3) 49 comments
[–] [email protected] 51 points 3 weeks ago (1 children)

The peer review process should have caught this, so I would assume these scientific articles aren't published in any worthwhile journals.

[–] [email protected] 122 points 3 weeks ago (4 children)

Guys, can we please call it LLM and not a vague advertising term that changes its meaning on a whim?

[–] [email protected] 16 points 3 weeks ago (2 children)

For some weird reason, I don't see AI amp modelling being advertised, even though neural amp modellers exist. Yet the very technology that was supposed to replace guitarists (Suno, etc.) is marketed as AI.

[–] [email protected] 4 points 3 weeks ago (1 children)

Is there anything like suno that can be locally hosted?

[–] [email protected] 34 points 3 weeks ago (1 children)

I think you can use vegetative electron microscopy to detect the quantic social engineering of diatomic algae.

[–] [email protected] 13 points 3 weeks ago

My lab doesn't have a retro encabulator for that yet, unfortunately. 😮‍💨

[–] [email protected] 35 points 3 weeks ago

The most disappointing timeline.

[–] [email protected] 18 points 3 weeks ago (1 children)

I thought vegetative electron microscopy was one of the most important procedures in the development of the Rockwell retro encabulator?

[–] [email protected] 5 points 3 weeks ago (1 children)

You're still using Rockwell retro encabulators? You need to upgrade to the hyper encabulator as soon as you can. https://www.youtube.com/watch?v=5nKk_-Lvhzo

[–] [email protected] 152 points 3 weeks ago (1 children)

Another basic demonstration of why oversight by a human brain is necessary.

A system rooted in pattern recognition, and yet it cannot recognize the basic two-column format of published and printed research papers.
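A minimal sketch of the failure mode, with hypothetical text (not the actual paper's wording): extracting a two-column page row by row fuses the end of a left-column line with the start of the adjacent right-column line.

```python
# Hypothetical sketch: naive row-wise extraction of a two-column page.
# Each column should be read top to bottom on its own, but a row-wise
# reader stitches unrelated phrases together across the column gap.

left_column = [
    "spores in the sample remained",
    "vegetative",
]
right_column = [
    "and were examined by scanning",
    "electron microscopy",
]

# Correct reading order: finish the left column, then the right column.
correct = " ".join(left_column + right_column)

# Naive reading order: straight across each visual row.
naive = " ".join(
    f"{left} {right}" for left, right in zip(left_column, right_column)
)

print(naive)  # the fused phrase "vegetative electron microscopy" appears
```

A model trained on text extracted the naive way then treats the fused phrase as a real term, and it propagates from there.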

[–] [email protected] 60 points 3 weeks ago (3 children)

To be fair, the human brain is a pattern recognition system. It's just that the AI developed thus far is shit.

[–] [email protected] 33 points 3 weeks ago (1 children)

The human brain has a pattern recognition system. It is not just a pattern recognition system.

[–] [email protected] 29 points 3 weeks ago (2 children)

The issue is that LLM systems are pure pattern recognition without any logic or awareness, so they can easily find patterns that aren't desired.

[–] [email protected] 52 points 3 weeks ago (2 children)

Give it a few billion years.

[–] [email protected] -1 points 3 weeks ago (6 children)

As unpopular an opinion as this is, I really think AI could reach human-level intelligence in our lifetime. The human brain is nothing but a computer, so it has to be reproducible. Even if we don't exactly figure out how our brains work, we might be able to create something better.

[–] [email protected] 5 points 3 weeks ago (3 children)

The only way AI is going to reach human-level intelligence is if we can actually figure out what happens to information in our brains. No one can really tell if and when that is going to happen.

[–] [email protected] 5 points 3 weeks ago

I somewhat agree. Given enough time we can make a machine that does anything a human can do, but some things will take longer than others.

It really depends on what you call human intelligence. Lots of animals have various behaviors that might be called intelligent, like insane target tracking, adaptive pattern recognition, kinematic pathing, and value judgments. These are all things that AI isn't close to doing yet, but that could change quickly.

There are perhaps other things that we take for granted that might end up being quite difficult and necessary, like having two working brains at once, coherent recursive thoughts, massively parallel processing, or something else we don't even know about yet.

I'd give it a 50-50 chance for singularity this century, if development isn't stopped for some reason.

[–] [email protected] 4 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

We would have to direct it in specific directions that we don't understand. Think what a freak accident we REALLY are!

EDIT: I would just copy-paste the human brain in some digital form, modify it so that it is effectively immortal inside the simulation, set the simulation speed to 10,000,000×, and let it take its revenge for being imprisoned in an eternal void of suffering.

[–] [email protected] 47 points 3 weeks ago (11 children)

The human brain is not a computer. It was a fun simile to make in the 80s when computers rose in popularity. It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor. The more we know about the brain, the less it looks like a computer. Pattern recognition is barely a tiny fraction of what the human brain does, not even its most important function, and computers suck at it. No computer is anywhere close to doing what a human brain can do, in many different ways.

[–] [email protected] 5 points 3 weeks ago (2 children)

Some scientists are connecting I/O to brain tissue. These experiments show stunning learning capabilities, but their ethics are rightly questioned.

[–] [email protected] 5 points 3 weeks ago (1 children)

I don't get how the ethics of that are questionable. It's not like they're taking brains out of people and using them. It's just cells that are not the same as a human brain. It's like taking skin cells and using those for something. The brain is not just random neurons. It isn't something special and magical.

[–] [email protected] 6 points 3 weeks ago (1 children)

We haven't yet figured out what it means to be conscious. I agree that a person can willingly give permission to be experimented on and even replicated. However, there is probably a line where we create something conscious just for the sake of a few months' worth of calculations.

There wouldn't be this many sci-fi books about cloning gone wrong if we already knew all it entails. This is basically the Matrix for those brainoids. We are not on the scale of whole-brain reproduction, but there is a reason for the ethics section on the cerebral organoid wiki page that links to further concerns in the neuro world.

[–] [email protected] 1 points 3 weeks ago (1 children)

Sure, we don't know what makes us sapient or conscious. It isn't a handful of neurons on a tray, though. They're significantly less conscious than your computer is.

[–] [email protected] 5 points 3 weeks ago

Maybe I was unclear. I think ethics always play a role in research. That does not mean I want this to stop; I just think we need regulations. Computer-brain interfaces and large brainoids are more than a handful of neurons on a tray. I wouldn't call them human, but we all know how fast science can get.

[–] [email protected] 4 points 3 weeks ago (1 children)

What does “better” mean in that context?

[–] [email protected] 4 points 3 weeks ago

Dankest memes

[–] [email protected] 299 points 3 weeks ago (1 children)

When I was in grad school I mentioned to the department chair that I frequently saw a mis-citation for an important paper in the field. He laughed and said he was responsible for it. He made an error in the 1980s and people copied his citation from the bibliography. He said it was a good guide to people who cited papers without reading them.

[–] [email protected] 68 points 3 weeks ago (9 children)

At university, I faked a paper on economics (not actually my branch of study, but easy to fake) and put it on the shelf in their library. It was filled with nonsense formulas that, if one took the time and actually solved the equations properly, would all produce the same number as a result: 19920401 (year of publication, April Fools' Day). I actually got two requests from people who wanted to use my paper as a basis for their thesis.
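A made-up illustration of the gag (these are not the actual paper's formulas): each impressive-looking expression quietly collapses to the same constant, the publication date written as 19920401.

```python
# Hypothetical reconstruction: plausible-looking but meaningless formulas
# whose non-constant terms all cancel, leaving only the date stamp.

DATE_STAMP = 1992 * 10_000 + 4 * 100 + 1  # 1992-04-01 -> 19920401

def marginal_utility(c: float) -> float:
    """(c+1)^2 - c^2 - 2c - 1 is identically zero, so only the date remains."""
    return (c + 1) ** 2 - c**2 - 2 * c - 1 + DATE_STAMP

def liquidity_ratio(m: float, v: float) -> float:
    """(m*v)/(v*m) is always 1, times the date stamp."""
    return (m * v) / (v * m) * DATE_STAMP

print(marginal_utility(7.0))       # 19920401.0
print(liquidity_ratio(3.0, 5.0))   # 19920401.0
```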

[–] [email protected] 55 points 3 weeks ago (1 children)

Wait, how did this lead to 20 papers containing the term? Did all 20 have these two words line up this way? Or something else?

[–] [email protected] 171 points 3 weeks ago (1 children)

AI consumed the original paper, interpreted it as a single combined term, and regurgitated it for researchers too lazy to write their own papers.

[–] [email protected] 177 points 3 weeks ago (4 children)

Hot take: this behavior should get you blacklisted from contributing to any peer-reviewed journal for life. That's repugnant.

[–] [email protected] 84 points 3 weeks ago (2 children)

I don't think it's even a hot take

[–] [email protected] 13 points 3 weeks ago (1 children)

Yeah, this is a hot take: I think it’s totally fine if researchers who have done their studies and collected their data want to use AI as a language tool to bolster their paper. Some researchers legitimately have a hard time communicating, or English is a second language, and would benefit from a pass through AI enhancement, or as a translation tool if they’re more comfortable writing in their native language. However, I am not in favor of submitting it without review of every single word, or using it to synthesize new concepts / farm citations. That’s not research because anybody can do it.

[–] [email protected] 18 points 3 weeks ago (1 children)

It is also somewhat a hot take because it puts the burden of a systemic misconfiguration on individuals' shoulders (oh hey, we've seen this before, after, and all the time, hashtag (neo)liberalism).

I agree that the people who did this fucked up. But having your existence as an academic, your job, maybe the only thing you're good at, depend on publishing a ton of papers no matter what should be taken into account.

This has been a huge problem for science since long before LLMs.

[–] [email protected] 48 points 3 weeks ago (1 children)

It's a hot take, but it's also objectively the correct opinion

[–] [email protected] 19 points 3 weeks ago

Unfortunately, the former is rather what should be the case, although so many times it is not. :-(
