The peer review process should have caught this, so I would assume these scientific articles aren't published in any worthwhile journals.
Guys, can we please call it an LLM and not a vague advertising term that changes its meaning on a whim?
For some weird reason, I don't see AI amp modelling being advertised despite neural amp modellers existing. However, the very technology that was supposed to replace guitarists (Suno, etc.) is marketed as AI.
I think you can use vegetative electron microscopy to detect the quantic social engineering of diatomic algae.
My lab doesn't have a retro encabulator for that yet, unfortunately. 😮💨
The most disappointing timeline.
I thought vegetative electron microscopy was one of the most important procedures in the development of the Rockwell retro encabulator?
You're still using Rockwell retro encabulators? You need to upgrade to the hyper encabulator as soon as you can. https://www.youtube.com/watch?v=5nKk_-Lvhzo
Another basic demonstration on why oversight by a human brain is necessary.
A system rooted in pattern recognition that cannot recognize the basic two-column format of published and printed research papers.
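For anyone wondering how a two-column layout produces a phrase like that: here is a minimal, purely hypothetical sketch (the column text is invented, not taken from any actual paper) of how extracting text line by line across the page, instead of column by column, fuses unrelated words from adjacent columns into one "term".

```python
# Hypothetical illustration: naive line-by-line extraction of a
# two-column page joins words from the two columns row by row.
left_column = [
    "cells that remained in a",
    "vegetative",              # left column continues on the next page
    "state after treatment",
]
right_column = [
    "samples were imaged by",
    "electron microscopy",     # right column is a different sentence
    "at an accelerating voltage",
]

# Correct reading order: finish the left column, then the right column.
correct = " ".join(left_column + right_column)

# Naive extraction reads each visual line straight across the page.
naive = " ".join(f"{left} {right}" for left, right in zip(left_column, right_column))

print(correct)
print(naive)  # "... vegetative electron microscopy ..." appears out of nowhere
```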
To be fair, the human brain is a pattern recognition system. It's just that the AI developed thus far is shit.
The human brain has a pattern recognition system. It is not just a pattern recognition system.
The issue is that LLM systems are pattern recognition without any logic or awareness. It's pure pattern recognition, so it can easily find patterns that aren't desired.
Give it a few billion years.
As unpopular an opinion as this is, I really think AI could reach human-level intelligence in our lifetime. The human brain is nothing but a computer, so it has to be reproducible. Even if we don't figure out exactly how our brains work, we might be able to create something better.
The only way AI is going to reach human-level intelligence is if we can actually figure out what happens to information in our brains. No one can really tell if and when that is going to happen.
I somewhat agree. Given enough time we can make a machine that does anything a human can do, but some things will take longer than others.
It really depends on what you call human intelligence. Lots of animals have various behaviors that might be called intelligent, like insane target tracking, adaptive pattern recognition, kinematic pathing, and value judgments. These are all things that AI aren't close to doing yet, but that could change quickly.
There are perhaps other things that we take for granted that might end up being quite difficult and necessary, like having two working brains at once, coherent recursive thoughts, massively parallel processing, or something else we don't even know about yet.
I'd give it a 50-50 chance of a singularity this century, if development isn't stopped for some reason.
We would have to direct it in specific directions that we don't understand. Think what a freak accident we REALLY are!
EDIT: I would just copy-paste the human brain in some digital form, modify it so that it is effectively immortal inside the simulation, set the simulation speed to 10,000,000×, and let it take its revenge for being imprisoned in an eternal void of suffering.
The human brain is not a computer. It was a fun simile to make in the 80s when computers rose in popularity. It stuck in popular culture, but time and time again neuroscientists and psychologists have found that it is a poor metaphor. The more we know about the brain, the less it looks like a computer. Pattern recognition is barely a tiny fraction of what the human brain does, not even the most important function, and computers suck at it. No computer is anywhere close to doing what a human brain can do in many different ways.
Some scientists are connecting I/O to brain tissue. These experiments show stunning learning capabilities, but their ethics are rightly questioned.
I don't get how the ethics of that are questionable. It's not like they're taking brains out of people and using them. It's just cells that are not the same as a human brain. It's like taking skin cells and using those for something. The brain is not just random neurons. It isn't something special and magical.
We haven't yet figured out what it means to be conscious. I agree that a person can willingly give permission to be experimented on and even replicated. However, there is probably a line where we create something conscious just for the sake of a few months' worth of calculations.
There wouldn't be this many sci-fi books about cloning gone wrong if we already knew all it entails. This is basically the Matrix for those brainoids. We are not on the scale of whole-brain reproduction, but there is a reason for the ethics section on the cerebral organoid wiki page that links to further concerns in the neuro world.
Sure, we don't know what makes us sapient or conscious. It isn't a handful of neurons on a tray though. They're significantly less conscious than your computer is.
Maybe I was unclear. I think ethics play a role in research always. That does not mean I want this to stop. I just think we need regulations. Computer-Brain-Interfaces and large brainoids are more than a handful of neurons on a tray. I wouldn't call them human but we all know how fast science can get.
What does “better” mean in that context?
When I was in grad school I mentioned to the department chair that I frequently saw a mis-citation for an important paper in the field. He laughed and said he was responsible for it. He made an error in the 1980s and people copied his citation from the bibliography. He said it was a good guide to people who cited papers without reading them.
At university, I faked a paper on economics (not actually my branch of study, but easy to fake) and put it on the shelf in their library. It was filled with nonsense formulas that, if one took the time and actually solved the equations properly, would all produce the same number as a result: 19920401 (the year of publication, April Fools' Day). I actually got two requests from people who wanted to use my paper as a basis for their thesis.
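To make the arithmetic gag concrete, here is a small hypothetical sketch. The expressions below are invented for illustration (the original paper's formulas aren't reproduced anywhere in this thread); it just verifies that several different-looking expressions all land on 19920401.

```python
# Hypothetical sketch: different-looking expressions that all evaluate
# to 19920401 (1992-04-01). These are made up for illustration only.
TARGET = 19920401

expressions = [
    "19_000_000 + 920_401",
    "19920400 + 7 - 6",
    "2 * 9_960_200 + 1",
    "(19920401 * 3) // 3",
]

for expr in expressions:
    value = eval(expr)  # fine here: the expressions are hard-coded above
    assert value == TARGET, f"{expr} does not hit the target"
    print(f"{expr} = {value}")
```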
Wait how did this lead to 20 papers containing the term? Did all 20 have these two words line up this way? Or something else?
AI consumed the original paper, interpreted it as a single combined term, and regurgitated it for researchers too lazy to write their own papers.
Hot take: this behavior should get you blacklisted from contributing to any peer-reviewed journal for life. That's repugnant.
I don't think it's even a hot take
Yeah, this is a hot take: I think it’s totally fine if researchers who have done their studies and collected their data want to use AI as a language tool to bolster their paper. Some researchers legitimately have a hard time communicating, or English is a second language, and would benefit from a pass through AI enhancement, or as a translation tool if they’re more comfortable writing in their native language. However, I am not in favor of submitting it without review of every single word, or using it to synthesize new concepts / farm citations. That’s not research because anybody can do it.
It is also a somewhat hot take because it kind of puts the burden of a systemic misconfiguration on individuals' shoulders (oh hey, we've seen this before, after and all the time, hashtag (neo)liberalism).
I agree that the people who did that fucked up. But having your existence as an academic, your job, maybe the only thing you're good at, rely on publishing a ton of papers no matter what should be taken into account.
This has been a huge problem for science since long before LLMs.
It's a hot take, but it's also objectively the correct opinion
Unfortunately, the former is rather what should be the case, although so many times it is not:-(.