this post was submitted on 29 Aug 2024
229 points (96.7% liked)

Science Memes

 
top 10 comments
[–] [email protected] 5 points 2 months ago

I hate statistical rigour, I want everything to be vibes-based instead

[–] [email protected] 0 points 2 months ago

That old man is a slave owner

[–] [email protected] 9 points 2 months ago

If the p is low, drop the h0

[–] [email protected] 7 points 2 months ago (2 children)
[–] [email protected] 29 points 2 months ago (1 children)

The "p value" is a number, calculated from a statistical test, that describes how likely you would be to obtain results at least as extreme as the ones you observed if the null hypothesis were true.

P values are used in hypothesis testing to help decide whether to reject the null hypothesis. The smaller the *p* value, the stronger the evidence against the null hypothesis.
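To make that concrete, here's a quick sketch of my own (not from the meme): an exact one-sided p-value for a coin-flip experiment, where the null hypothesis is that the coin is fair. The `binom_p_value` helper is just something I made up for illustration.

```python
from math import comb

def binom_p_value(heads, flips, p0=0.5):
    """One-sided exact p-value: the probability of seeing `heads`
    or more heads in `flips` tosses if the null hypothesis
    (the coin lands heads with probability p0 = 0.5) were true."""
    return sum(comb(flips, k) * p0**k * (1 - p0)**(flips - k)
               for k in range(heads, flips + 1))

# 14 heads in 20 flips looks suspicious, but isn't significant at 0.05
print(round(binom_p_value(14, 20), 4))  # 0.0577
```

So 14 heads out of 20 would happen by chance almost 6% of the time with a fair coin, which is exactly the "at least as extreme" tail probability the definition above is talking about.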

[–] [email protected] 25 points 2 months ago (2 children)

Adding onto this. p < 0.05 is the somewhat arbitrary standard that many journals have for being able to publish a result at all.

If you do an experiment to see whether X affects Y and get p = 0.05, you can say, "Either X affects Y, or it doesn't and an unlikely fluke event occurred during this experiment, one that had a 1 in 20 chance."

Usually, this kind of thing is publishable, but we've decided we don't want to read the paper if that number gets any higher than 1 in 20. No one wants to read the article on, "We failed to determine whether X has an effect on Y or not."
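That "1 in 20 fluke" rate can be checked by simulation (a sketch I put together, not from the thread): if the null hypothesis is true and you always declare significance at p < 0.05, you should flag a false positive in roughly 5% of experiments (slightly less here, because the binomial distribution is discrete).

```python
import random
from math import comb

def binom_p_value(heads, flips, p0=0.5):
    # Probability of `heads` or more heads in `flips` tosses of a fair coin
    return sum(comb(flips, k) * p0**k * (1 - p0)**(flips - k)
               for k in range(heads, flips + 1))

random.seed(42)  # reproducible run
trials, flips = 10_000, 100
false_positives = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(flips))  # null is true
    if binom_p_value(heads, flips) < 0.05:  # "significant" purely by fluke
        false_positives += 1

rate = false_positives / trials
print(rate)  # roughly 0.04-0.05
```

Run enough null experiments and about one in twenty clears the bar anyway, which is why the threshold alone can't tell you whether any single significant result is real.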

[–] [email protected] 11 points 2 months ago

Which is sad, because a lot of science is just ruling things out. We should still publish papers reporting inconclusive results from underpowered experiments, because even those start to put bounds on how strong an effect can be, if there is an effect at all.

[–] [email protected] 6 points 2 months ago (1 children)

That's a shame. Negative results are very important to the process.

[–] [email protected] 2 points 2 months ago

Especially considering that PDFs can be just a few MB, and I doubt people will care if they're not cached locally.

[–] [email protected] 5 points 2 months ago