this post was submitted on 10 Aug 2025
Comic Strips
Comic Strips is a community for those who love comic stories.
The rules are simple:
- The post can be a single image, an image gallery, or a link to a specific comic hosted on another site (the author's website, for instance).
- The comic must be a complete story.
- If it is an external link, it must be to a specific story, not to the root of the site.
- You may post comics from others or your own.
- If you are posting a comic of your own, a maximum of one per week is allowed (I know, your comics are great, but this rule helps avoid spam).
- The comic can be in any language, but if it's not in English, OP must include an English translation in the post's 'body' field (note: you don't need to select a specific language when posting a comic).
- Be polite.
- AI-generated comics aren't allowed.
- Adult content is not allowed. This community aims to be fun for people of all ages.
Web of links
- !linuxmemes@lemmy.world: "I use Arch btw"
- !memes@lemmy.world: memes (you don't say!)
you are viewing a single comment's thread
Researchers who already thought of all this and it's in the study: -_-
Tbf, research based on a survey is much less valuable than a double-blind randomized study.
You might need a larger sample, and sometimes a blind study is just not possible.
Even then, the error bars are usually huge. If we're talking about cigarette smoke causing lung cancer (which has a relative risk increase of 10,000%), then those error bars aren't an issue. But if you're surveying people about their diet over the past 30 years to connect it to colon cancer and you get a relative risk increase of ~5%, the whole thing should be thrown out, because the error bars are more like ±100%.
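For a concrete sense of those error bars, here's a minimal sketch in Python. The counts and baseline rates are made up for illustration; the interval is the standard log-scale Wald CI for a relative risk:

```python
import math

def relative_risk_ci(cases_exp, n_exp, cases_unexp, n_unexp, z=1.96):
    """Relative risk with an approximate 95% Wald confidence interval,
    computed on the log scale."""
    rr = (cases_exp / n_exp) / (cases_unexp / n_unexp)
    se = math.sqrt(1 / cases_exp - 1 / n_exp + 1 / cases_unexp - 1 / n_unexp)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Smoking-sized effect (hypothetical counts): the interval is nowhere near 1.0
print(relative_risk_ci(500, 5_000, 50, 5_000))     # ≈ (10.0, 7.5, 13.3)

# Diet-survey-sized effect (hypothetical counts): the interval straddles 1.0
print(relative_risk_ci(420, 10_000, 400, 10_000))  # ≈ (1.05, 0.92, 1.20)
```

A ~5% point estimate whose interval comfortably contains 1.0 is statistically indistinguishable from no effect at all.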
Thus the larger sample, to get something statistically significant, which might not be practical due to cost.
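To put a rough number on that cost, here's a textbook two-proportion power calculation in Python. The 4% baseline rate is hypothetical; the z-values are the usual ones for a two-sided 5% alpha and 80% power:

```python
import math

def n_per_arm(p_control, rr, z_alpha=1.96, z_power=0.8416):
    """Approximate sample size per arm needed to detect a given relative
    risk with a two-proportion test (two-sided alpha = 0.05, power = 80%)."""
    p_exp = p_control * rr
    p_bar = (p_exp + p_control) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p_exp * (1 - p_exp)
                                       + p_control * (1 - p_control))) ** 2
    return math.ceil(numerator / (p_exp - p_control) ** 2)

print(n_per_arm(0.04, 1.05))  # ~5% RR increase: ≈ 154,000 people per arm
print(n_per_arm(0.04, 10.0))  # smoking-sized RR of 10: ≈ 20 people per arm
```

A 5% effect on a 4% baseline needs hundreds of thousands of participants; a tenfold effect needs a few dozen.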
Some methods suck no matter how much data you throw at them.
The study I was referencing had thousands of people taking its survey, and the data quality was still terrible, because that's what you get when you ask people to recall what they ate over the past 20-30 years. Adding yet more people to the study won't clean up the data, and it would add enough cost that it'd be cheaper to do close observation studies of 100 people, which would actually achieve usable results.
The general guideline for epidemiological studies (which both of my examples are) is that you cannot draw conclusions from a relative risk increase of less than 100%.
So please stop with the blanket statement that "more data means better results." It's not true, and it's the same claim AI tech bros keep making to fleece gullible investors.
More data does mean better results.
So when I can't get a useful trendline on a graph of % of redheads born per number of bananas eaten by the mother, you're saying it's because I didn't collect enough data? Why didn't I think of that?
No trend is also a result: more data, more confidence.
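Both replies can be true at once, and a toy simulation in Python (all numbers made up) shows where they diverge: more data shrinks random noise, but a systematic recall bias survives any sample size:

```python
import random

random.seed(42)

def mean_recall(n, bias=0.0):
    """Average of n noisy recollections of a true value of 0.0.
    `bias` models a systematic recall error shared by every respondent."""
    return sum(random.gauss(bias, 1.0) for _ in range(n)) / n

for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9}  unbiased={mean_recall(n):+.4f}  "
          f"biased={mean_recall(n, bias=0.1):+.4f}")

# The unbiased estimate converges to the true 0.0 as n grows (more data,
# more confidence); the biased one converges to 0.1 no matter how big n gets.
```

More data buys confidence against sampling noise; it buys nothing against the survey-design problems described upthread.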
A lot of mass media is full of bullshit, and people showing skepticism, asking further questions, and wanting second opinions is generally a good, healthy response, particularly in an era of Dr. Oz-style professional bullshit and blaring "Head On, Apply Directly To The Forehead!" advertisements.