this post was submitted on 10 Aug 2025
845 points (97.8% liked)

Comic Strips


[old scientist, pointing at some data] After decades of research, thousands of experiments, a massive amount of peer reviewing, we can finally confidently conclude...

[smug dude with a ridiculous hairstyle] Uh yeah, but this TikTok by PatriotEagle1776 says your research is wrong

https://thebad.website/comic/disproven

https://bsky.app/profile/thebad.website

[–] blanketswithsmallpox@lemmy.world 31 points 6 days ago* (last edited 6 days ago) (3 children)

Did they account for X, Y, Z?

What about all my personal anecdotes!

OMG this was just a survey? HOW IN THE WORLD COULD YOU EVER TELL IF SOMEONE LIED!?

Researchers who already thought of all this and it's in the study: -_-

[–] MBM@lemmings.world 1 point 2 days ago

Lol look at that sample size, only 1000 people

[–] kameecoding@lemmy.world 7 points 6 days ago (1 child)

Tbf, research based on a survey is much less valuable than a double-blind, randomized study.

[–] Tja@programming.dev 6 points 6 days ago (1 child)

You might need a larger sample, and sometimes a blind study is just not possible.

[–] mnemonicmonkeys@sh.itjust.works 1 point 6 days ago (1 child)

Even then, the error bars are usually huge. If we're talking about cigarette smoke causing lung cancer (which has a relative risk increase of 10,000%), those error bars aren't an issue. But if you're surveying people about their diet over the past 30 years to connect it to colon cancer and you get a relative risk increase of ~5%, the whole thing should be thrown out, because the error bars are more like ±100%.
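
To put rough numbers on the error-bar point, here's a sketch (all counts are hypothetical, not from any real study) using the standard log-normal approximation for a relative-risk confidence interval: a small effect measured in survey-sized groups gives an interval that easily straddles 1.0 (no effect), while a much larger effect does not.

```python
import math

def rr_confint(a, n1, b, n2, z=1.96):
    """95% CI for a relative risk via the log-RR normal approximation.
    a/n1 = cases/total in the exposed group, b/n2 = in the unexposed group."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # std. error of log(RR)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# Hypothetical survey: 21 cancer cases among 5,000 "risky diet" respondents
# vs 20 among 5,000 controls -> RR = 1.05, i.e. a ~5% relative increase.
print(rr_confint(21, 5000, 20, 5000))   # ~(1.05, 0.57, 1.93): straddles 1.0
# A much larger hypothetical effect with the same group sizes:
print(rr_confint(200, 5000, 20, 5000))  # ~(10.0, 6.3, 15.8): nowhere near 1.0
```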

[–] Tja@programming.dev 0 points 6 days ago (1 child)

Thus the larger sample, to get something statistically significant. Which might not be practical due to cost.
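
For a sense of what "larger sample" means in practice, here's a back-of-the-envelope sketch (the baseline risk is a made-up figure) using the textbook normal-approximation sample-size formula for comparing two proportions. Detecting a ~5% relative increase on a rare outcome takes groups in the millions; a 100% increase takes thousands.

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group n for a two-sided two-proportion z-test
    (alpha = 0.05, power = 0.80), via the usual normal approximation."""
    pbar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

base = 0.004  # hypothetical 0.4% baseline risk of the outcome
print(n_per_group(base, base * 1.05))  # ~5% relative increase: ~1.6 million per group
print(n_per_group(base, base * 2.00))  # 100% relative increase: ~5,800 per group
```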

[–] mnemonicmonkeys@sh.itjust.works 0 points 5 days ago (1 child)

Some methods suck no matter how much data you throw at them.

The study I was referencing had thousands of people taking its survey, and the data quality was still terrible, because that's what you get when you ask people to recall what they ate over the past 20-30 years. Adding yet more people won't clean up the data, and it would add enough cost that it'd be cheaper to run close-observation studies of 100 people and actually get usable results.

The general guideline for epidemiological studies (which both of my examples are) is that you cannot draw conclusions from a relative risk increase of less than 100%.

So please stop with the blanket statement that "more data means better results". It's not true, and it's the same claim AI tech bros keep making to fleece gullible investors.

[–] Tja@programming.dev 0 points 5 days ago (1 child)

More data does mean better results.

[–] mnemonicmonkeys@sh.itjust.works 1 point 5 days ago (1 child)

> More data does mean better results.

So when I can't get a useful trendline on a graph of % of redheads born per number of bananas eaten by the mother, you're saying it's because I didn't collect enough data? Why didn't I think of that?

[–] Tja@programming.dev 1 points 5 days ago

No trend is also a result. More data, more confidence.
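
That's how confidence intervals behave under sampling error. A toy sketch (pure-noise data, echoing the bananas example above): the fitted slope stays near zero while its uncertainty shrinks roughly as 1/√n, so "no trend" becomes a confident conclusion rather than a mere absence of one. (It won't fix systematic bias like faulty recall, though.)

```python
import numpy as np

rng = np.random.default_rng(0)

def slope_ci(n):
    """Fit a line to pure-noise data (true slope = 0) and return the
    estimated slope plus the approximate 95% CI half-width."""
    x = rng.uniform(0, 10, n)   # e.g. bananas eaten (hypothetical)
    y = rng.normal(0, 1, n)     # outcome with no real relationship to x
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    se = np.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())
    return slope, 1.96 * se

for n in (100, 10_000, 1_000_000):
    s, hw = slope_ci(n)
    print(f"n={n:>9,}: slope = {s:+.4f} +/- {hw:.4f}")
# The slope estimate stays near 0 while the +/- shrinks ~ 1/sqrt(n):
# more data doesn't conjure a trend, it narrows how big any trend could be.
```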

[–] UnderpantsWeevil@lemmy.world 2 points 5 days ago

A lot of mass media is full of bullshit, and people showing skepticism, asking further questions, and wanting second opinions is generally a good, healthy response. Particularly in an era of Dr. Oz-tier professional bullshit and blaring "Head On, Apply Directly To The Forehead!"-style advertisements.