this post was submitted on 05 May 2024

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post; there’s no quota for posting and the bar really isn’t that high

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

[–] sailor_sega_saturn@awful.systems 9 points 6 months ago (2 children)

Orange Site denizen plays Dr. LLM: https://news.ycombinator.com/item?id=40331850

Show NH [sic]: "data-to-paper" - autonomous stepwise LLM-driven research

data-to-paper is a framework for systematically navigating the power of AI to perform complete end-to-end scientific research, starting from raw data and concluding with comprehensive, transparent, and human-verifiable scientific papers

The example "research paper" was some useless fluff about diabetes, based off an existing data set (read: actual work produced by actual humans), and mad-libs.

The study identifies an inverse correlation between physical activity and fruit and vegetable intake with diabetes occurrence, while higher BMI is positively correlated

I'm too sleepy and statistics-impaired to check how nonsensical the regression "analysis" or findings are, so instead let's check out the references (read: the actual humans who were plagiarized to make this fluff)!

Reference #5

[5] T. Schnurr, Hermina Jakupovi, Germn D. Carrasquilla, L. ngquist, N. Grarup, T. Srensen, A. Tjnneland, K. Overvad, O. Pedersen, T. Hansen, and T. Kilpelinen. Obesity, unfavourable lifestyle and genetic risk of type 2 diabetes: a case-cohort study. Diabetologia, 63:1324–1332, 2020.

This incredibly managed to mangle all non-English alphabet names:

Hermina Jakupović, Germán D. Carrasquilla, Lars Ängquist, Thorkild I. A. Sørensen, Anne Tjønneland, Tuomas O. Kilpeläinen

I guess AI has an easier time advancing science than producing a PDF with non-ascii text in it
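
The garbling pattern is exactly what you'd get if something downstream threw away every non-ASCII byte rather than transliterating it. A minimal C++ sketch of that failure mode, assuming a UTF-8 source file and UTF-8 input (no claim that this is literally what data-to-paper's pipeline does, just what the output resembles):

```cpp
// Suspected failure mode: drop every byte outside 7-bit ASCII.
// In UTF-8, non-ASCII characters are encoded entirely with bytes >= 0x80,
// so filtering those bytes out deletes the accented letters wholesale.
#include <iostream>
#include <string>

// Remove every non-ASCII byte from a UTF-8 encoded string.
std::string strip_non_ascii(const std::string& utf8) {
    std::string out;
    for (unsigned char c : utf8) {
        if (c < 0x80) out.push_back(static_cast<char>(c));
    }
    return out;
}

int main() {
    // Surnames from reference #5, spelled correctly.
    for (const char* name : {"Jakupović", "Ängquist", "Sørensen", "Kilpeläinen"}) {
        std::cout << strip_non_ascii(name) << "\n";
    }
    // Prints Jakupovi, ngquist, Srensen, Kilpelinen: the same garbling
    // as in the generated reference.
}
```

Actual transliteration (ć to c, Ä to A, and so on) would have produced "Jakupovic" and "Angquist" instead, which is not what the reference shows.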

[–] V0ldek@awful.systems 8 points 6 months ago (1 children)

The fact that actual engineers have been trying to educate newcomers on Unicode for at least 20 years, and that not only is it still pervasively ignored but the hottest, newest, cutting-edge AI that Will Change Everything™, with billions of dollars and so many man-hours behind it, gets absolutely dumbfounded when it sees an é, is the exact combination of funny and sad that will eventually result in me turning into a Butlerian Jihad Joker.

[–] sailor_sega_saturn@awful.systems 9 points 6 months ago

Every respectable programming language has functionality in its standard library that recognises letter characters

As a C++ programmer I've never been so offended by something I so entirely agree with.
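
For anyone outside C++ land, the sore spot: the classic `std::isalpha` from `<cctype>` looks at one byte at a time in the current locale, so the two UTF-8 bytes of an é are not "letter characters" to it, and passing a plain (possibly negative) char is undefined behaviour on top of that. A small sketch, assuming a UTF-8 source file:

```cpp
// Why "recognises letter characters" stings for C++: std::isalpha works on
// single bytes in the current locale ("C" by default), so UTF-8 é never counts.
#include <cctype>
#include <cstdio>
#include <string>

int main() {
    std::string word = "café";          // four characters, five bytes in UTF-8
    for (unsigned char c : word) {      // unsigned char: plain char can be
                                        // negative, which is UB for isalpha
        std::printf("byte 0x%02x isalpha: %d\n",
                    static_cast<unsigned>(c), std::isalpha(c) != 0);
    }
    // 'c', 'a', 'f' report 1; the two bytes of é (0xc3 0xa9) report 0.
}
```

Doing this properly means decoding the UTF-8 first and then asking a Unicode-aware library such as ICU, none of which the standard library makes convenient.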

[–] froztbyte@awful.systems 2 points 6 months ago (1 children)

This incredibly managed to mangle all non-English alphabet names:

hmm. I can guess at a few reasons this could be happening: model coders "normalizing" everything to flat-ascii in training, or similar happening at training stage (because of the previously-referenced RLHF datamills employing only people with specific localized dialects, instead of wider context-local languages), etc.

wonder if this particular thing is a confluence of those, or just one specific set

[–] dgerard@awful.systems 2 points 6 months ago (2 children)

have you ever met an English-native dev who didn't need to be trained out of the world being 7-bit ascii

[–] jonhendry@iosdev.space 2 points 6 months ago (2 children)

@dgerard

7 bits were good enough for Jesus.

[–] zogwarg@awful.systems 3 points 6 months ago

First efforts at Bible digitization seem incredibly poorly documented online and, from a casual inspection on Google Scholar, not very well referenced. It's a pity; it sounds like a fascinating topic. Though 7 bits is likely accurate for the first English versions, yes (and according to this there are horrid 7-bit encodings for the ancient Greek).

[–] BurgersMcSlopshot@awful.systems 2 points 6 months ago

My Jesus wanted characters for drawing borders and playing card suits, which is why He handed down to us Code Page 437. Using the upper 128 characters for things like vowels with funny marks on them is catholic heresy (nuts to Latin 1, down with Unicode).

[–] froztbyte@awful.systems 2 points 6 months ago* (last edited 6 months ago)

I got lucky and largely missed out on having to deal with those, at a guess largely because of location and age. the type I got to deal with instead was the php-/perl-brained "everything is just a string" crowd

hell, (circa 2010) I had beers with someone once who was really into Tcl and Second Life, and wanted to be uploaded as a digital consciousness. way before I knew about the other nutjobs, but in retrospect I now have a couple other questions I might've wanted to ask at the time...