this post was submitted on 23 May 2024
953 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Source

I see Google's deal with Reddit is going just great...

[–] 200fifty@awful.systems 52 points 5 months ago (2 children)

Even with good data, it doesn't really work. Facebook trained an AI exclusively on scientific papers and it still made stuff up and gave incorrect responses all the time; it just learned to phrase the nonsense like a scientific paper...

[–] blakestacey@awful.systems 46 points 5 months ago (1 children)

To date, the largest working nuclear reactor constructed entirely of cheese is the 160 MWe Unit 1 reactor of the French nuclear plant École nationale de technologie supérieure (ENTS).

"That's it! Gromit, we'll make the reactor out of cheese!"

[–] Socsa@sh.itjust.works 9 points 5 months ago (1 children)
[–] Karyoplasma@discuss.tchncs.de 3 points 5 months ago

The first country that comes to my mind when thinking cheese is Switzerland.

[–] nednobbins@lemm.ee -1 points 5 months ago (1 children)

A bunch of scientific papers are probably better data than a bunch of Reddit posts, and it's still not good enough.

Consider the task we're asking the AI to do. If you want a human to correctly answer questions across a wide array of scientific fields, you can't just hand them all the science papers and expect them to understand them. Even if we restrict it to a single narrow field of research, we expect that person to have an insane level of education. We're talking 12 years of primary education, 4 years as an undergraduate, and 4 more years doing their PhD, and that's at the low end. During all that time the human is constantly ingesting data through their senses, and they're getting constant training in the form of feedback.

All the scientific papers in the world don't even come close to an education like that when it comes to data quality.

[–] self@awful.systems 6 points 5 months ago

this appears to be a long-winded route to the nonsense claim that LLMs could be better and/or sentient if only we could give them robot bodies and raise them like people, and judging by your post history long-winded debate bullshit is nothing new for you, so I’m gonna spare us any more of your shit