this post was submitted on 16 Jun 2025
56 points (100.0% liked)

"TheFutureIsDesigned" bluechecks thusly:

You: takes 2 hours to read 1 book

Me: take 2 minutes to think of precisely the information I need, write a well-structured query, tell my agent AI to distribute it to the 17 models I've selected to help me with research, who then traverse approximately 1 million books, extract 17 different versions of the information I'm looking for, which my overseer agent then reviews, eliminates duplicate points, highlights purely conflicting ones for my review, and creates a 3-level summary.

And then I drink coffee for 58 minutes.

We are not the same.

For bonus points:

I want to live in the world of Hyperion, Ringworld, Foundation, and Dune.

You know, Dune.

(Via)

[–] swlabr@awful.systems 8 points 23 hours ago* (last edited 23 hours ago) (1 children)

take 2 minutes to think of precisely the information I need

I can’t even put into words the full nonsense of this statement. How do you think this would work? This is not how learning works. This is not how research works. This is not how anything works.

This part threw me as well. If you can think of it, why read for it? It didn't make sense, so I stopped looking into this particular abyss until you pointed it out again.

I think the only interpretation of what this person said that approaches some level of rationality on their part is essentially a form of confirmation bias. They aren't thinking of information that is in the text; they are thinking "I want this text to confirm X for me", then they prompt and get what they want. LLMs are tuned to be people-pleasers and will happily spin up whatever hallucinated tokens confirm the premise the user throws at them. That's my best guess.

That you didn’t think of the above just goes to show the failure of your unfeeble mind’s logic and reason to divine such a truth. Just kidding, sorta, in the sense that you can’t expect to understand an irrational thought process using rationality.

But if it's not that, I'm still thrown.

[–] HedyL@awful.systems 8 points 22 hours ago (1 children)

They aren’t thinking of information that is in the text, they are thinking “I want this text to confirm X for me”, then they prompt and get what they want.

I think it's either that, or they want an answer they could impress other people with (without necessarily understanding it themselves).

[–] swlabr@awful.systems 7 points 22 hours ago

Oh, that's a good angle too. Prompt the LLM with "what insights does this book have about B2B sales" or something.