this post was submitted on 04 Sep 2024
262 points (100.0% liked)

TechTakes

1276 readers
104 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago
[–] swlabr@awful.systems 86 points 2 weeks ago

LLMs, and everyone who uses them to process information:

[–] hex@programming.dev 63 points 2 weeks ago (1 children)

Facts are not a data type for LLMs

I kind of like this because it highlights the way LLMs operate kind of blind and drunk: they're just really good at predicting the next word.

[–] CleoTheWizard@lemmy.world 28 points 2 weeks ago (1 children)

They’re not good at predicting the next word; they’re good at predicting the next common word while excluding most unique choices.

The result is essentially a Venn diagram of human language where only the center ever gets used.
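The "predicting the next common word" point can be illustrated with a toy sampler. This is a hedged sketch, not how any real model works internally: the token distribution is made up, and `sample_next` is a hypothetical helper showing how low sampling temperature concentrates choices on the most common token and squeezes out the rarer ones.

```python
import math
import random

def sample_next(probs, temperature=1.0, rng=None):
    """Sample a next token from a toy probability distribution.

    Low temperature sharpens the distribution toward common tokens;
    high temperature flattens it toward rarer ones.
    """
    rng = rng or random.Random(0)
    words = list(probs)
    logits = [math.log(probs[w]) / temperature for w in words]
    shift = max(logits)  # subtract max for numerical stability
    weights = [math.exp(l - shift) for l in logits]
    return rng.choices(words, weights=weights)[0]

# A made-up distribution over tokens following "the cat sat on the":
probs = {"mat": 0.6, "floor": 0.3, "chaise": 0.1}

# Near-zero temperature behaves like greedy decoding: the common word
# dominates and the unique choice ("chaise") almost never appears.
picks = [sample_next(probs, temperature=0.1, rng=random.Random(i))
         for i in range(100)]
print(picks.count("mat"))
```

At temperature 0.1 the softmax gives "mat" well over 99% of the probability mass, which is the "center of the Venn diagram" effect in miniature.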

[–] hex@programming.dev 15 points 2 weeks ago

Yes, thanks for clarifying what I meant! AI will never create anything unique unless prompted uniquely, and even then it will tend to revert to what you expect most.

[–] swlabr@awful.systems 47 points 2 weeks ago (1 children)

ATTN: If you're coming into this thread to say, "The output of AI is bad because your prompts suck," I'm just proud that you managed to figure out how to use the internet at all. Good job, you!

[–] froztbyte@awful.systems 14 points 2 weeks ago

remember remember, eternal september

(not that I much agree with the classist overtones of the original, but fuck me does it come to mind often)

[–] Sibbo@sopuli.xyz 31 points 2 weeks ago (1 children)

Well, to be fair, AI can do it in seconds. Which beats humans.

But whether that matters when the results are worthless is another question.

[–] HubertManne@moist.catsweat.com 12 points 2 weeks ago (3 children)

Yeah it changes the task from note taking or summarizing to proofreading.

[–] kbal@fedia.io 22 points 2 weeks ago

Made strange choices about what to highlight.

They certainly do. For a while it was common to see AI-generated summaries under links to articles on lemmy, so I got a feel for them. Seems to me you would not need any fancy artificial intelligence to do equally well: just take random excerpts, or maybe read every third sentence.
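The "every third sentence" baseline is trivial to implement. A toy sketch, with the caveat that the regex sentence splitter is a naive assumption of mine, not anything from the comment:

```python
import re

def naive_summary(text, step=3):
    """Keep every `step`-th sentence as a crude extractive 'summary'."""
    # Naive split: break after ., !, or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[::step])

doc = (
    "The report covers Q3. Revenue rose slightly. Costs were flat. "
    "The board met twice. A new office opened. Hiring slowed. "
    "Outlook remains cautious."
)
print(naive_summary(doc))
```

Sentences 1, 4, and 7 survive; whether that is better or worse than an LLM summary is left as an exercise.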

[–] dgerard@awful.systems 21 points 2 weeks ago (3 children)

how the hell did this of all the posts turn into a promptfondler shooting gallery

[–] froztbyte@awful.systems 11 points 2 weeks ago

1.26K subscribers

[–] dgerard@awful.systems 19 points 2 weeks ago

i have seen the light from the helpful posters here, made up bullshit alleged summaries of documents are great actually

[–] GBU_28@lemm.ee 15 points 2 weeks ago (3 children)

Dang everyone here needs to look at a tree or a cat or something. Energy is wack in here

[–] dgerard@awful.systems 29 points 2 weeks ago (1 children)

I just went outside and appreciated the rendering

[–] GBU_28@lemm.ee 10 points 2 weeks ago (3 children)

Pretty nice right? I did the trees and cats.

[–] khalid_salad@awful.systems 11 points 2 weeks ago

Could it be because a statistical relation isn't the same as a semantic one? No, I must be prompting it wrong. I'll just add "engineer" to my title and then everyone will take me seriously.
