this post was submitted on 13 May 2025
352 points (100.0% liked)
TechTakes
1851 readers
625 users here now
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
founded 2 years ago
Google used to return helpful results that answered questions without needing correction, before it started returning AI slop. So maybe that's true now, but only because the search results are the same AI slop as the AI.
For example, Stack Overflow results generally include some discussion of why a solution addressed the issue, which gives extra context for why you might use it or do something else instead. AI slop just returns an answer that may or may not be correct, presented as a solution without any context.
Google became shit not because of AI but because of SEO.
The enshittification was going on long before OpenAI was even a thing. Remember when we had to append "reddit" to queries just to get actual results instead of some badly written, bloated SEO text?
Google search became shit when they made the guy in charge of ads also in charge of search.
this is actually the correct answer - it is both well documented (prabhakar raghavan, look him up), and the exact mechanics of how they did it were detailed in documents surfaced in one of the lawsuits that google recently lost (the ones that found them to be a monopoly)
Ackshually, Google became shit when they started posturing as a for-profit entity. Gather round, comrades, let us sing the internationale
ah yes, the borg deep cuts, iykyk
I bring a sort of biological and technological uniqueness to the collective that the federation doesn’t really like
The funny thing about stack overflow is that the vocal detractors have a kernel of truth to their complaints about elitism, but if you interact with them enough you realize they're often the reason the gate keeping is necessary to keep the quality high.
I used to answer new questions on SO daily a few years back and 50% of all questions are basically unanswerable.
You'd also have the nice September Effect when a semester started and every other question would be someone just copy pasting their homework verbatim and being very surprised we closed it in like a minute.
The thing about that is that literally anyone can answer SO questions, so try it yourself: pick a language or a tech you're most familiar with, filter that tag, and sort by new. Click on every new question. After an hour you'll understand exactly why most questions have to be closed immediately to keep the site sane.
Whenever I see criticism of SO that's like "oh they'll just close your question for no reason" I can't help but think okay, there's overwhelming chance you're just one of Those and not an innocent casualty of an overeager closer.
I remember in my OS course we were advised to practice good “netiquette” if we were going to go bother the fine folks on stack overflow. Times have changed
Stack Overflow tended to give you highly specialised examples that wouldn't suit your application. It's easier to just ask an AI to write a simple loop for you whenever you forget a bit of syntax
Fun fact, SO is not a place to go to ask for trivial syntax and it's expressly off-topic, because guess what, people answering questions on SO are not your personal fucking google searchers
wow imagine needing to understand the code you’re dealing with and not just copypasting a bunch of shit around
reading documentation and source code must be an excruciating amount of exercise for your poor brain - it has to even do something! poor thing
You've inadvertently pointed out the exact problem: LLM approaches can (unreliably) manage boilerplate and basic stuff but fail at anything more advanced, and by handling the basic stuff they give people false confidence that leads to them submitting slop (that gets rejected) to open source projects. LLMs, as the linked pivot-to-ai post explains, aren't even at the level of occasionally making decent open source contributions.
Man, I remember Eclipse doing code completion for for-loops and other common snippets in like 2005. LLM riders don't even seem to know what tools have been in use for decades, and think using an LLM for these things is somehow revolutionary.
Forever in my mind: the guy who said on another post that he uses an LLM to convert strings to uppercase, when that's literally a built-in command in VSCode. Give people cannons and they'll start shooting mosquitoes with them every fucking time.
the promptfondlers that make their way into our threads sometimes try to brag about how the LLM is the only way to do basic editor tasks, like wrapping symbols in brackets or diffing logs. it’s incredible every time
Promptfondlers 🤣
yep, I came up with "promptfans" (as a reference for describing all the weirdos who do free PR and hype work for this shit), and then @skillsissuer came up with "promptfondlers" for describing those that do this kind of bullshit (and "promptfuckers" has become the collective noun I think of for all of them)
I like "promptfarmers" for the LLM companies and developers. It reflects their attitude of passively hoping that letting their model grow in scale will bring in some future harvest of money.
hmm, I like that!
and then I guess "promptfarmowner" would be saltman etc?
Grain futures salesmen on farms full of plant life (99.5% of which is weeds). ...I don't have a snappy label yet.
artisanal legumist
Air so polluted it makes people sick, but it's all worth it because you can't be arsed to remember the syntax of a for loop.