this post was submitted on 25 Jul 2023
348 points (100.0% liked)

Technology

 

Over the past year and a half, Stack Overflow has lost around 50% of its traffic. The decline is reflected in site usage as well: roughly a 50% decrease in the number of questions and answers, and in the number of votes those posts receive.

The charts below show usage as a 49-day moving average.


What happened?

[–] bionicjoey@lemmy.ca 26 points 1 year ago (4 children)

Yeah, it gives you the answers you ask it to give you. It doesn't matter whether they're true, only whether they look like the thing you're looking for.

[–] magic_lobster_party@kbin.social 13 points 1 year ago (1 children)

An incorrect answer can still be valuable. It can give some hint of where to look next.

[–] thingsiplay@kbin.social 19 points 1 year ago (5 children)

@magic_lobster_party I can't believe someone wrote that. Incorrect answers do more harm than good. If the person asking doesn't know the answer, how are they supposed to recognize that it's incorrect and use it as a hint of where to look next?

I don't know about others' experiences, but I've been completely stuck on problems I only figured out how to solve with ChatGPT. It's very forgiving when I don't know the name of something I'm trying to do or don't know how to phrase it well, so even if the actual answer is wrong, it gives me somewhere to start and clues me in to the terminology to use.

[–] seang96@spgrn.com 8 points 1 year ago

In the context of coding it can be valuable. I created two tables in a database and asked it to write a query, and it did 90% of the job; it just used the wrong column for a join. If you're using it for coding, you should notice what's wrong very quickly, at least if you have some experience.
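To give a sense of what that kind of near-miss looks like, here's a minimal made-up sketch; the schema and column names are invented for illustration, not my actual tables:

```python
import sqlite3

# Invented schema, just to illustrate the "wrong join column" failure mode.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 2, 9.99), (2, 1, 25.00);
""")

# The kind of thing an LLM might plausibly hand back: joining on orders.id
# instead of orders.user_id. It runs fine, it just pairs orders with the wrong users.
wrong = con.execute(
    "SELECT u.name, o.total FROM users u JOIN orders o ON o.id = u.id ORDER BY u.name"
).fetchall()

# The corrected join, after spotting the wrong column.
right = con.execute(
    "SELECT u.name, o.total FROM users u JOIN orders o ON o.user_id = u.id ORDER BY u.name"
).fetchall()

print(wrong)  # [('alice', 9.99), ('bob', 25.0)] -- plausible-looking, but wrong
print(right)  # [('alice', 25.0), ('bob', 9.99)]
```

The broken version doesn't error out; the output just doesn't match the data, which is exactly the sort of thing a quick look catches if you know your schema.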

[–] psyspoop@kbin.social 4 points 1 year ago

In my experience, with both coding and the natural sciences, a slightly incorrect answer is very useful: you try to apply it, realize during initial testing or analysis where it's wrong, then tweak it until it's correct. That beats getting no answer at all, or being ridiculed by internet randos.

[–] magic_lobster_party@kbin.social 4 points 1 year ago (1 children)

Google the provided solution for additional sources. When I search for solutions to problems, I often don't get the right answer directly; the suggested fix may not even work for me.

But I might find other clues about the problem that aid further research. In the end I have all the clues I need to find the answer to my question.

[–] preciouspupp@sopuli.xyz 2 points 1 year ago (1 children)

How do you Google anything when all the results are AI-generated crap made to generate ad revenue?

Well, then I guess I'd have to survive with ChatGPT if the internet really is that riddled with search-engine-optimized garbage. Thankfully we're not there yet, at least not with computer tech questions.

[–] cat@feddit.it 2 points 1 year ago (1 children)

Well, if they're referring to coding solutions, they're right: sometimes non-working code can lead to a working solution. If you know what you're doing, of course.

[–] FaceDeer@kbin.social 2 points 1 year ago

Even if you don't know what you're doing, ChatGPT can still do well if you tell it what went wrong with the suggestion it gave you. Given that further context, it can debug its own code or realize it made wrong assumptions about what you were asking.

[–] QHC@kbin.social 9 points 1 year ago (1 children)

How is that practically different, from a user's perspective, from answers on SO? Either way, I still have to try the suggested solutions to see if they work in my particular situation.

[–] bionicjoey@lemmy.ca 8 points 1 year ago (1 children)

At least with those, you can be reasonably confident that a single person at some point believed their answer was a coherent solution.

[–] FaceDeer@kbin.social 3 points 1 year ago (1 children)

That doesn't exactly inspire confidence.

[–] bionicjoey@lemmy.ca 3 points 1 year ago (1 children)

Better than knowing there's some possibility the answer was generated purely because that sequence of characters had the highest probability of convincing the reader it's correct, given the sequence of characters it received as input (plus or minus a decent amount of RNG).

[–] FaceDeer@kbin.social 3 points 1 year ago (1 children)

Still debatable, IMO. Human belief is stubborn and self-justifying whereas an RNG can be rerolled as many times as needed.

[–] bionicjoey@lemmy.ca 2 points 1 year ago (1 children)

Yeah but if you keep rerolling the RNG, how do you know when a right answer gets randomly generated?

Also, my point above was that if a human believed the solution was true, it probably was true at some point. With generative language models, there's no guarantee that there's any logic to what it tells you.

[–] FaceDeer@kbin.social 2 points 1 year ago

You know when the code compiles and does what you want it to do. What's the point in asking for code if you're not going to run it? You'd be doing that with anything you got off of Stack Overflow too, presumably.
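In practice that check can be a handful of assertions around whatever came back. A minimal sketch, with an invented function standing in for an LLM-suggested snippet (nothing here is from an actual ChatGPT reply):

```python
import re

# Hypothetical snippet pasted from an LLM answer: turn a title into a URL slug.
def slugify(title: str) -> str:
    # Lowercase, collapse runs of non-alphanumerics into single hyphens,
    # and strip leading/trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# The "does it do what I want" step: a few quick checks before trusting it.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Stack Overflow -- traffic  ") == "stack-overflow-traffic"
assert slugify("") == ""
print("generated snippet passes the smoke tests")
```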

[–] Greg@lemmy.ca 8 points 1 year ago (1 children)

What point are you trying to make? LLMs are incredibly useful tools.

[–] bionicjoey@lemmy.ca 5 points 1 year ago (2 children)

Yeah, for generating prose, not for solving technical problems.

[–] Fingerthief@infosec.pub 4 points 1 year ago

You’ve never actually used them properly then.

[–] Greg@lemmy.ca 1 points 1 year ago* (last edited 1 year ago)

not for solving technical problems

One example is writing complex regex. A simple, well-written prompt can get you 90% of the way there. It's a huge time saver.
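As a made-up example of the kind of thing I mean (the pattern below is hypothetical, not an actual ChatGPT output), the remaining 10% is checking the returned regex against the cases you actually care about:

```python
import re

# A pattern of the sort an LLM might hand back for "match ISO 8601 dates (YYYY-MM-DD)".
iso_date = re.compile(r"^(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

# Spot checks before dropping it into real code.
assert iso_date.match("2023-07-25")
assert not iso_date.match("2023-13-01")  # month out of range
assert not iso_date.match("23-07-25")    # two-digit year
print("regex behaves as expected on the spot checks")
```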

for generating prose

It's great at writing boilerplate code, so I can spend more of my time architecting solutions instead of typing.
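By boilerplate I mean scaffolding like this, an invented example of code that's tedious to type but trivial to review:

```python
import argparse

# Repetitive CLI scaffolding: nothing clever, just typing saved.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Process a log file.")
    parser.add_argument("path", help="input file to process")
    parser.add_argument("-o", "--output", default="out.txt", help="where to write results")
    parser.add_argument("-v", "--verbose", action="store_true", help="chatty output")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    if args.verbose:
        print(f"reading {args.path}, writing {args.output}")
```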

[–] focus@lemmy.film 2 points 1 year ago

The good thing, if it gives you the answer in a programming language, is that it's quite simple to test if the output is what you expect. Also, a lot of humans give wrong answers...