this post was submitted on 15 Jul 2023
530 points (100.0% liked)

ChatGPT use declines as users complain about ‘dumber’ answers, and the reason might be AI’s biggest threat for the future::AI for the smart guy?

(page 2) 50 comments
[–] designated_fridge@lemmy.world 7 points 1 year ago (1 children)

The people who complain that they can no longer get answers on how to eliminate juice in the style of Hitler are - to be honest - completely missing the point of this revolution.

ChatGPT is the biggest developer productivity booster I have ever seen and I spend so much more time writing valuable code. Less time spent debugging, less time spent reviewing, etc. means more time for development of things that matter.

Each tech company that saw massive growth over the past 10-15 years has just received a new toy that will multiply its developers' output. There will be a clear difference between companies that manage to do this well and those that don't.

It's irrelevant if I can get ChatGPT to write a poem about poop or not. That's not the goal of this tool.

[–] Immersive_Matthew@sh.itjust.works 7 points 1 year ago (1 children)

I had my first WTF moment with AI today. I use the paid ChatGPT+ to help me with my C# in Unity. It has been a struggle to use, even with the smaller basic scripts you can paste into its character-limited prompt, as they often have compile errors. That said, if you keep feeding it the errors and guide it where it is making mistakes in design, logic, etc., it can often produce a working script, about 60-70% of the time. Getting to that working script often takes a fair amount of time, but the code that finally works is good.

Today I was asking it to edit a large C# script with one small change that meant lots of repetitive edits and references. Perfect for AI, yet ChatGPT+ really struggled on this one, which was a surprise. We went round and round with edits, and ultimately more and more errors appeared in the console. It often ends up in these never-ending edit loops, fixing the next set of errors introduced by the last corrected script. We're talking 3 hours of this, with ChatGPT+ finally saying that it needs to be able to see more of my project, which of course it cannot due to its input limitations, including the character limit, so that is often when I give up. That is the 30-40% that does not work out. A real bummer, as I invest so much time for no results.

It was at the moment I gave up today that a YouTube notification popped up about how Claude.ai is even better than ChatGPT, so I gave it the initial prompt that I gave ChatGPT above and it got the code right the first time. WOW!!!

The only issue was that it would stop spitting out code every 300 or so lines (unsure what the character limit is). To get around this I just asked it to give me the code from line 301 onwards, until I had the full script.

Unsure if this one situation confirms that coding with Claude.ai is better than with ChatGPT+, but it certainly has my attention and I will be using it more this week, as maybe that $20/month for ChatGPT+ no longer makes sense. Claude is free with no plans for a premium service, it said. Unsure if this is true as I have not spent any time investigating it yet, but I will be.

[–] foggy@lemmy.world 5 points 1 year ago

I had a similar use case.

I need it to alphabetize a list for me, only I need it to alphabetize by the inner, non-HTML text. Simplified, but something like the sketch below:
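A minimal sketch with made-up items (not the original list) - the point is that the sort key should be the text inside each tag, not the raw markup:

```python
# Hypothetical example: sort list items by their inner text, ignoring the HTML tags.
import re

items = [
    '<li><a href="/users/3">Charlie</a></li>',
    '<li><a href="/users/1">Alice</a></li>',
    '<li><a href="/users/2">Bob</a></li>',
]

def inner_text(html: str) -> str:
    """Strip the tags so only the visible text drives the sort order."""
    return re.sub(r"<[^>]+>", "", html).strip()

for item in sorted(items, key=lambda h: inner_text(h).lower()):
    print(item)
```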

It would get like 5 or 6 in alphabetical order and then just fuck it all up.

[–] glockenspiel@lemmy.world 6 points 1 year ago* (last edited 1 year ago)

Surely the rampant server issues are a big part of that.

OpenAI have been shitting the bed over the last 2 weeks with constant technical issues during the workday for the web front end.

[–] nottheengineer@feddit.de 6 points 1 year ago (1 children)

It definitely got more stupid. I stopped paying for plus because the current GPT4 isn't much better than the old GPT3.5.

If you check downdetector.com, it's obvious why they did this. Their infrastructure just couldn't keep up with the full-size models.

I think I'll get myself a proper GPU so I can run my own LLMs without worrying that they could stop working for my use case.

[–] anlumo@feddit.de 3 points 1 year ago (1 children)

GPT4 needs a cluster of around 100 server-grade GPUs that cost more than $20k each; I don’t think you have that lying around at home.

[–] nottheengineer@feddit.de 3 points 1 year ago

I don't, but a consumer card with 24GB of VRAM can run a model that's about as powerful as the current GPT3.5 in some use cases.

And you can rent some of that server-grade hardware for a short time to do fine-tuning, which lets you surpass even GPT4 in some niches.
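For reference, a local setup like that usually looks something like this - sketched with llama-cpp-python and an illustrative quantized model file, just as one example rather than a specific recommendation:

```python
# Rough sketch: run a quantized open-weight model on a single consumer GPU
# using llama-cpp-python. The model file name here is illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-13b-chat.Q4_K_M.gguf",  # quantized weights small enough for 24GB of VRAM
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=4096,       # context window size
)

out = llm("Explain what a binary search tree is.", max_tokens=256)
print(out["choices"][0]["text"])
```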

[–] rtfm_modular@lemmy.world 5 points 1 year ago

I’ve definitely seen GPT-4 become faster, and the output has been sanitized a bit. I still find it incredibly effective in helping with code reviews, where GPT-3 was never helpful in producing usable code snippets. At some point it stopped trying to write large swaths of code and started being a little more prescriptive, and you still need to actually implement the snippets it provides. But as a tool, it’s still fantastic. It’s like a sage senior developer you can rubber duck anytime you want.

I probably fall in the minority of people who think releasing a castrated version of GPT is the ethical approach. People outside the technology bubble have no comprehension of how these models work or their capacity for harm. Disinformation, fake news and engagement algorithms are already social ills that manipulate us emotionally, and most people are too technologically illiterate to see how pervasive these problems already are.

[–] zikk_transport2@lemmy.world 2 points 1 year ago

I was talking about it a month ago - others made fun of me.. 😂
