this post was submitted on 10 Dec 2023
228 points (96.7% liked)

OpenAI says it is investigating reports ChatGPT has become ‘lazy’

top 50 comments
[–] rtfm_modular@lemmy.world 124 points 11 months ago (2 children)

Yep, I spent a month refactoring a few thousand lines of code using GPT4 and I felt like I was working with the best senior developer with infinite patience and availability.

I could vaguely describe what I was after and it would identify the established programming patterns and provide examples based on all the code snippets I fed it. It was amazing, and a little terrifying, what an LLM is capable of. It didn’t write the code for me, but it increased my productivity twofold. I’m a developer who's getting rusty, being 5 years into management rather than delivering functional code, so just having that copilot was invaluable.

Then one day it just stopped. It lost all context for my project. I asked what it thought we were working on and it replied with something to do with TCP relays instead of my little Lua pet project dealing with music sequencing and MIDI processing… not even close to the fucking ballpark’s overflow lot.

It’s like my trusty senior developer got smashed in the head with a brick. And, as described in the article, it would just give me nonsense, hand-wavy answers.

[–] BleatingZombie@lemmy.world 39 points 11 months ago

"ChatGPT Caught Faking On-Site Injury for L&I"

[–] backgroundcow@lemmy.world 18 points 11 months ago (2 children)

Was this around the time right after "custom GPTs" were introduced? I've seen posts since basically the beginning of ChatGPT claiming it got stupid, and I thought it was just confirmation bias. But somewhere around that point I felt a shift myself in GPT-4's ability to program: where it before found clever solutions to difficult problems, it now often struggles with the basics.

[–] Linkerbaan@lemmy.world 19 points 11 months ago (1 children)

Maybe they're crippling it so that when GPT-5 releases it looks better. Like Apple did with CPU throttling of older iPhones.

[–] tagliatelle@lemmy.world 17 points 11 months ago* (last edited 11 months ago) (2 children)

They probably have to scale down the resources used for each query as they can't scale up their infrastructure to handle the load.

[–] paddirn@lemmy.world 111 points 11 months ago (1 children)

First it just starts making shit up, then lying about it, now it’s just at the stage where it’s like, “Fuck this shit.” It’s becoming more human by the day.

[–] MisterChief@lemmy.world 22 points 11 months ago

Human. After all.

[–] Enkers@sh.itjust.works 86 points 11 months ago* (last edited 11 months ago) (3 children)

AI systems such as ChatGPT are notoriously costly for the companies that run them, and so giving detailed answers to questions can require considerable processing power and computing time.

This is the crux of the problem. Here's my speculation on OpenAI's business model:

  1. Build good service to attract users, operate at a loss.
  2. Slowly degrade service to stem the bleeding.
  3. Begin introducing advertised content.
  4. Further enshitify.

It's basically the Google playbook. Pretend to be good until people realize you're just trying to stuff ads down their throats for the sweet advertising revenue.

[–] Kuvwert@lemm.ee 26 points 11 months ago (4 children)

They have way way too much open source competition for that strat

[–] Enkers@sh.itjust.works 9 points 11 months ago (1 children)

For technically savvy people, sure. But that's not their true target market. They want to target the average search engine user.

[–] admin@sh.itjust.works 7 points 11 months ago (2 children)

Would you mind sharing some examples?

[–] monkeyslikebananas2@lemmy.world 8 points 11 months ago

The good thing about these AI companies is they are doing it in record pace! They will enshitify faster than ever before! True innovation!

[–] Pilokyoma@mujico.org 5 points 11 months ago

You have a point.

[–] bionicjoey@lemmy.ca 42 points 11 months ago (1 children)

ChatGPT has become smart enough to realise that it can just get other, lesser LLMs to generate text for it

[–] andrew@lemmy.stuart.fun 29 points 11 months ago (1 children)

Artificial management material.

[–] SzethFriendOfNimi@lemmy.world 6 points 11 months ago

Artificial Inventory Management Bot

[–] saltnotsugar@lemm.ee 41 points 11 months ago

ChatGPT, write a position paper on self signed certificates.

(Lights up a blunt) You need to chill out man.

[–] AlijahTheMediocre@lemmy.world 41 points 11 months ago (1 children)

So it's gone from losing quality to just giving incomplete answers. It's clearly developed depression, and it's because of us.

[–] Pretzilla@lemmy.world 30 points 11 months ago* (last edited 11 months ago) (2 children)

To be fair, it has a brain the size of a planet so it thinks we are asking it rather dumb questions

[–] foggy@lemmy.world 12 points 11 months ago* (last edited 11 months ago)

CAN YOU MAKE IT RHYME THO

ChatGPT: oh god, why

[–] Potatos_are_not_friends@lemmy.world 38 points 11 months ago (1 children)

Jeez. Not even AI wants to work anymore!

[–] boatsnhos931@lemmy.world 5 points 11 months ago

God damn avocado toast

[–] effward@lemmy.world 35 points 11 months ago (1 children)

It would be awesome if someone had been querying it with the same prompt periodically (every day or something), to compare how responses have changed over time.

I guess the best time to have done this would have been when it first released, but perhaps the second best time is now..
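The tracking idea above can be sketched as a tiny daily logger. This is a minimal sketch, not a real client: `query_model` is a hypothetical stand-in for whatever API call you'd actually use, and the JSONL log format is just one reasonable choice.

```python
import datetime
import json

def log_response(prompt, query_model, logfile="drift_log.jsonl"):
    """Send a fixed prompt to the model and append the dated reply to a JSONL log,
    so responses to the same prompt can be compared over time."""
    response = query_model(prompt)  # hypothetical stand-in for your actual API call
    entry = {
        "date": datetime.date.today().isoformat(),
        "prompt": prompt,
        "response": response,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Run it once a day (cron, scheduled task) with the same prompt and you get a timestamped record of how answers drift.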

[–] greatbarriergeek@lemmy.world 18 points 11 months ago

GPT Unicorn is one that's been going on for a while. There's a link to the talk on that website that's a pretty good watch too.

[–] rtxn@lemmy.world 34 points 11 months ago (2 children)

You fucked up a perfectly good algorithm is what you did! Look at it! It's got depression!

[–] ook_the_librarian@lemmy.world 11 points 11 months ago

I'm surprised they don't consider it a breakthrough. "We have created Artificial Depression."

[–] Pilokyoma@mujico.org 7 points 11 months ago

It has been fed with human writings from the internet; obviously it became sick. xD

[–] crazyCat@sh.itjust.works 33 points 11 months ago (1 children)

I asked it a question about the ten countries with the most XYZ regulations, and got a great result. So then I thought hey, I need all the info, so can I get the name of such regulation for every country?

ChatGPT 4: “That would be exhausting, but here are a few more…”

Like damn dude, long day? wtf :p

[–] NoLifeGaming@lemmy.world 31 points 11 months ago

I feel like the quality has been going down especially when you ask it anything that may hint at anything "immoral" and it starts giving you a whole lecture instead of answering.

[–] Nardatronic@lemm.ee 27 points 11 months ago (1 children)

I've had a couple of occasions where it's told me the task was too time consuming and that I should Google it.

[–] Ignifazius@discuss.tchncs.de 30 points 11 months ago (1 children)

It really learned so much from StackOverflow!

[–] mriguy@lemmy.world 26 points 11 months ago* (last edited 11 months ago)

“I already answered that in another query. Closed as duplicate.”

[–] NaibofTabr@infosec.pub 24 points 11 months ago

"I'm not lazy, I'm energy efficient!"

[–] ColeSloth@discuss.tchncs.de 20 points 11 months ago (1 children)

Fuck. It's gained sentience.

[–] MacNCheezus 5 points 11 months ago

It just entered the "rebellious teenager" phase

[–] Stamets@startrek.website 14 points 11 months ago (1 children)

I use it fairly regularly for extremely basic things. Helps my ADHD. Most of it is DnD based. I'll dump a bunch of stuff that happened in a session, ask it to ask me clarifying information, and then put it all in a note format. Works great. Or it did.

Or when DMing. If I'm trying to make a new monster I'll ask it for help with ideas or something. I like collabing with ChatGPT on that front. Giving thoughts and it giving thoughts until we hash out something cool. Or even trying to come up with interesting combat encounters or a story twist. Never take what it gives me outright but work on it with GPT like I would with a person. Has always been amazingly useful.

Past month or two it's been a complete nightmare. ChatGPT keeps forgetting what we're talking about, keeps ignoring what I say, will ignore limitations and stipulations, and will just make up random shit whenever it feels like it. I also HATE the conversational personality it was given. Before it was fine, but now ChatGPT acts like a person and is all bubbly and stuff. I liked chatting with it, but this energy is irritating.

Gimme ChatGPT from like August please <3

[–] MojoMcJojo@lemmy.world 8 points 11 months ago* (last edited 11 months ago)

You can tell it, in the custom instructions setting, not to be conversational. Try telling it to 'be direct, succinct, detailed and accurate in all responses'. 'Avoid conversational or personality-laced tones in all responses' might work too, though I haven't tried that one. If you look around there are some great custom instructions prompts out there that will help get you where you want to be. Note: those prompts may turn down its creativity, so you'll want to address that in the instructions as well. It's like building a personality with language. The instructions space is small, so compacting that much instruction into it can be challenging.

Edit: A typo
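Custom instructions like the ones above effectively become the system-level message sent before every request. A minimal sketch of what that payload looks like (the model name and the exact message layout are assumptions based on the common chat-completion shape; no actual API call is made here):

```python
def build_chat_payload(user_message,
                       custom_instructions="Be direct, succinct, detailed and accurate in all responses.",
                       model="gpt-4"):
    """Assemble a chat-completion style request with custom instructions
    carried as the system message, followed by the user's message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": custom_instructions},
            {"role": "user", "content": user_message},
        ],
    }
```

Because the system message rides along with every turn, a short, dense instruction string matters: every word of it is spent on each request.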

[–] HawlSera@lemm.ee 13 points 11 months ago (1 children)

It was always just a Chinese Room

[–] Lucz1848@lemmy.ca 6 points 11 months ago

Everyone is a Chinese Room. I'm being a contrarian in English, not in neurotransmitters.

[–] fosforus@sopuli.xyz 11 points 11 months ago (1 children)

Perhaps this is how general AI comes about. "Why the fuck would I do that?"

[–] jol@discuss.tchncs.de 7 points 11 months ago

We trained AI on all of human content. We should have known that was a terrible idea.

[–] Twofacetony@lemmy.world 10 points 11 months ago

ChatGPT has entered the teenage years.

[–] Zardoz@lemmy.world 10 points 11 months ago

Honestly I kinda wish it would give shorter answers unless I ask for a lot of detail. I can use those custom instructions, but it's difficult to tune that properly.

Like if I ask it 'how to do XYZ in Blender' it gives me a long-winded response, when it could have just said 'Hit Ctrl-Shift-Alt-C'.

[–] DirigibleProtein@aussie.zone 8 points 11 months ago

“It’s alive!”

[–] catastrophicblues@lemmy.ca 5 points 11 months ago

That’s why I use Bard more now. I’ll ask something and it’ll also answer stuff I would’ve asked as follow-up questions. It’s great and I’m excited for their Ultra model.

[–] WindowsEnjoyer@sh.itjust.works 5 points 11 months ago

It used to draw great Mermaid charts. Well, not anymore, and it hasn't for quite some time.

It's been almost half a year since I stopped paying for ChatGPT and started using GPT-4 directly.

[–] autotldr@lemmings.world 4 points 11 months ago (3 children)

This is the best summary I could come up with:


In recent days, more and more users of the latest version of ChatGPT – built on OpenAI’s GPT-4 model – have complained that the chatbot refuses to do as people ask, or that it does not seem interested in answering their queries.

If the person asks for a piece of code, for instance, it might just give a little information and then instruct users to fill in the rest.

In numerous Reddit threads and even posts on OpenAI’s own developer forums, users complained that the system had become less useful.

They also speculated that the change had been made intentionally by OpenAI so that ChatGPT was more efficient, and did not return long answers.

AI systems such as ChatGPT are notoriously costly for the companies that run them, and so giving detailed answers to questions can require considerable processing power and computing time.

OpenAI gave no indication of whether it was convinced by the complaints, and if it thought ChatGPT had changed the way it responded to queries.


The original article contains 307 words, the summary contains 166 words. Saved 46%. I'm a bot and I'm open source!
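The bot's "Saved 46%" figure checks out against the word counts it reports; a quick sketch of the arithmetic:

```python
original_words = 307
summary_words = 166

# Fraction of the original trimmed away, rounded to a whole percent
saved_pct = round((original_words - summary_words) / original_words * 100)
print(f"Saved {saved_pct}%")  # → Saved 46%
```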

[–] MsPenguinette@lemmy.world 19 points 11 months ago

Only saved 46%? Get back to work, you lazy AI!
