submitted 11 months ago by Alfa@lemmy.world to c/chatgpt@lemmy.world
[-] Karmmah@lemmy.world 72 points 11 months ago

I also noticed that ChatGPT can't actually correct itself. It just says "oh sorry, here's something different" and gives you another crap answer. I noticed it with code specifically; if I remember correctly it was better when it was brand new.

[-] squiblet@kbin.social 25 points 11 months ago

The apology thing is sort of hilarious. I wonder what exactly they did to make it eternally apologetic. There was an article on HN recently about how it is basically impossible to get ChatGPT to stop apologizing; if you ask it to stop, it will apologize for apologizing.

[-] nepenthes@lemmy.world 36 points 11 months ago

As a Canadian, I have also apologized for apologizing 😞

[-] Miimikko@lemmy.world 2 points 11 months ago

I hope you are sorry about that.

[-] isVeryLoud@lemmy.ca 3 points 11 months ago
[-] XTornado@lemmy.ml 2 points 11 months ago

Nice reference although I do not remember where it comes from.

[-] isVeryLoud@lemmy.ca 1 points 11 months ago

Reddit: a goalie kicked someone in the face and yelled "ARE YOU FUCKING SORRY???", mixing up "Are you OK?" and "I'm so fucking sorry".

[-] aquinteros@lemmy.world 1 points 11 months ago

chatgpt is Canadian?

[-] Anticorp@lemmy.ml 12 points 11 months ago

I experienced exactly that! I told it to stop apologizing for everything and just respond with correct answers and it apologized for not being able to stop apologizing.

[-] squiblet@kbin.social 5 points 11 months ago

From HN,

Me: Can you stop using the word apologize whenever you are talking to me?

ChatGPT: I apologize (I will try to stop using the word "apologize" in my responses to you) if the use of the word "apologize" in my responses has been inappropriate or has caused any discomfort. As an AI language model, I am programmed to try to be as helpful and accurate as possible, and I apologize (I will try to use a different word) when I make mistakes or do not provide the information or assistance that is requested.

I will make an effort to avoid using the word "apologize" in my responses to you in the future. Please do not hesitate to let me know if you have any further questions or need any additional information. I am here to help and will do my best to provide accurate and helpful responses to your questions.

Also here is a recent HN discussion: https://news.ycombinator.com/item?id=36949931

[-] Anamnesis@lemmy.world 8 points 11 months ago

This almost seems like it's being a sarcastic little shit

[-] Very_Bad_Janet@kbin.social 7 points 11 months ago

So ChatGPT is now neurotic?

[-] Anticorp@lemmy.ml 4 points 11 months ago

It is obsequious and has a guilt complex.

[-] relevants@feddit.de 8 points 11 months ago

It's because humans have rated potential responses, and ChatGPT has been trained to generate the kind of responses that most consistently get a preferred rating. You can imagine how an AI trained to say what people want to hear would become a people pleaser.
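For the curious, the preference-rating idea can be sketched with a toy Bradley-Terry model, which is a common way to turn pairwise human ratings into a training signal. This is an illustrative guess at the setup, not OpenAI's actual code:

```python
import math

def preference_prob(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry model: probability the 'chosen' response is preferred,
    given scalar rewards assigned to each response."""
    return 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Training loss: minimized by pushing the human-preferred response's
    reward above the rejected one's -- hence the people-pleasing drift."""
    return -math.log(preference_prob(reward_chosen, reward_rejected))
```

With equal rewards the model is indifferent (probability 0.5); the loss shrinks as the margin between chosen and rejected grows, which is exactly the pressure that rewards agreeable, apologetic phrasings if raters prefer them.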

[-] CeruleanRuin@lemmy.world 4 points 11 months ago

That's what frustrates me the most whenever I try to use it. I tell it to be less verbose, stop over explaining and apologizing every time I correct it, and it just spits out another four paragraphs explaining why it's sorry.

[-] XEAL@lemm.ee 1 points 11 months ago

The only solution I can think of is using it via the API with Python and making a final call that asks it to remove the apologies from the text, though that increases token usage.

I do something similar when I need the model to keep the language of a text before performing a task on it: I send the model a chunk of text and ask it to respond with a single word indicating the language, then include that in the next prompt, like "Your output must be in SPANISH".
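The two-pass idea above can be sketched roughly like this; `ask` is a placeholder for whatever chat-completion call you actually use (the real client and model names are up to you):

```python
def strip_apologies(text: str, ask) -> str:
    """Second pass: ask the model to rewrite its own reply without apologies.
    `ask` stands in for a chat-completion call -- it takes a prompt string
    and returns the model's reply as a string."""
    prompt = f"Rewrite the following text, removing all apologies:\n\n{text}"
    return ask(prompt)

def detect_language(text: str, ask) -> str:
    """First pass of the language trick: get a one-word language label,
    to be pinned in the next prompt ('Your output must be in SPANISH')."""
    prompt = f"Answer with a single word: what language is this text?\n\n{text}"
    return ask(prompt).strip().upper()
```

Both helpers double the round trips, which is where the extra token cost comes from.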

[-] peyotecosmico@programming.dev 7 points 11 months ago* (last edited 11 months ago)

Did you dare to say it became dumb when it interacted with us?

How dare you? /s

Ahem Tay tweets

[-] LetterboxPancake@sh.itjust.works 6 points 11 months ago

Like that Twitter bot that turned racist after talking to some people for a while.

[-] AustralianSimon@lemmy.world 2 points 11 months ago

Microsoft's AI "Tay"

[-] BadRS@lemmy.world 32 points 11 months ago

It cannot read. It doesn't see words or letters. It works with tokens that words are converted into. It can't count the number of letters in a word because it can't see them. OpenAI has a Tokenizer you can plug a prompt into to see how it's broken up, but you're asking a fish to fly.
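A toy sketch of the idea: greedy longest-match splitting over a made-up vocabulary. This is not OpenAI's actual BPE (their Tokenizer page shows the real splits), just an illustration of why the letters disappear:

```python
def toy_tokenize(word: str, vocab: set[str]) -> list[str]:
    """Greedy longest-match subword split -- a toy stand-in for BPE.
    The model sees the resulting pieces as opaque IDs, not letters."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # fall back to a single character
            i += 1
    return pieces

# A made-up vocabulary: "mayonnaise" comes out as a few chunks, so the
# model never "sees" the individual n's it would need to count.
vocab = {"may", "onna", "ise", "a", "e"}
print(toy_tokenize("mayonnaise", vocab))  # → ['may', 'onna', 'ise']
```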

[-] bogdart@lemmy.world 20 points 11 months ago

You aslo raed in tkoens, no dfifrecene heer

[-] ParkingPsychology@kbin.social 5 points 11 months ago

And a single "S" is also a token, one with vectors relating it to every other word that starts with an S.

One thing to point out here is that the word "sentences" is severely mistyped as "sententences". That's not going to help.

[-] CeruleanRuin@lemmy.world 1 points 11 months ago

Is there a workaround to "trick" it into understanding letters? I'd love to use it to play with language and brainstorm some riddles or other wordplay, but if it literally can't understand language on a human level, that's a fool's errand.
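One workaround people report is spelling the word out with separators, since each spaced-out letter tends to get its own token. And for anything countable, a couple of lines of code are the reliable route:

```python
word = "mayonnaise"

# Spacing the letters out ("m a y o n n a i s e") tends to give each letter
# its own token -- the usual prompt-side trick for letter-level questions.
spelled_out = " ".join(word)

# For anything a few lines of code can do, code is the dependable route:
n_count = word.count("n")
print(spelled_out)  # m a y o n n a i s e
print(n_count)      # 2
```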

[-] HappySword@feddit.de 23 points 11 months ago

I asked it how many "n" mayonnaise has and it came up with manaonnanaise

[-] Karmmah@lemmy.world 21 points 11 months ago

I'm not even mad, that's a great answer.

[-] LetterboxPancake@sh.itjust.works 4 points 11 months ago

It's not a good one, or a correct one, but I still laughed. And I'm relieved it won't cost too many jobs, at least in this version.

[-] squiblet@kbin.social 3 points 11 months ago

I feel like if these things ever become really self aware, they will be super fucking with us

[-] Anamnesis@lemmy.world 1 points 11 months ago

It's gonna be an Iain M Banks kind of super intelligence, for sure.

[-] Fingerthief@infosec.pub 11 points 11 months ago

Idk what I’m doing wrong, thankfully it always seems to listen and work fine for me lmao

[-] FrostyTheDoo@lemmy.world 19 points 11 months ago

The second sentence also had an s in it

[-] Fingerthief@infosec.pub 7 points 11 months ago* (last edited 11 months ago)

Now it’s broken; I guess I don’t use it this way often enough. Interesting nonetheless!

Edit - it’s very semantic: it matters whether I include an uppercase “S” or not. That’s amusing.

I wonder if the temperature settings adjustment would fix that or just make it even weirder.
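For reference, temperature just rescales the logits before the softmax, which controls how spread out the sampling is. A minimal sketch (the real sampling stack has more to it, e.g. top-p):

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Temperature rescales logits before softmax: low values sharpen the
    distribution toward the top choice, high values flatten it toward uniform."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

At temperature 0.1 the top logit takes nearly all of the probability mass; at 2.0 the choices are much closer to even, which is where the "weirder" outputs come from.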

[-] nieceandtows@programming.dev 3 points 11 months ago

The original reply included both s(2) and S(10) in it

[-] Fingerthief@infosec.pub 3 points 11 months ago* (last edited 11 months ago)

But it’s able to correct itself, unlike what’s shown in the OP messages.

Extremely semantic, it seems, but it clearly listens. It's neat to see how different each person's experience is.

Also, different tuning parameters etc. could make the outputs different. That might explain why mine is seemingly a bit better at listening.

[-] HappySword@feddit.de 1 points 11 months ago
[-] Fingerthief@infosec.pub 1 points 11 months ago

Look at the first question in my first screenshot. It gets that question correct for “mayonnaise” lol

[-] HappySword@feddit.de 2 points 11 months ago

It got it wrong when I asked it to list them

[-] Fingerthief@infosec.pub 1 points 11 months ago* (last edited 11 months ago)

I feel like ChatGPT itself probably has a fairly loose temp setting (just a hunch) and I tend to set my conversations up to be more on the strict side

I imagine that’s why our results differ. It’s strange that OpenAI doesn’t really let ChatGPT site users, or at least premium users, adjust anything yet.

[-] rarely@sh.itjust.works 8 points 11 months ago

Y'all seem to gloss over the word "artificial" when it comes to reading "artificial intelligence". That, or you're leaning too hard on the first definition:

  1. made or produced by human beings rather than occurring naturally, especially as a copy of something natural. "her skin glowed in the artificial light"
  2. (of a person or their behavior) insincere or affected. "an artificial smile" 🤖
[-] CeruleanRuin@lemmy.world 1 points 11 months ago

It's just so counterintuitive for a layman to have this tool that can write long, flowing passages of text and theoretically pass a rudimentary Turing test, yet it can't even begin to work with language on the level most toddlers can. We humans typically have to learn letters before we move up to words, sentences, paragraphs, and finally whole compositions. But this thing skipped right over the first several milestones and has no mechanism for reverse engineering that capability.

[-] cll7793@lemmy.world 7 points 11 months ago

Alignment at its finest.

[-] CeruleanRuin@lemmy.world 6 points 11 months ago

ChatGPT doesn't understand letters, or phonetics, or most other aspects of speech. I tried for an hour to train it to understand what a palindrome is, with the hopes of getting it to generate some new ones. Nothing stuck. It was like trying to teach a dog to write its name.
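For contrast, the definition it couldn't hold onto is a few lines of code. A sketch that ignores case, spaces, and punctuation:

```python
def is_palindrome(text: str) -> bool:
    """True if the text reads the same forwards and backwards,
    ignoring case, spaces, and punctuation."""
    letters = [c.lower() for c in text if c.isalnum()]
    return letters == letters[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("mayonnaise"))                      # False
```

The catch, per the tokenization comments above, is that this check is letter-by-letter, which is exactly the view of text the model doesn't have.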

[-] Jay@sh.itjust.works 5 points 11 months ago
[-] starman@programming.dev 3 points 11 months ago* (last edited 11 months ago)
[-] Anticorp@lemmy.ml 2 points 11 months ago

It has not. ChatGPT has been a monumental achievement, capable of performing previously impossible and highly impressive tasks. This is new behavior for it.

[-] kratoz29@lemmy.world 3 points 11 months ago

To be fair, that feature has sucked since the very beginning, at least for me.

this post was submitted on 03 Aug 2023
126 points (95.7% liked)
