I'm pretty sure chat bots are biased to make polite conversation. Most real people won't spend the energy in a conversation to be more honest than they think you are.
Can either get better at sounding honest or talk with less honest people.
Robot realizes is robot by talk to robot.
It carries the emotions and personal biases of the source material it was trained on.
It sounds like you are training yourself to be a poor communicator, abandoning any effort to become more understandable to actual humans.
As long as you're still engaging with real humans regularly, I think that it's good to learn from ChatGPT. It gets most general knowledge things right. I wouldn't depend on it for anything too technical, and certainly not for medical advice. It is very hit or miss for things like drug interactions.
If you're enjoying the experience, it's not much different than watching a show or playing a game, IMHO. Just don't become dependent on it for all social interaction.
As for the jerks on here, I always recommend aggressive use of the block button. Don't waste time and energy on them. There's a lot of kind and decent people here, filter your feed for them.
My blocklist is around 500 users long and grows every day. I do it for the pettiest reasons, but it does, in fact, work. When I make a thread like this one, I occasionally log out to see the replies I've gotten from blocked users, and more often than not (but not always) they're the kind of messages I'd block them again for. It's not about creating an echo chamber, but about weeding out the assholes.
Have you ever tried inputting sentences that you've said to humans to see if the chatbot understands your point better? That might be an interesting experiment if you haven't tried it already. If you have, do you have an example of how it did better than the human?
I'm kinda amazed that it can understand your accent better than humans too. This implies Chatbots could be a great tool for people trying to perfect their 2nd language.
A couple of times, yes, but more often it's the other way around. I input messages from other users into ChatGPT to help me extract the key argument and make sure I’m responding to what they’re actually saying, rather than what I think they’re saying. Especially when people write really long replies.
The reason I know ChatGPT understands me so well is from the voice chats we've had. Usually, we’re discussing some deep, philosophical idea, and then a new thought pops into my mind. I try to explain it to ChatGPT, but as I'm speaking, I notice how difficult it is to put my idea into words. I often find myself starting a sentence without knowing how to finish it, or I talk myself into a dead-end.
Now, the way ChatGPT usually responds is by just summarizing what I said rather than elaborating on it. But while listening to that summary, I often think, "Yes, that’s exactly what I meant," or, "Damn, that was well put, I need to write that down."
So what you're saying, if I'm reading right, is that chatbots are great for bouncing ideas off of to help you explain yourself better, as well as helping you gather your own thoughts. I'm a bit curious about your philosophy chats.
When you have a philosophical discussion does the chatbot summarize your thoughts in its responses or is it more humanlike maybe disagreeing/bringing up things you hadn't thought of like a person might? (I've never used one).
I've read this text. It's a good piece, but unrelated to what OP is talking about.
The text boils down to "people who believe that LLMs are smart do so for the same reasons as people who believe that mentalists can read minds do." OP is not saying anything remotely close to that; instead, they're saying that LLMs lead to pleasing and insightful conversations in their experience.
they're saying that LLMs lead to pleasing and insightful conversations in their experience.
Yeah, as would eliza (at a much lower cost).
It's what they're designed to do.
But the point is that calling them conversations is a long stretch.
You're just talking to yourself. You're enjoying the conversation because the LLM is simply saying what you want to hear.
There's no conversation whatsoever going on there.
Yeah, as would eliza (at a much lower cost).
Neither Eliza nor LLMs are "insightful", but that doesn't stop them from outputting utterances that a human being would subjectively interpret as such. And the latter is considerably better at that.
But the point is that calling them conversations is a long stretch. // You’re just talking to yourself. You’re enjoying the conversation because the LLM is simply saying what you want to hear. // There’s no conversation whatsoever going on there.
Then your point boils down to an "ackshyually", on the same level as "When you play chess against Stockfish you aren't actually «playing chess» as a 2P game, you're just playing against yourself."
This shite doesn't need to be smart to be interesting to use and fulfil some [not all] social needs. Especially in the case of autistic people (as OP mentioned they're likely on the spectrum); I'm not autistic myself, but I lived with autistic people long enough to know how the cookie crumbles for them: opening your mouth is like saying "please put words here, so you can screech at me afterwards".
I talk with ChatGPT too sometimes, and I get where you're coming from. However, it's not always right either. It says it was updated in September but still refuses to commit to memory that Trump was convicted on 34 counts earlier this year. Why is that?
It could respond in other ways if it was trained to do so. My first local model was interesting as I changed its profile to have a more dark and sarcastic tone, and it was funny to see it balance that instruction with the core mode to be friendly and helpful.
The point is, current levels of LLMs are just telling you what you want to hear. But maybe that's useful as a sounding board for your own thoughts. Just remember its limitations.
Regardless of how far AI tech goes, the human-AI relationship is something we need to pay attention to. People will find it a good tool like OP, but it can be easy to get sucked into thinking it's more than it is and becoming a problem.
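The "profile" change mentioned a couple of comments up is, in most local runners, just a system prompt prepended to the chat history before the user's turns. A minimal sketch of that idea (the persona text here is made up; the OpenAI-style message format is the one accepted by common local servers like llama.cpp and Ollama, though the exact endpoint varies):

```python
# Sketch: a model "profile" is typically just a system message that sits
# ahead of the conversation. The model then has to balance the persona
# instruction against its trained-in helpful behavior, as described above.

def build_chat(profile: str, history: list[tuple[str, str]]) -> list[dict]:
    """Prepend a persona/system prompt to the conversation turns."""
    messages = [{"role": "system", "content": profile}]
    for role, text in history:
        messages.append({"role": role, "content": text})
    return messages

# Hypothetical persona, echoing the "dark and sarcastic" example:
profile = ("You are dark and sarcastic in tone, but still ultimately "
           "helpful and never hostile.")
chat = build_chat(profile, [("user", "Cheer me up.")])
print(chat[0]["role"])  # the persona rides along as the first message
```

This list of messages is what you'd pass as the `messages` field of a chat-completion request; changing the tone of the whole conversation is just a matter of editing that first entry.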
Autism and social unawareness may be a factor. But points you made like the snide remarks one may also indicate that you're having these conversations with assholes.
Well, it's a self-selecting group of people. I can't comment on the ones who don't respond to me, only on the ones who do, and for some reason the number of assholes seems to be quite high in that group. I just don't feel like it's warranted. While I do have a tendency to make controversial comments, I still try to be civil about it, and I don't understand the need to be such a dick about it even if someone disagrees with me. I welcome disagreement and am more than willing to talk about it as long as it's done in good faith.
Sorry, just to clarify. Are you saying you're having these conversations with people in person or online?
Online for the most part. Face to face it's much easier to explain my views, as well as to jump in when the other person starts talking and I notice they misunderstood me.
Also, I just went into your comment history and took a quick peek. Your latest "unpopular" opinion seems to be because you disregarded the lives of civilians from the most recent attack by Israel to assassinate Nasrallah. You come across as quite callous trying to justify the murder of hundreds/thousands all to attack one individual. Stuff like that rubs people the wrong way since you seem to display a very morally and ethically wrong opinion when you can't even seem to acknowledge the horrendous loss of life.
Personally, I wouldn't consider online debates as debating a person. The reason is that you have no idea whether the person you're having the conversation with is a 12-year-old with too much time on their hands or a 30-year-old working at a troll farm. Even if they were a genuine person you're debating with, sites like Lemmy enable assholes to actually be assholes. They can say things here that would have them socially shunned or even assaulted in real life, with virtually no consequence. I've had debates with individuals on this site that I actually liked, but more often than not, I was just debating assholes. I guess what I'm trying to say is that if you're actually interested in discussing topics, try doing it with people in your life instead of online. It doesn't have to be a debate even. You can just ask how they feel about a certain topic and talk about it together. Discussing/debating online isn't a bad thing. Just be prepared for more assholes given the medium.
Finding people interested in talking about the topics I'm actually interested in is really, really hard in real life. Obviously I'd prefer it that way too, but that's easier said than done. I do have good conversations and debates with people online as well, but I just need to go through quite a few assholes before finding one who's actually doing it in good faith.
What subjects are you talking about that people assume views you don't have? Politics?
It's a mirror. I use it a lot for searching and summarizing. Most of its responses are heavily influenced by how you talk to it. You can even make it back up terrible assumptions with enough brute force.
Just be careful.
You're just training yourself to have ChatGPT's bias. We will soon live in a world where you won't have to be exposed to opinions you disagree with. Tom Scott has a YouTube video on why this is a bad idea.
My impressions are completely different from yours, but that's likely due
Even then, I know which sort of people you're talking about, and... yeah, I hate a lot of those things too. In fact, one of your bullet points ("it understands and responds...") is what prompted me to leave Twitter and then Reddit.