this post was submitted on 19 Jan 2024
384 points (98.2% liked)

ChatGPT's new AI store is struggling to keep a lid on all the AI girlfriends::OpenAI: 'We also don’t allow GPTs dedicated to fostering romantic companionship'

[–] _number8_@lemmy.world 33 points 10 months ago (9 children)

why? why not let people just retreat into fantasy? it's probably healthier than many common coping mechanisms. i mean, it's a chatbot, how much can you do with it?

let people have their temporary salve to get them thru whatever they were going thru such that they were resorting to this. and if it's not temporary, ok, fine? better to have some outlet than be even more mentally isolated. maybe in 50 years this will be common, who knows.

[–] cyd@lemmy.world 55 points 10 months ago (2 children)

Liability. Imagine an AI girlfriend who slowly earns your affection, then at some point manipulates you into sending bitcoins to a prespecified wallet set up by the model maker. Because models are black boxes, there is no way to verify by direct inspection that an AI hasn't been trained with an ulterior agenda (the "execute order 66" problem).

[–] Kittenstix@lemmy.world 5 points 10 months ago (1 children)

Yep, I was having a conversation with a guy who advises policymakers on AI; he'd given a whole presentation at a school board meeting I went to a few nights ago.

He said that's his highest recommendation when it comes to what should be done on the lawmaker side: pass bills that push for opening up those black boxes so we can ensure transparency.

[–] cyd@lemmy.world 9 points 10 months ago (1 children)

Problem is, there isn't a way to open up the black boxes. It's the AI explainability problem. Even if you have the model weights, you can't predict what they will do without running the model, and you can't definitively verify that the model was trained as the model maker claimed.

[–] Kittenstix@lemmy.world 1 points 10 months ago

I see, my knowledge is surface deep so I admit this is new information to me.

Is there no way to ensure LLMs are safe for, like, kids to use as a tool for education? Or is it just inherently going to come with some risk of exploitation, and we just have to do our best to educate students about that danger?

[–] dirthawker0@lemmy.world 4 points 10 months ago

Some guy in the UK was allegedly convinced by his chatbot girlfriend to assassinate Queen Elizabeth. He just got sentenced a few months ago. Of course he's been determined to be psychotic, but I could imagine people who would qualify as sane getting too deep and reading too much into what an AI is saying.

[–] devfuuu@lemmy.world 18 points 10 months ago (2 children)

These kinds of things are not temporary. We know that humans can't control themselves and aren't rational enough to "just use it a bit". It's highly addictive and leads people to remove themselves from reality.

[–] UrPartnerInCrime@sh.itjust.works 7 points 10 months ago (2 children)

Why can't we let people do what they want?

[–] endhits@lemmy.world 2 points 10 months ago (1 children)

Because social ills affect everyone. People are not islands.

[–] UrPartnerInCrime@sh.itjust.works -3 points 10 months ago* (last edited 10 months ago)

You're telling me that every one of the nearly 8 billion people on this planet is crucial to society? Forget that we as a society sometimes condemn people to solitary confinement or prison for life; is every single person mandatory for society to survive? Without 100% cooperation, is everyone doomed to fail?

[–] Siegfried@lemmy.world 1 points 10 months ago (1 children)

What if the AI starts suggesting illegal things and they become someone's partner in crime?

[–] UrPartnerInCrime@sh.itjust.works 4 points 10 months ago

Good thing people don't suggest illegal activities and cause major problems for people. It would be really bad if people were criminals. Glad it's only robots that suggest people become bad.

[–] Mojojojo1993@lemmy.world 12 points 10 months ago (1 children)

I believe Futurama has a lesson on this

[–] TexasDrunk@lemmy.world 12 points 10 months ago

I knew I should've shown him Electro-Gonorrhea: The Noisy Killer

[–] r3df0x@7.62x54r.ru 10 points 10 months ago (2 children)

Because it drives people even deeper into self destructive incel behaviors.

[–] Unforeseen@sh.itjust.works -4 points 10 months ago* (last edited 10 months ago)
[–] webghost0101@sopuli.xyz 10 points 10 months ago (1 children)

I am pretty sure it's just to avoid controversy; look up the recent news about LAION for an example. GPT-4 isn't just text anymore, it can generate images too.
Altman has talked about how we may someday all have our own personal AIs tailored to our own needs and sensitivities. But almost everyone has a different idea of if and where there should be a line.

[–] douglasg14b@lemmy.world 15 points 10 months ago (1 children)

If I have an AI tailored to me and my sensitivities, then it should have no filter; whatever filter it has should be defined and trained by me.

Someone else artificially trying to adjust my personality through AI to fit whatever arbitrary norms they believe it should have is cancer.

[–] webghost0101@sopuli.xyz 3 points 10 months ago* (last edited 10 months ago)

I am inclined to agree. I believe that once society is able to fill everyone's needs, and everyone can summon any AI VR experience they want, crime will cease to exist; there would be nothing to gain from committing harm. But I fear that simulated role-play in the context of psychological torture or CSAM could make dangerous people more confident before we reach that post-scarcity point. Maybe you'd say ChatGPT isn't realistic enough for that now, but it will be soon.

Training an LLM entirely by yourself with self-curated text is beyond what is feasible; most AI researchers today don't even know what's in all of the data they use. It's more than you could look at even with an extended lifetime, and at best you can fine-tune a standard base model.

[–] RainfallSonata@lemmy.world 7 points 10 months ago* (last edited 10 months ago)

let people have their

I'd be very interested to see the gender breakdown, here.

[–] leftzero@lemmynsfw.com 4 points 10 months ago* (last edited 10 months ago) (1 children)

Main problem I see with this is that when the AI girlfriend company inevitably folds, or dumbs down the product, or makes it start pushing ads instead of loving words, or succumbs to enshittification in any other way (which has already happened with at least a couple of models people were using as AI girlfriends), the users have to deal not only with going back to loneliness, but with the equivalent of the death of a loved one to boot. It's not unlikely that some will end up hurting themselves or others as a consequence.

I mean, this is Lemmy, for fuck's sake. I think we can all here agree that the whole concept is abhorrent, exploitative, and doomed from the start. What we evidently need are self hosted, open source AI companions, backed by a healthy community developing forks and extensions to cater to any and all imaginable (or unimaginable) kinks and / or fetishes, not this cloud based corporate-driven dystopian AI nightmare we seem to be heading to.

[–] _number8_@lemmy.world 3 points 10 months ago

Now this is a great answer; well thought out. Very prescient as well I'm sure.

[–] ExLisper@linux.community 4 points 10 months ago

I guess they don't want to create a separate NSFW category that has to be treated in a different way. They probably think it's just too risky to get involved in that type of business.

[–] CameronDev@programming.dev 1 points 10 months ago (1 children)
[–] PipedLinkBot@feddit.rocks 2 points 10 months ago

Here is an alternative Piped link(s):

https://m.piped.video/watch?v=3WSKKolgL2U

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.