this post was submitted on 07 Mar 2024
354 points (93.8% liked)

Showerthoughts

33252 readers
549 users here now

A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.


Rules

  1. All posts must be showerthoughts
  2. The entire showerthought must be in the title
  3. No politics
    • If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
    • A good place for politics is c/politicaldiscussion
  4. Posts must be original/unique
  5. Adhere to Lemmy's Code of Conduct and the TOS

If you made it this far, showerthoughts is accepting new mods. This community is generally tame so it's not a lot of work, but having a few more mods would help reports get addressed a little sooner.

What's it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never worry about it.

founded 2 years ago

it will loose its ability to differentiate between there and their and its and it’s.

top 50 comments
[–] spittingimage@lemmy.world 158 points 1 year ago (2 children)
[–] public_image_ltd@lemmy.world 133 points 1 year ago (4 children)

must of made a mistake their

[–] person@lemm.ee 64 points 1 year ago (2 children)
[–] public_image_ltd@lemmy.world 55 points 1 year ago (1 children)
[–] Rentlar@lemmy.ca 12 points 1 year ago (1 children)

OP hasn't payed enough attention in English class.

[–] Ghostalmedia@lemmy.world 103 points 1 year ago (1 children)

Now when you submit text to ChatGPT, it responds with “this.”

[–] Steve@startrek.website 47 points 1 year ago (1 children)
[–] FartsWithAnAccent@lemmy.world 32 points 1 year ago (1 children)
[–] inlandempire@jlai.lu 35 points 1 year ago (1 children)

As a language model, I laughed at this way harder than I should have

[–] summerof69@lemm.ee 8 points 1 year ago

NTA, that was funny.

[–] BoxerDevil@lemmy.world 42 points 1 year ago (1 children)

And it will get LOSE and LOOSE mixed up like you did

[–] circuitfarmer@lemmy.world 31 points 1 year ago

I'm waiting for it to start using units of banana for all quantities of things

[–] raunz@mander.xyz 24 points 1 year ago (3 children)

ChatGPT trained on Reddit posts -> ChatGPT goes temporarily “insane”

Coincidence? I don't think so.

[–] public_image_ltd@lemmy.world 11 points 1 year ago (2 children)

This is exactly what I was thinking.

And maybe some more people did what I did: not deleting my accounts, but replacing all my posts with content created by a bullshit generator. It made the texts look normal, but everything was completely senseless.

Back in June-July, I used a screen-tapping tool + Boost to go through and change every comment I could edit to generic filler text, then waited something like two weeks in hopes that all of their servers would update to the new text, and then used the same app to delete each comment and post, and then the account itself. It's about all I could think to do.

[–] FiskFisk33@startrek.website 6 points 1 year ago (1 children)

They have always trained on Reddit data. GPT-2 was, at least; I'm unsure about GPT-1.

[–] Infynis@midwest.social 18 points 1 year ago (1 children)

ChatGPT also chooses that guy's dead wife

[–] Chainweasel@lemmy.world 7 points 1 year ago

The Narwhal Bacons at Midnight.

[–] londos@lemmy.world 18 points 1 year ago (3 children)

It also won't be able to differentiate between a jackdaw and a crow.

[–] starman2112@sh.itjust.works 15 points 1 year ago

On the contrary, it'll become excessively perfectionist about it. Can't even say "could have" without someone coming in and saying "THANK YOU FOR NOT SAYING OF"

[–] Daxtron2@startrek.website 14 points 1 year ago

It already was, the only difference is that now Reddit is getting paid for it.

[–] bitchkat@lemmy.world 13 points 1 year ago

It's going to be a poop-knife-wielding guy with 2 broken arms out to get those jackdaws.

[–] Norgur@fedia.io 13 points 1 year ago (3 children)

From now on, when you say something like "I think I can give my hoodie to my girlfriend", it will answer "and my axe".

[–] thantik@lemmy.world 13 points 1 year ago (1 children)

It was already trained on Reddit posts. It's just now they're paying for it.

[–] driving_crooner@lemmy.eco.br 12 points 1 year ago (3 children)

ChatGPT was already trained on Reddit data. Check this video to see how one Reddit username caused bugs in it: https://youtu.be/WO2X3oZEJOA?si=maWhUpJRf0ZSF_1T

[–] PurpleSheeple@lemmy.world 12 points 1 year ago (1 children)

And between were, we’re and where.

[–] db2@lemmy.world 8 points 1 year ago

Insure and ensure.

[–] YoorWeb@lemmy.world 10 points 1 year ago (2 children)

It will also reply "Yes." to questions like "Is it A or B?"

[–] Witchfire@lemmy.world 9 points 1 year ago (1 children)

Don't forget the bullshit that is "would of"

[–] JackLSauce@lemmy.world 8 points 1 year ago

"Can't even breath"

[–] kescusay@lemmy.world 8 points 1 year ago

Your right.

[–] shalafi@lemmy.world 8 points 1 year ago (2 children)

"What is a giraffe?"

ChatGPT: "geraffes are so dumb."

[–] SoyTDI@lemmy.world 7 points 1 year ago

And then and than.

[–] AnAustralianPhotographer@lemmy.world 6 points 1 year ago (1 children)

And when it learns something new, the response will be "Holy Hell".

[–] mannonym@lemmy.world 6 points 1 year ago (1 children)

Sure it might have some effect, but a big part of ChatGPT besides "raw" training data is RLHF, reinforcement learning from human feedback. Realistically, the bigger problem is training on AI-generated content that might have correct spelling, but hardly makes sense.

[–] public_image_ltd@lemmy.world 5 points 1 year ago

Then I did the right thing by replacing my texts with correctly spelled nonsense.

[–] Feathercrown@lemmy.world 6 points 1 year ago

Is it a showerthought if it's actually just incorrect

[–] wargreymon2023@sopuli.xyz 5 points 1 year ago

The same for Gemini; Google bought its API.
