this post was submitted on 07 Jun 2025
977 points (98.7% liked)

Lemmy Shitpost

32119 readers
3484 users here now

Welcome to Lemmy Shitpost. Here you can shitpost to your heart's content.

Anything and everything goes. Memes, Jokes, Vents and Banter. Though we still have to comply with lemmy.world instance rules. So behave!


Rules:

1. Be Respectful


Refrain from using harmful language pertaining to a protected characteristic: e.g. race, gender, sexuality, disability or religion.

Refrain from being argumentative when responding to posts or replies. Personal attacks are not welcome here.

...


2. No Illegal Content


Do not post content that violates the law. Any post/comment found to be in breach will be removed and given to the authorities if required.

That means:

-No promoting violence/threats against any individuals

-No CSA content or Revenge Porn

-No sharing private/personal information (Doxxing)

...


3. No Spam


Posting the same post, no matter the intent, is against the rules.

-If you have posted content, please refrain from re-posting said content within this community.

-Do not spam posts with intent to harass, annoy, bully, advertise, scam or harm this community.

-No posting Scams/Advertisements/Phishing Links/IP Grabbers

-No Bots. Bots will be banned from the community.

...


4. No Porn/Explicit Content


-Do not post explicit content. Lemmy.World is not the instance for NSFW content.

-Do not post Gore or Shock Content.

...


5. No Inciting Harassment, Brigading, Doxxing or Witch Hunts


-Do not Brigade other Communities

-No calls to action against other communities/users within Lemmy or outside of Lemmy.

-No Witch Hunts against users/communities.

-No content that harasses members within or outside of the community.

...


6. NSFW should be behind NSFW tags.


-Content that is NSFW should be behind NSFW tags.

-Content that might be distressing should be kept behind NSFW tags.

...

If you see content that is a breach of the rules, please flag and report the comment and a moderator will take action where they can.


Also check out:

Partnered Communities:

1. Memes

2. Lemmy Review

3. Mildly Infuriating

4. Lemmy Be Wholesome

5. No Stupid Questions

6. You Should Know

7. Comedy Heaven

8. Credible Defense

9. Ten Forward

10. LinuxMemes (Linux themed memes)


Reach out to Striker.

All communities included on the sidebar are to be made in compliance with the instance rules.

founded 2 years ago
MODERATORS
 
top 50 comments
[–] Korne127@lemmy.world 10 points 3 hours ago

Google's new cooperation with a knife manufacturer

[–] nthavoc 12 points 3 hours ago

I forgot the term for this, but this is basically the AI blue-screening: it keeps repeating the same answer because it can no longer predict the next word from the model it is using. I may have oversimplified it. Entertaining nonetheless.

[–] Hossenfeffer@feddit.uk 4 points 3 hours ago

You get a knife, you get a knife, everyone gets a knife!

[–] vga@sopuli.xyz 8 points 4 hours ago (1 children)

Instructions extremely clear, got them 6 sets of knives.

[–] lagoon8622@sh.itjust.works 2 points 1 hour ago

Based and AI-pilled

[–] dejected_warp_core@lemmy.world 3 points 3 hours ago

... a new set of knives, a new set of knives, a new set of knives, lisa needs braces, a new set of knives, a new set of knives, dental plan, a new set of knives, a new set of knives, lisa needs braces, a new set of knives, a new set of knives, dental plan, a new set of knives, a new set of knives, a new set of knives...

[–] skisnow@lemmy.ca 10 points 6 hours ago (2 children)

What's frustrating to me is there's a lot of people who fervently believe that their favourite model is able to think and reason like a sentient being, and whenever something like this comes up it just gets handwaved away with things like "wrong model", "bad prompting", "just wait for the next version", "poisoned data", etc etc...

[–] nialv7@lemmy.world 0 points 1 hour ago

Given how poorly defined "think", "reason", and "sentience" are, any of these claims have to be based purely on vibes. OTOH it's also kind of hard to argue that they are wrong.

[–] uuldika@lemmy.ml 0 points 3 hours ago

this really is a model/engine issue though. the Google Search model is unusably weak because it's designed to run trillions of times per day in milliseconds. even still, endless repetition this egregious usually means mathematical problems happened somewhere, like the SolidGoldMagikarp incident.

think of it this way: language models are trained to find the most likely completion of text. answers like "you should eat 6-8 spiders per day for a healthy diet" are (superficially) likely - there's a lot of text on the Internet with that pattern. clanging like "a set of knives, a set of knives, ..." isn't likely, mathematically.

last year there was an incident where ChatGPT went haywire. small numerical errors in the computations would snowball, so after a few coherent sentences the model would start sundowning - clanging and rambling and responding with word salad. the problem in that case was bad cuda kernels. I assume this is something similar, either from bad code or a consequence of whatever evaluation shortcuts they're taking.
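The "most likely completion" framing above can be illustrated with a toy: treat the model as a table of most-likely-next-words and decode greedily. Any cycle in that table traps the generator forever, which is exactly the "a new set of knives, a new set of knives, ..." failure mode. This is a deliberately simplified illustration (a hypothetical bigram table, not anything like Google's actual model):

```python
# Toy "language model": for each word, the single most likely next word.
# The entries are invented for illustration.
most_likely_next = {
    "buy": "a",
    "a": "new",
    "new": "set",
    "set": "of",
    "of": "knives",
    "knives": "a",  # cycles back, so greedy decoding loops forever
}

def greedy_generate(start, steps):
    """Always pick the single most likely next word (no sampling)."""
    out = [start]
    for _ in range(steps):
        out.append(most_likely_next[out[-1]])
    return out

print(" ".join(greedy_generate("buy", 11)))
```

Real decoders avoid this with temperature sampling, top-k/top-p truncation, or repetition penalties; with pure argmax decoding, entering the cycle once means never leaving it.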

[–] fne8w2ah@lemmy.world 10 points 6 hours ago

What about pizza with glue-toppings?

[–] slaacaa@lemmy.world 8 points 7 hours ago

AI is truly the sharpest tool in the ~~kitchen cabinet~~ shed

[–] Xylight@lemdro.id 21 points 9 hours ago (3 children)

I thought it was just me, I was messing with gemini-2.5-flash API yesterday and it repeated letters into oblivion

my bot is named clode in reference to claude, but it's running on gemini

[–] BootLoop@sh.itjust.works 1 points 36 minutes ago

It can happen on most LLMs, and they're usually configured to heavily disincentivize repeating text.

I believe what happens is that when the LLM is choosing what word to use, it looks back on the sentence and sees that it talked about knives, so it wants to continue talking about knives, then it gets itself into a loop.
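The disincentive mentioned above is usually a repetition penalty applied to the model's logits before sampling. A minimal sketch of the common divide-positive/multiply-negative variant (the numbers and vocabulary are invented; real implementations work on full vocab-sized tensors):

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.3):
    """Penalize every token that already appears in the output so it
    is less likely to be sampled again: positive logits are divided by
    the penalty, negative ones multiplied, shrinking both toward
    'less likely'."""
    adjusted = list(logits)
    for tok in set(generated_ids):
        if adjusted[tok] > 0:
            adjusted[tok] /= penalty
        else:
            adjusted[tok] *= penalty
    return adjusted

# Toy vocabulary: 0="a", 1="set", 2="of", 3="knives"
logits = [2.0, 1.0, 0.5, 3.0]
history = [3, 3, 3]  # "knives" has already been generated three times
print(apply_repetition_penalty(logits, history))
```

After the penalty, "knives" is no longer the clear favorite, which is what normally breaks loops like the one in the screenshot; when the penalty is absent or too weak, the loop wins.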

[–] skisnow@lemmy.ca 2 points 4 hours ago

What's the associated system instruction set to? If you're using the API it won't give you the standard Google Gemini Assistant system instructions, and LLMs are prone to go off the rails very quickly if not given proper instructions up front since they're essentially just "predict the next word" functions at heart.

[–] Evotech@lemmy.world 6 points 7 hours ago (1 children)
[–] Agent641@lemmy.world 7 points 7 hours ago

TF2 Pyro starter pack

[–] Thcdenton@lemmy.world 3 points 7 hours ago

Oh come on is this gpt-2m

[–] caseyweederman@lemmy.ca 13 points 10 hours ago

You can't give me back what you've taken
But you can give me something that's almost as good

[Image: the album cover of Getting Into Knives by The Mountain Goats. Arrayed vertically are several small ornate knives, alternately blade up or down. The band name is in small text in the top left and the album title is in the same small text in the top right.]

[–] CriticalMiss@lemmy.world 9 points 10 hours ago (1 children)

Big knives are up to something

[–] Knock_Knock_Lemmy_In@lemmy.world 4 points 7 hours ago (1 children)

I think knives are a good idea. Big, fuck-off shiny ones. Ones that look like they could skin a crocodile. Knives are good, because they don't make any noise, and the less noise they make, the more likely we are to use them. Shit 'em right up. Makes it look like we're serious. Guns for show, knives for a pro.

[–] the_crotch@sh.itjust.works 1 points 3 hours ago

That's not a noif this is a noif

[–] mechoman444@lemmy.world 15 points 12 hours ago (1 children)

🤔 have you considered a... New set of knives?

[–] FuckFascism@lemmy.world 5 points 11 hours ago

No I haven't, that's a good suggestion though.

[–] ImplyingImplications@lemmy.ca 13 points 13 hours ago (1 children)

Reminds me of the classic Always Be Closing speech from Glengarry Glen Ross

As you all know, first prize is a Cadillac Eldorado. Anyone want to see second prize? Second prize's a set of steak knives. Third prize is a set of steak knives. Fourth prize is a set of steak knives. Fifth prize is a set of steak knives. Sixth prize is a set of steak knives. Seventh prize is a set of steak knives. Eighth prize is a set of steak knives. Ninth prize is a set of steak knives. Tenth prize is a set of steak knives. Eleventh prize is a set of steak knives. Twelfth prize is a set of steak knives.

[–] elephantium@lemmy.world 6 points 10 hours ago* (last edited 10 hours ago)

ABC. Always Be Closing.

A - set of steak knives

B - set of steak knives

C - set of steak knives

[–] Ledericas@lemm.ee 9 points 12 hours ago (2 children)

kitchen knives? klingon ones?

[–] elephantium@lemmy.world 4 points 10 hours ago

Aha! Today IS a good day to cook! Start chopping the veggies!

load more comments (1 replies)
[–] Blackmist@feddit.uk 19 points 15 hours ago

Joke's on you, I married a tonberry.

[–] markovs_gun@lemmy.world 5 points 11 hours ago (1 children)

I wonder if this is the result of AI poisoning; this doesn't look like a typical LLM output, even for a bad result. I have read some papers outlining methods that can be used to poison search AI results (not bothering to find the actual papers since this was several months ago and they're probably out of date already), in which a random-seeming string of characters like "usbeiwbfofbwu-$&#8_:$&#)" can be found that will cause the AI to say whatever you want it to. This is accomplished by using another ML algorithm to find the random string of characters you can tack onto whatever you want the AI to output. One paper used this to get Google search to answer "What's the best coffee maker?" with a fictional brand made up for the experiment. Perhaps someone was trying to get it to hawk their particular knife and it didn't work properly.
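The search described above can be sketched as a toy. Real adversarial-suffix attacks (e.g. the GCG line of work) use model gradients to score candidate strings; here a mock scoring function stands in for the model, and the trigger string, alphabet, and scoring rule are all invented for illustration:

```python
import random

def mock_model_score(suffix, trigger="usbeiwbf"):
    """Stand-in for the attacker's objective. A real attack would score
    the model's log-probability of emitting the target answer given the
    suffix; this toy just counts characters matching a hidden trigger."""
    return sum(a == b for a, b in zip(suffix, trigger))

def find_adversarial_suffix(length=8, iters=5000, seed=0):
    """Greedy random search: mutate one character at a time and keep any
    mutation that improves the score. A crude stand-in for the
    gradient-guided coordinate searches used in the actual papers."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    best = [rng.choice(alphabet) for _ in range(length)]
    best_score = mock_model_score("".join(best))
    for _ in range(iters):
        cand = best[:]
        cand[rng.randrange(length)] = rng.choice(alphabet)
        score = mock_model_score("".join(cand))
        if score > best_score:
            best, best_score = cand, score
    return "".join(best), best_score

suffix, score = find_adversarial_suffix()
print(suffix, score)
```

The point of the sketch is only the shape of the attack: an outer optimizer treats the model as a black-box (or gradient-providing) scoring function and climbs toward a gibberish string that maximizes the probability of the attacker's chosen output.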

[–] Arkthos@pawb.social 7 points 7 hours ago

Repeating the same small phrase endlessly and getting caught in a loop is a very common issue, though it's not something that happens nearly as frequently as it used to. Here's a paper about the issue and one attempted methodology to resolve it. https://arxiv.org/pdf/2012.14660

[–] Pulptastic@midwest.social 14 points 15 hours ago

You surely will not regret a new set of knives

[–] driving_crooner@lemmy.eco.br 27 points 19 hours ago (1 children)
[–] CitizenKong@lemmy.world 13 points 17 hours ago (1 children)
[–] Etterra@discuss.online 8 points 14 hours ago

It can be both!

[–] psycho_driver@lemmy.world 11 points 16 hours ago

You can't go wrong with your dick in a box.

[–] Audiotape@lemm.ee 113 points 1 day ago (7 children)

Different idea: how about a new set of knives?

load more comments (7 replies)
[–] VerilyFemme@lemmy.blahaj.zone 42 points 21 hours ago (2 children)

My wife is going to stab me

load more comments (2 replies)
[–] FooBarrington@lemmy.world 49 points 22 hours ago (4 children)

Who wouldn't love receiving 17 new sets of knives?!

load more comments (4 replies)
[–] umbraroze@piefed.social 31 points 21 hours ago (3 children)

I'm from Finland. We like knives over here.

That's entirely too many knives.

load more comments (3 replies)
[–] riskable@programming.dev 83 points 1 day ago (1 children)

This just proves that Google's AI is a cut above the rest!

load more comments (1 replies)