this post was submitted on 07 Mar 2025
274 points (97.6% liked)

Buy European

top 16 comments
[–] 30p87@feddit.org 93 points 2 days ago* (last edited 2 days ago)

> Boycott US Products
> Uses ChatGPT

[–] adam_y@lemmy.world 35 points 2 days ago (2 children)

> Prompt to hallucinating?

Do you mean "Prone"?

That is the sort of mistake an LLM would make.

[–] TheEntity@lemmy.world 46 points 2 days ago

This is precisely the sort of mistake an LLM wouldn't make.

[–] Blaze@lemmy.dbzer0.com 18 points 2 days ago* (last edited 2 days ago) (1 children)

Just got distracted (also English isn't my first language)

[–] musubibreakfast@lemm.ee 12 points 2 days ago (1 children)

Your native tongue is Python, you're an LLM. Sorry you had to find out this way.

[–] finitebanjo@lemmy.world 9 points 2 days ago (1 children)

I hope you're not installing these on your phone...?

[–] Blaze@lemmy.dbzer0.com 7 points 2 days ago

Definitely not

[–] deczzz@lemmy.dbzer0.com -2 points 2 days ago (1 children)

The devs are aware. This was a quick-and-dirty prototype, and they already knew the issues with using ChatGPT; they did it to get something working ASAP. In an interview (in Danish), the devs acknowledged this and said they are moving toward an LLM developed in France (I forget the name, but that's irrelevant; the point is that they will drop ChatGPT).

[–] MartianSands@sh.itjust.works 39 points 2 days ago (2 children)

If that's their solution, then they have absolutely no understanding of the systems they're using.

ChatGPT isn't prone to hallucination because it's ChatGPT; it's prone to it because it's an LLM. That's a fundamental problem common to all LLMs.

[–] spechter@lemmy.ml 24 points 2 days ago (1 children)

Plus, I don't want some random-ass server to crunch through a couple hundred watt-hours when scanning the barcode and checking it against a database would not just suffice but also be more accurate.
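For a sense of scale, that lookup is only a few lines. Here is a minimal sketch of the database approach against Open Food Facts (a real, free, community-run product database); the endpoint and field names are OFF's actual ones, but the helper itself is illustrative, not anything this app ships:

```python
# Sketch: resolve a scanned barcode against the Open Food Facts database.
# No LLM anywhere, so an unknown product is reported as unknown
# instead of being guessed at.
import requests

def product_origin(barcode: str) -> dict:
    url = f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    if data.get("status") != 1:
        return {"found": False}  # not in the database: say so honestly
    p = data["product"]
    return {
        "found": True,
        "name": p.get("product_name"),
        "brands": p.get("brands"),
        "countries": p.get("countries"),        # markets where it is sold
        "origins": p.get("origins"),            # declared ingredient origins
        "manufacturing": p.get("manufacturing_places"),
    }

print(product_origin("737628064502"))  # sample barcode from the OFF docs
```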

[–] jaybone@lemmy.world 14 points 2 days ago (1 children)

More accurate, efficient, environmentally friendly. Why are we trying to solve all of this with LLMs?

[–] EddoWagt@feddit.nl 3 points 2 days ago (1 children)

It's easier to program, I suppose: just set up a prompt and hand back whatever comes out, something like the sketch below.
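For comparison, a minimal sketch of what such a prompt wrapper looks like, using the official openai Python client; the model name, prompt wording, and the guess_origin helper are illustrative assumptions on my part, not the app's actual code:

```python
# Sketch: the "just set up a prompt" approach. Whatever the model says
# is passed straight through, with no step that checks it is true.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def guess_origin(product_name: str) -> str:  # hypothetical helper
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Reply with the product's country of origin only."},
            {"role": "user", "content": product_name},
        ],
    )
    # A fluent, confident string comes back whether or not it is correct.
    return response.choices[0].message.content

print(guess_origin("Nutella 400 g"))
```

That really is the whole program, which is presumably the appeal: no database to build or maintain. The cost is that accuracy is entirely outsourced to the model.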

[–] AceStructor@feddit.org 1 points 2 days ago

Exactly. Developers can't just come up with a complete database of every product in existence and where it comes from, whereas LLMs are already trained on basically all the data available on the Internet, with the additional capability to browse the web if necessary. It's a reasonable approach.

[–] DavidGarcia@feddit.nl 0 points 2 days ago (1 children)

Phi-4 is the only one I'm aware of that was deliberately trained to refuse instead of hallucinate. It's mind-blowing to me that that isn't standard; everyone is trying to maximize benchmarks at all costs.

I wonder if diffusion LLMs will hallucinate less, since they inherently have error correction built into their inference process.

[–] MartianSands@sh.itjust.works 4 points 2 days ago

Even that won't be truly effective. It's all marketing at this point.

The problem of hallucination really is fundamental to the technology. If there is a way to prevent it, it won't be as simple as training the model differently.