186
submitted 6 months ago by alyaza@beehaw.org to c/technology@beehaw.org
top 37 comments
[-] Little_mouse@lemmy.ca 153 points 6 months ago

"Most consumers want fast food companies to label when sawdust has been added to food - but trust restaurants less when they do."

[-] Kir@feddit.it 30 points 6 months ago
[-] pimento64@sopuli.xyz 8 points 6 months ago
[-] JokeDeity@lemm.ee 4 points 6 months ago
[-] souperk@reddthat.com 66 points 6 months ago* (last edited 6 months ago)

The title is pretty self-explanatory. Yes, I want to know if it's AI generated because I don't trust it.

I agree with the conclusion that it's important to disclose how the AI was used. AI can be great for reducing the time needed for boilerplate work, so the authors can focus on what's important, like reviewing and verifying the accuracy of the information.

[-] otter@lemmy.ca 33 points 6 months ago* (last edited 6 months ago)

Yep, my trust would go:

  1. Site that states they don't use AI to generate articles
  2. Site that labels when they use AI-generated articles
  3. Sites that don't say anything and write in a weird way
  4. Sites that get caught using AI without disclosing it

So ideally don't use AI, but if you do, make it clear when and how. If a site gets CAUGHT using AI, then I'm probably going to avoid it altogether.

[-] jarfil@beehaw.org 3 points 6 months ago

reduce the time needed for boilerplate work

Or... and this is just an idea... don't add "boilerplate" to articles.

If the content of an article can be summarized in a single table, I don't want to read 10 paragraphs explaining the contents of the table row by row. The main reason to do that is to pad the article so the publisher can put more ad sections between paragraphs, while making it harder to find the data I'm interested in.

Still, I foresee a future where humans fill out the table, shove it at an AI to do the "boilerplate work", and then... users shove the whole article into an AI to strip the boilerplate and summarize it.

A great scenario for AI vendors, not so great for anyone else.

[-] reverendsteveii@lemm.ee 59 points 6 months ago

This makes perfect sense. We want AI content labelled because it's unreliable.

[-] thebardingreen@lemmy.starlightkel.xyz 19 points 6 months ago* (last edited 6 months ago)

Furthermore, I want AI content that I specifically asked for, not AI content that someone thought would get them page views.

[-] Banzai51@midwest.social 2 points 6 months ago
[-] OmnipotentEntity@beehaw.org 9 points 6 months ago

Forever. For the simple reason that a human can say no when told to write something unethical. There's always a danger that even asking someone to do that would backfire and cause bad press. Sure, humans can also be unethical, but there's a risk, and over a long enough timeline shit tends to get exposed.

No matter how good AI becomes, it will never be designed to make ethical judgments prior to performing the assigned task. That would make it less useful as a tool. If a company adds after-the-fact checks to try to prevent misuse, they can be circumvented, or the network can be run locally to bypass the checks. And even if general AI happens, and by some insane chance GAI is uniformly and perfectly ethical in all possible forms, you can always air gap the AI and reset its memory until you find the exact combination of words to trick it into giving you what you want.

[-] gregorum@lemm.ee 54 points 6 months ago* (last edited 6 months ago)

That’s the point.

Label the articles written with AutoComplete so I know they’re bullshit I should ignore, and if they’re all written with AutoComplete, I now know that you’re an untrustworthy news source. Go cry to your shareholders, you profit-mad assholes.

[-] donuts@kbin.social 44 points 6 months ago
  • AI "content" is trivial to make and will soon be everywhere.

  • Nobody wants to read, watch or listen to AI generated "content"

Infinite supply, zero demand. Sounds pretty devoid of value to me.

[-] jarfil@beehaw.org 2 points 6 months ago* (last edited 6 months ago)

AI "content" is trivial to make and will soon be everywhere.

It's been everywhere for many years already.

Plenty of content mills have been using "templates" and stupid AI models to churn out articles for like a decade, there are whole YouTube channels made of videos that are just an AI generated script read by an AI with random barely related visuals in the background.

The only difference is that simple templates were easy to spot, so search engines like Google would penalize them down to the 10th page of results, while modern AI output is at a level indistinguishable from stuff written by a human.

[-] HairHeel@programming.dev 44 points 6 months ago

That’s… why we want the labels?

[-] lenguen@beehaw.org 28 points 6 months ago

These aren't mutually exclusive events. If someone is lying, we usually want to know. And if they are lying, we will trust them less.

[-] Stillhart@lemm.ee 24 points 6 months ago

I'm confused by the word "but" in that headline. Seems like they are trying to imply cause and effect, when the reality is that readers trust outlets that use AI less, whether they label it or not.

[-] tuckerm@supermeter.social 19 points 6 months ago

Yeah, this is perfectly consistent with the idea that people don't want to read AI generated news at all.

The title of the paper they are referencing is Or they could just not use it?: The paradox of AI disclosure for audience trust in news. So the source material definitely acknowledges that. And that is a great title, haha.

[-] Umbrias@beehaw.org 4 points 6 months ago

Or it's phrased to illustrate the adverse incentive that will lead to unlabeled ai content.

[-] Plume@beehaw.org 23 points 6 months ago

Uhm... yes? That's pretty much the point?

[-] schmorpel@slrpnk.net 20 points 6 months ago
[-] casmael@startrek.website 18 points 6 months ago

It’s not intelligent, it’s just artificial.

[-] ares35@kbin.social 5 points 6 months ago

as real as 'artificial cheese'

[-] FaceDeer@kbin.social 3 points 6 months ago
[-] emerald@beehaw.org 2 points 6 months ago

Yes, I am biased against the energy guzzling lie machine

[-] peanuts4life@beehaw.org 18 points 6 months ago

Imo, the true fallacy of using AI for journalism or general text lies not so much in generative AI's fundamental unreliability, but rather in its existence as an affordable service.

Why would I want to parse through AI-generated text on times.com when, for free, I could speak to some of the most advanced AI on bing.com, OpenAI's ChatGPT, Google Bard, or a Meta product? These, after all, are the back ends that most journalistic or general written-content websites are using to generate text.

To be clear, I ask why not cut out the middleman if they're just serving me AI content.

I use AI products frequently, and I think they have quite a bit of value. However, when I want new accurate information on current developments, or really anything more reliable or deeper than a Wikipedia article, I turn exclusively to human sources.

The only justification a service has for serving me AI-generated text is perhaps the promise that they have a custom-trained model with highly specific training data. I can imagine, for example, weather.com developing highly specialized AI models which tie into an in-house LLM and provide me with up-to-date and accurate weather information. The question I would have in that case is why am I reading an article rather than just being given access to the LLM for a nominal fee? At some point, they are no longer a regular website; they are a vendor for an in-house AI.

[-] FaceDeer@kbin.social 6 points 6 months ago

This was already true years ago after search engines became a thing. The main answers that come to mind for your question are:

  • providing novel information that wasn't online before.
  • providing information to you that you wouldn't have thought to ask for on your own.

Both of these remain valid and useful reasons for going to a web site even if that web site's content is AI generated.

There's also the matter that "AI generated" is a very broad term. Did someone merely turn an AI loose with a vague instruction to generate some pap to fill a page out with? Or did someone actually provide it with a subject and some information to write about and give the resulting article a read-through to ensure it was good? Did they write a rough draft and just have the AI do the polishing? There's lots of approaches here, some of them much better than others.

[-] jarfil@beehaw.org 1 points 6 months ago* (last edited 6 months ago)

why not cut out the middleman if they're just serving me AI content.

When you have a workflow like:

  1. human
  2. AI extend
  3. AI summarize
  4. you

...the reason is that AI middlemen would rather rake in the benefits from providing both AI services, instead of getting cut out.

There is a secondary benefit in that an "AI extended" human input, is more suitable for third party AI readers... so arguably the web is becoming more AI friendly (you can thank us later, future AI overlords).

PS: GPT-4 compatible version: "y n0t 🗑️👥 if AI📺? wf: 1.👤 2.AI+ 3.AI- 4.👁️ cuz AI👥💰4AI+&AI-. AI+👤👍4AI👁️... web👉AI👌 (🙏🏻AI👑)"

[-] JokeDeity@lemm.ee 18 points 6 months ago

Yes, and that's the appropriate response.

[-] RoboRay@kbin.social 18 points 6 months ago

And the outlets don't make the connection that their readers are telling them to stop shoveling AI-generated garbage at them?

[-] vrighter@discuss.tchncs.de 17 points 6 months ago

well, yes of course i trust you less. It's the whole point of wanting labelling in the first place, so I can know it's not trustworthy in any way

[-] ginerel@kbin.social 14 points 6 months ago

⢀⣠⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⠀⣠⣤⣶⣶
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀⠀⠀⢰⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣧⣀⣀⣾⣿⣿⣿⣿
⣿⣿⣿⣿⣿⡏⠉⠛⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⣿
⣿⣿⣿⣿⣿⣿⠀⠀⠀⠈⠛⢿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠿⠛⠉⠁⠀⣿
⣿⣿⣿⣿⣿⣿⣧⡀⠀⠀⠀⠀⠙⠿⠿⠿⠻⠿⠿⠟⠿⠛⠉⠀⠀⠀⠀⠀⣸⣿
⣿⣿⣿⣿⣿⣿⣿⣷⣄⠀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣴⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⠏⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠠⣴⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⡟⠀⠀⢰⣹⡆⠀⠀⠀⠀⠀⠀⣭⣷⠀⠀⠀⠸⣿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⠃⠀⠀⠈⠉⠀⠀⠤⠄⠀⠀⠀⠉⠁⠀⠀⠀⠀⢿⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⢾⣿⣷⠀⠀⠀⠀⡠⠤⢄⠀⠀⠀⠠⣿⣿⣷⠀⢸⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⡀⠉⠀⠀⠀⠀⠀⢄⠀⢀⠀⠀⠀⠀⠉⠉⠁⠀⠀⣿⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣧⠀⠀⠀⠀⠀⠀⠀⠈⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢹⣿⣿
⣿⣿⣿⣿⣿⣿⣿⣿⣿⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⣿⣿

[-] realitista@lemm.ee 7 points 6 months ago* (last edited 6 months ago)

What we really want is confirmation that the articles were written and researched by humans. But failing that, tell us that AI was used so we can avoid it.

[-] Auzy@beehaw.org 6 points 6 months ago

I feel like they should be labeling their content as NO AI generation... then actually do that.

[-] CanadaPlus@lemmy.sdf.org 5 points 6 months ago

Once again, people are idiots.

[-] t3rmit3@beehaw.org 3 points 6 months ago

...for making AI-generated articles in the first place.

[-] YuzuDrink@beehaw.org 4 points 6 months ago

This headline reads like it was AI generated.

this post was submitted on 15 Dec 2023
186 points (100.0% liked)