this post was submitted on 29 Feb 2024 to Actually Useful AI
351 points (100.0% liked)

top 14 comments
[–] SuperSynthia@lemmy.world 66 points 8 months ago (1 children)

This is the best news. It's one thing for AI to assist, it's another to replace. Fuck 'em

[–] SubArcticTundra@lemmy.ml 13 points 8 months ago (1 children)

Agreed. There's a difference between using AI to assist workers and using it to replace them outright.

[–] DragonTypeWyvern@literature.cafe 2 points 8 months ago (1 children)

If it were actually AI (or good), it'd be different.

I'm ashamed of millennial gamers for not forcing the terminology to be VI. Did we learn nothing from Mass Effect?

[–] funnystuff97@lemmy.world 7 points 8 months ago* (last edited 8 months ago)

It's a marketing thing. Calling LLMs "AI" was a very intentional move to evoke that sense of hyperintelligence. Whether it's truly artificial intelligence is up for debate, but calling them AI absolutely helped them gain attention (good and bad).

Also, obligatory "shut up Avina".

[–] BatmanAoD@programming.dev 53 points 8 months ago (2 children)

It's actually quite amusing to me that Wikipedia is an authority on "reliability". It makes perfect sense, but can you imagine explaining that to a public school teacher twenty years ago?

[–] Rodeo@lemmy.ca 1 points 8 months ago

Try explaining anything to a public school teacher lol. They always think they know better.

[–] Whirlybird@aussie.zone -1 points 8 months ago

They're not an authority, though. They might want to be, but they're not.

[–] Kissaki@programming.dev 21 points 8 months ago

January 2023, Futurism brought widespread attention to the issue and discovered that the articles were full of plagiarism and mistakes. […] After the revelation, CNET management paused the experiment, but the reputational damage had already been done.

So the "AI experiment" is not active anymore. But the damage is already done.

It was also new to me that Wikipedia puts time-based reliability qualifiers on sources. It makes sense, of course. And this example shows how a source can be good and reliable in the past but not anymore, and differentiating that is important and necessary.

[–] autotldr@lemmings.world 15 points 8 months ago (3 children)

This is the best summary I could come up with:


Wikipedia has downgraded tech website CNET's reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site's trustworthiness, as noted in a detailed report from Futurism.

The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022.

Shortly after the CNET news broke in January 2023, Wikipedia editors began a discussion thread on the Reliable Sources project page about the publication.

"CNET, usually regarded as an ordinary tech RS [reliable source], has started experimentally running AI-generated articles, which are riddled with errors," wrote a Wikipedia editor named David Gerard.

Futurism reports that the issue with CNET's AI-generated content also sparked a broader debate within the Wikipedia community about the reliability of sources owned by Red Ventures, such as Bankrate and CreditCards.com.

In response to the downgrade and the controversies surrounding AI-generated content, CNET issued a statement claiming that the site maintains high editorial standards.


The original article contains 528 words, the summary contains 163 words. Saved 69%. I'm a bot and I'm open source!

[–] ArmoredThirteen@lemmy.ml 22 points 8 months ago

Read the room, bot

[–] Lemminary@lemmy.world 21 points 8 months ago

Yer unreliable! Ya hear that, m8?

[–] TrickDacy@lemmy.world 3 points 8 months ago

Honestly, they should just create a new category called "AI-generated". Reliability in journalism should only be for humans.

[–] Kissaki@programming.dev 5 points 8 months ago

CNET began publishing articles written by an AI model under the byline "CNET Money Staff".

(emphasis mine)

What a label. I assume that "byline" was their "article author"? "Money Staff". Baffling.