This post was submitted on 29 Jun 2023
9 points (100.0% liked)

Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.


Article Link from Nature

top 2 comments
CanadaPlus@lemmy.sdf.org 0 points 1 year ago (last edited 1 year ago)

Ah yes, the old AI alignment vs. AI ethics slapfight.

How about we agree that both are concerning?

lemmyng@beehaw.org 2 points 1 year ago

Both are concerning, but as a former academic, to me neither of them is as insidious as the harm that LLMs are already doing to training data. A lot of corpora depend on collecting public online data to construct data sets for research, and the assumption is that the data is largely human-generated. That balance is about to shift, and it's going to cause significant damage to future research. Even if everyone agreed to make a change right now, the well is already poisoned. For linguistics research, we're talking about the equivalent of the burning of the Library of Alexandria.
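
To make the "well is poisoned" point concrete: one mitigation corpus builders can try is restricting collection to material published before LLM output became widespread online. The sketch below is not anything from the article or the commenters, just a minimal illustration assuming a hypothetical scraper that attaches a timezone-aware `published` timestamp to each document; the cutoff date and field names are assumptions.

```python
from datetime import datetime, timezone

# Assumed cutoff: roughly when LLM-generated text started flooding the public web.
LLM_ERA_CUTOFF = datetime(2022, 11, 30, tzinfo=timezone.utc)

def filter_pre_llm(documents):
    """Keep only documents published before the cutoff.

    `documents` is assumed to be an iterable of dicts like
    {"text": "...", "published": datetime (tz-aware), "source": "..."}
    produced by whatever scraper builds the corpus.
    """
    kept, dropped = [], 0
    for doc in documents:
        if doc["published"] < LLM_ERA_CUTOFF:
            kept.append(doc)
        else:
            dropped += 1
    print(f"kept {len(kept)} documents, dropped {dropped} post-cutoff documents")
    return kept
```

The limitation is exactly the commenter's point: a date cutoff only preserves a fixed, shrinking slice of pre-LLM text, so corpora of contemporary language use can't be rebuilt this way once the contamination has happened.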