this post was submitted on 25 Nov 2024
94 points (99.0% liked)

technology


Latest move to tighten regulation comes amid soaring use of algorithms for content recommendation, e-commerce and gig work distribution

Tech operators in China have been given a deadline to rectify issues with recommendation algorithms, as authorities move to revise cybersecurity regulations in place since 2021.

A three-month campaign to address “typical issues with algorithms” on online platforms was launched on Sunday, according to a notice from the Communist Party’s commission for cyberspace affairs, the Ministry of Industry and Information Technology, and other relevant departments. The campaign, which will last until February 14, marks the latest effort to curb the influence of Big Tech companies in shaping online views and opinions through algorithms – the technology behind the recommendation functions of most apps and websites.

System providers should avoid recommendation algorithms that create “echo chambers” and induce addiction, allow manipulation of trending items, or exploit gig workers’ rights, the notice said.

They should also crack down on unfair pricing and discounts targeting different demographics, ensure "healthy content" for the elderly and children, and impose a robust "algorithm review mechanism and data security management system".

Tech companies have been told to “conduct in-depth self-examination and rectification to further improve the security capabilities of algorithms” by the end of the year.


Hmm, interesting that they use the term echo chambers here. Everything here sounds good, but they continue the demonization of "echo chambers", which I personally find unwarranted. The real issue is the spread of hateful content and misinformation, which they do appear to be addressing. For that purpose, I'd personally like to get rid of algorithmic feeds altogether. But filter bubbles / echo chambers are not inextricably linked to spreading hateful content and misinformation. A marginalized individual who doesn't want to interact with those who actively wish them harm shouldn't be forced to. And hell, I might even be okay with focusing just on misinformation. An explicit threat towards -phobes is always justified imo.