chobeat

joined 6 years ago

If roads operated like the Left, stepping out the door we would see mega-buses cutting slowly through the city without stopping, their passengers watching their destinations pass by. We would see small Mad Max-esque vehicles driven by people on amphetamines shouting “MAKE WAAAAAAAY I'M TURNIIIIIIIIIING” at every junction. Armored pink tanks would patrol the streets, firing their cannons at any vehicle around them for fear of being hit first. Some cars would have wheels on the roof. Half of the drivers would drive on the right side of the street, yelling at the other half, who believe they should be driving on the left. And none of the vehicles would run on fuel, because it ran out long ago: everybody would be using their feet, like in the Flintstones.

[–] chobeat@lemmy.ml 20 points 2 years ago* (last edited 2 years ago)

It's not from me but from AlgorithmWatch, one of the most famous and respected NGOs in the field of algorithmic accountability. They have published plenty of material on these topics and on the human-rights threats posed by these companies.

Also, this is an ecosystem analysis of political positioning. These companies and think tanks are going to newspapers, under their own names, to say we should panic about AI. It's not a secret: just open Google News and with a simple search you will find a landslide of stories on these topics sponsored by these companies.

[–] chobeat@lemmy.ml 1 points 2 years ago

It's answered in other comments.

[–] chobeat@lemmy.ml 2 points 2 years ago (1 children)
[–] chobeat@lemmy.ml 1 points 2 years ago

Automation never reduces jobs. It fragments them, reduces their quality, and increases deskilling and replaceability. We are not going to work less; we have never worked less thanks to automation. If we want to work less, we need unionization, not machines.

[–] chobeat@lemmy.ml 0 points 2 years ago

Microsoft bought OpenAI. The AI panic pushed by Sam Altman is sanctioned by Microsoft.

[–] chobeat@lemmy.ml 3 points 2 years ago* (last edited 2 years ago) (8 children)

They published a deliberately harmful tool against the advice of civil society, experts, and competitors. They are not only reckless; they have been tasked since their foundation with the mission of creating chaos. Don't forget that the original idea behind OpenAI was to erode the advantage Google and Facebook had in AI by releasing machine learning technology as open source. They definitely did that, and now they are expanding their goals. They are not in it for the money (ChatGPT will never be profitable); they are playing a bigger game.

Pushing the AI panic is not just a marketing strategy but a way to build power. The more dangerous they are considered, the more regulations will be passed that affect the whole sector. https://fortune.com/2023/05/30/sam-altman-ai-risk-of-extinction-pandemics-nuclear-warfare/

[–] chobeat@lemmy.ml 26 points 2 years ago (2 children)

In the picture you can see the organizations active in the public sphere around AI. On the left you have right-wing and libertarian think tanks, corporations, and frontline actors that fuel a sense of panic around AI, either to sabotage their business competitors or to leverage this panic to project the idea that they sell a very powerful tool while at the same time deflecting responsibility. If the AI is dangerous and sentient, you won't care much about the engineers behind it.

On the right you have several public organizations and NGOs operating in the fields of algorithmic accountability, digital rights, and so on. They push the opposite of the AI panic, pointing the finger at the corporations and powers that create and govern AI.

[–] chobeat@lemmy.ml 8 points 2 years ago (8 children)

You might have heard of the singularity, sentient AI, an AI uprising, or job losses due to automation. That's all propaganda that falls under the concept of AI panic.

[–] chobeat@lemmy.ml 1 points 2 years ago

Right now the whole model of generative AI, and of LLMs in general, is built on the assumption that training a machine learning model poses no problem for licenses, copyright, or anything else. Obviously this is leading to huge legal battles, and until their outcome is clear and a new legal practice or specific regulations are established in the EU and the USA, there's no point discussing licenses.

Also, licenses don't prevent anything; they are not magic. If small or big AI companies feel safe violating these laws, or simply profit enough to pay the fines, they will keep doing it. It's the same with FOSS licenses: most small companies violate licenses, and unless you have whistleblowers, you never find out. Even then, the legal path is very long. Only big corporations scared of humongous lawsuits really care about it. Small startups? Small consultancies? They don't care. A license is just a sign that says "STOP! Or go on, I'm a license, not a cop."
