Capitalism = runaway immoral human greed
Unpopular Opinion
This is sadly an actually unpopular opinion, on Lemmy as well as in the outside world. A rare case of a post in this community where I can upvote both because it's unpopular and because I agree with it.
Depends on where you live I suppose. Irrational AI hate is something I only really encounter online. Then again my country has pretty good worker protections, so there's less reason to be afraid of AI.
I don’t know, there’s plenty of anti-billionaire sentiment, fuck_cars is basically anti-capitalist, and most of the environmentalists get to the same conclusion pretty quickly too.
The realists (and cynics in some cases) just know that it’s going to take a huge process to shift us away. I’m a realist and am opting for a progressive takeover that leads to taxing billionaires, carbon/pollution, and dangerous vehicles (among other clear hazards) out of existence.
But when I’m feeling cynical, I get worried that it’s going to take a war to happen, and I hope for my son’s sake that doesn’t happen.
Why not both?
If used correctly, newer-generation AIs could be an absolute game changer in fields like medicine, finance, computing, public transport, customer service, etc.
But what they'll actually be used for is to make more money.
If you take capitalism out of the equation they could be an amazing force for good. But under our current system they won't.
They're also used to make less money -- by the people whose jobs will be replaced so that those up top can make more money.
AI at this point exacerbates the wealth gap.
But they're also used to make open-source software that gives people the tools they need to work together on community design projects, which can free everyone from capitalism.
I've been using them to help with coding and they're really useful even at this early stage. All the people I follow who make AI tools and the like are using AI too, which is one of the reasons we're getting so many great free-to-use tools. AI design tools will be used to make open-source hardware devices that can be fabricated locally using AI-assisted tooling. It will totally change the entire structure of society and improve things significantly.
Because AI is just a tool.
It could be used for good.
Because when greed and money aren't in the equation, AI is pretty useful and for most people it isn't costing them anything.
It’s pretty evident that AI is incompatible with capitalism, but most people direct their anger at AI. Late-stage capitalism is the problem, not automation. I upvoted because I think this is actually an unpopular opinion factoring in the world population rather than just Lemmy.
Why not both?
This post is written by an AI. Lmao
"Are you scared of an AI world? You're already in it."
AI outside of capitalism is still incredibly dangerous. It's all the biases that created the world we have today, but on steroids. Take all the injustices against minority peoples today and scale them up to however much compute you have.
It's completely naive to think that AI will solve the world's problems if that pesky capitalism would get out of the way. But this website is full of tech bros, so it's impossible to get past that.
Also, being angry at capitalism doesn't pay the rent. I can't boycott capitalism. I can use my small power under capitalism to boycott your shitty AI.
Mhmm. Here's the uncensored, anti-woke AI Elon tried to create answering Twitter Blue subscribers' questions:
Yeah, so horribly biased and terrible...
As much as I hate AI run by megacorporations, I don’t think AI run by a communist government would be any better.
Maybe work on proving "AI" is actually a technological advancement instead of an overhyped plagiarism machine first.
LLMs' real power isn't generating fresh content; it's their ability to understand language.
Using one to summarise articles gives incredibly good results.
I use Bing Enterprise every day at work as a programmer. It makes information gathering and learning so much easier.
It's decent at writing code but that's not the main selling point in my opinion.
Plus they are general models to show the capabilities. Once the tech is more advanced you can train models for specific purposes.
It seems obvious that a single AI asked to do both creative writing and coding wouldn't be as good at either as a specialized model would be.
These are generation 0. There'll be a lot of advances coming.
Also LLMs are a very specific type of machine learning and any advances will help the rest of the field. AI is already widely used in many fields.
LLMs don't "understand" anything. They're just very good at making it look like they sort of do
They also tend to have difficulty giving the answer "I don't know" and will confidently assert something completely incorrect
And this is not generation 0. The field of AI has been around for a long time; it's just now becoming widespread and used where the average person can see it.
LLMs don't "understand" anything. They're just very good at making it look like they sort of do
If they're very good at it, then is there functionally any difference? I think the definition of "understand" people use when railing against AI must include some special pleading that gates off anything that isn't actually intelligent. When it comes to artificial intelligence, all I care about is if it can accurately fulfill a prompt or answer a question, and in the cases where it does do that accurately I don't understand why I shouldn't say that it seems to have "understood the question/prompt."
They also tend to have difficulty giving the answer "I don't know" and will confidently assert something completely incorrect
I agree that they should be more capable of saying I don't know, but if you understand the limits of LLMs then they're still really useful. I can ask it to explain math concepts in simple terms and it makes it a lot easier and faster to learn whatever I want. I can easily verify what it said either with a calculator or with other sources, and it's never failed me on that front. Or if I'm curious about a religion or what any particular holy text says or doesn't say, it does a remarkable job giving me relevant results and details that are easily verifiable.
But I'm not going to ask GPT-3.5 to play chess with me, because I know it's going to give me blatantly incoherent and illegal moves. While it does understand chess notation, it doesn't keep track of the pieces the way GPT-4 does.
If you can easily validate the answers (and you have to, to know whether they're actually correct), wouldn't it make more sense to skip the prompt and just do whatever you would do to validate?
I think LLMs have a place. But I don't think it's as broad as people seem to think. It makes a lot of sense for boilerplate for example, as it just saves mindless typing. But you still need to have enough knowledge to validate it
Can you prove that human intelligence isn't an overhyped plagiarism machine?
Furthermore, simple probability calculations indicate that GPT-4's reasonable performance on k=5 is suggestive of going beyond "stochastic parrot" behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training.
Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state.
- https://arxiv.org/abs/2210.13382 (replicated here and here (with chess))
So there is already research showing that GPT LLMs are capable of modeling aspects of their training data at much deeper levels of abstraction than surface statistics of words, and research showing that the most advanced models are generating novel outputs distinct from anything in the training data, by virtue of the number of different abstract concepts they combine from what was learned in training.
Like - have you actually read any of the ongoing research in the field at all? Or just articles written by embittered people who generally misunderstand the technology? (For example, if you ever see someone refer to them as Markov chains, that person has no idea what they are talking about, given that the key feature of the transformer model is the self-attention mechanism, which violates the Markov property characterizing Markov chains in the first place.)
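The Markov-chain point can be made concrete. In a (first-order) Markov chain, the next state depends only on the current state; under causal self-attention, every position reads from every earlier position. Here's a minimal toy sketch in NumPy (illustrative only, not any real model's implementation; real transformers add learned projections, multiple heads, residuals, etc.):

```python
import numpy as np

def causal_self_attention(x):
    """x: (seq_len, d) token embeddings. Each position attends to
    itself and ALL earlier positions, not just the previous one."""
    seq_len, d = x.shape
    scores = x @ x.T / np.sqrt(d)                # pairwise similarities
    mask = np.tril(np.ones((seq_len, seq_len)))  # causal (lower-triangular) mask
    scores = np.where(mask == 1, scores, -np.inf)
    # row-wise softmax (subtracting the max for numerical stability)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x                           # attention-weighted values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))
out = causal_self_attention(x)

# Perturb only the FIRST token: the output at the LAST position changes,
# which a first-order Markov chain's next-step distribution cannot exhibit.
x2 = x.copy()
x2[0] += 1.0
out2 = causal_self_attention(x2)
print(np.allclose(out[-1], out2[-1]))  # False: position 4 "sees" position 0
```

The last check is the whole argument in miniature: the prediction at the end of the sequence is a function of the entire prefix, not just the most recent token.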
instead of an overhyped plagiarism machine first.
If I paint an Eiffel Tower from memory, am I plagiarizing?
If it's not plagiarism when humans do it, it's not plagiarism when a machine does it.
Why not instead, the people responsible?
Please explain how in a non-capitalist world, AI would never be used for the sorts of things you dislike AI being used for such as job elimination. You think nobody will realize that it can be used to produce lots of art, for example?
In this non-capitalist world you're thinking of, would we have any automation? Like do we have harvester combines, or is it still 35 people breaking their backs to cut and thresh an acre of wheat?
If the means of production are collectively owned, and thus directed towards the good of society, job elimination isn't as much of a problem.
Socialists are huge proponents of automation, because instead of being used to cut jobs for profit, dirty and hard jobs can be eliminated.
Then why are we angry at AI in this discussion?
Job elimination is a problem in capitalism because workers need jobs to survive. In a socialist society, job elimination can be a good thing, as it allows us to either increase access to resources or reduce how much time people need to work without dispossessing the people whose jobs were eliminated.
The difference is that, in capitalism, workers only survive by proving their usefulness to capitalists' moneymaking, so automation is a threat to worker bargaining power. If the means of production were socially owned (through, for example, government-run utilities or worker co-ops), bargaining power would instead come through a vote or through ownership. It then becomes possible to distribute the spoils of automation by default rather than concentrate them in the hands of capitalists.