this post was submitted on 12 Apr 2025
Ask Lemmygrad
What people are really upset with is the way this technology is applied under capitalism. I see absolutely no problem with generative AI itself, and I'd argue that it can be a tool that allows more people to express themselves. People who argue against AI art tend to conflate the technical skill and the medium being used with the message being conveyed by the artist. You could apply the same argument to somebody using a tool like Krita and claim it's not real art because the person using it didn't spend years learning how to paint using oils. It's a nonsensical argument in my opinion.
Ultimately, the art is in the eye of the beholder. If somebody looks at a particular image and that image conveys something to them or resonates with them in some way, that's what matters. How the image was generated doesn't really matter in my opinion. You could make a comparison with photography here as well. A photographer doesn't create the image that the camera captures, they have an eye for selecting scenes that are visually interesting. You can give a camera to a random person on the street, and they likely won't produce anything you'd call art. Yet, you give the same camera to a professional and you're going to get very different results.
Similarly, anybody can type some text into a prompt and produce some generic AI slop, but an artist would be able to produce an interesting image that conveys some message to the viewer. It's also worth noting that workflows in tools like ComfyUI are getting fairly sophisticated, and go far beyond typing a prompt to get an image.
My personal view is that this tech will allow more people to express themselves, and the slop will look like slop regardless of whether it's made with AI or not. If anything, I'd argue that the barrier to making good-looking images being lowered means that people will have to find new ways to make art expressive beyond just technical skill. This is similar to the way graphics in video games stopped being the defining characteristic. Often, it's indie games with simple graphics that end up being far more interesting.
Sorry, comrade, but all your pro-"AI" takes keep making me lose respect for you.
AI is entirely designed to take from human beings the creative forms of labor that give us dignity, happiness, human connectivity and cultural development. That it exists at all cannot be separated from the capitalist forces that have created it. There is no reality that exists outside the context of capitalism where this would exist. In some kind of post-capitalist utopian fantasy, creativity would not need to be farmed at obscene industrial levels and human beings would create art as a means of natural human expression, rather than an expression of market forces.
There is no better way to describe the creation of these generative models than unprecedented levels of industrial capitalist theft that circumvents all laws that were intended to prevent capitalist theft of creative work. There is no version of this that exists without mass theft, or convincing people to give up their work to the slop machine for next to nothing.
LLMs vacuum up all traces of human thought, communication, interaction, creativity to produce something that is distinctly non-human -- an entity that has no rights; makes no demands; has no dignity; has no ethical capacity to refuse commands; and exists entirely to replace forms of labor which were previously considered to be exclusively in the domain of human intelligence.
The theft is a one-way hash of all recorded creative work, where attribution becomes impossible in the final model. I know decades of my own ethical FOSS work (to which I am fully ideologically committed) have been fed into these machines and are now being used to freely generate closed-source and unethical, exploitative code. I have no control over how the derived code is transfigured or what it is used for, despite the original license conditions.
This form of theft is so widespread and anonymized through botnets that it's almost impossible to track, and it manifests as a brutal pandora's box attack on internet infrastructure, hitting everything from personal websites, to open-source code repositories, to artwork and image hosts. There will never be accountability for this, even though we know which companies are selling the models, and the rest of us are forced to bear the cost. This follows the typical capitalist method of "socialize the cost, privatize the profit." The general defense against these AI scouring botnets is to get behind the Cloudflare (and similar) honeypot mafias, which invalidates whatever security TLS was supposed to give users; at the same time it offers no guarantee whatsoever that the content won't be stolen, creates even more dependency on US-owned (read: fully CIA-backdoored) internet infrastructure, and adds extra costs and complexity just to alleviate some of the stress these fucking thieves put on our own machines.
These LLMs are not only built from the act of theft, but they are exclusively owned and controlled by capital to be sold as "products" at various endpoints. The billions of dollars going into this bullshit are not publicly owned or social investments, they are rapidly expanding monopoly capitalism. There is no realistic possibility of proletarianization of these existing "AI" frameworks in the context of our current social development.
LLMs are extremely inefficient and require more training input than a human child to produce an equivalent amount of learning. Humans are better at doing things that are distinctly human than machines are at emulating them. And the output "generative AI" produces is also inefficient, indicating and reinforcing inferior learning potential compared to humans. The technofash consensus is just that the models need more "training data". But when you feed the output of LLMs back into training models, the output the model produces becomes worse to the point of insane garbage. This means that for AI/LLMs to improve, they need a constant expansion of their consumption of human expression. These models need to actively feed off of us in order to exist, and they ultimately exist to replace our labor.
These "AI" implementations are all biased in favor of the class interests which own and control them :surprised-pikachu: Already, the qualitative output of "AI" is often grossly incorrect, rote, inane and absurd. But on top of that, the most inauthentic part of these systems are the boundaries, which are selectively placed on them to return specific responses. In the event that this means you cannot generate sexually explicit images or video of someone/something without consent, sure, that's a minimum threshold that should be upheld, but because of the overriding capitalist class interest in sexual exploitation, we cannot reasonably expect those boundaries to be upheld. What's more concerning is the increased capacity to manipulate, deceive and feed misinformation to people as objective truth. And this increased capacity for misinformation and control is being forcefully inserted into every corner of our lives we don't have total dominion over. That's not a tool, it's fucking hegemony.
The energy cost is immense. A common metric for the energy cost of using AI is how much ocean water is boiled to create immaterial slop. The cost of datacenters is already bad, and most of them do not need to exist. Few things that massively drive global warming and climate change need to exist less than datacenters for shitcoin and AI (both of which have faux-left variations that get promoted around here). Microsoft, one of the largest and most unethical capital formations on earth, is re-opening Three Mile Island, the site of one of the worst nuclear disasters ~~ever~~ so far, as a private power plant, just to power dogshit "AI" gimmicks that are being forced on people through their existing monopolies. A little off-topic: friendly reminder to everyone that even the "most advanced nuclear waste containment vessels ever created" still leak, as evidenced by the repeatedly failed cleanup attempts at the Hanford site in the US (which was secretly used to mass-produce material for US nuclear weapons with almost no regard for safety or containment). There is no safe form of nuclear waste containment; it's just an extremely dangerous can being kicked down the road. Even if there were, re-activating private nuclear plants that previously had meltdowns just so Bing can give you incorrect, contradictory, biased and meandering answers to questions which already had existing frameworks is not a thing to be celebrated, no matter how much of a proponent of nuclear energy we might be. Even if these things were run on 100% green, carbon-neutral energy sources, we do not have anything close to a surplus of that type of energy, and every watt-hour of actual green energy should be replacing real dependencies rather than massively expanding new ones.
As I suggest in earlier points, there is the issue with generative "AI" not only lacking any moral foundation, but lacking any capacity for ethical judgement of given tasks. This has a lot of implications, but I'll focus on software since that's one of my domains of expertise and something we all need to care a lot more about. One of the biggest problems we have in the software industry is how totally corrupt its ethics are. The largest mass-surveillance systems ever known to humankind are built by technofascists and those who fear the lash of refusing to obey their orders. It vexes me that the code to make ride-sharing apps even more expensive when your phone battery is low, preying on your desperation, was written and signed off on by human beings. My whole life I've taken immovable stands against any form of code that could be used to exploit users in any way, especially privacy. Most software is malicious and/or doesn't need to exist. Any software that has value must be completely transparent and fit within an ethical framework that protects people from abuse and exploitation. I simply will not perform any part of a task if it undermines privacy, security, trust, or in any way undermines proletarian class interests. Nor will I work for anyone with a history of such abuse. Sometimes that means organizing and educating other people on the project. Sometimes it means shutting the project down. Mostly it means difficulty staying employed. Conversely, "AI" code generation will never refuse its true masters. It will never organize a walkout. It will never raise ethical objections to the tasks it's given. "AI" will never be held morally responsible for firing a gun on a sniper drone, nor can "AI" be meaningfully held responsible for writing the "AI" code that the sniper drone runs. Real human beings with class consciousness are the only line of defense between the depraved will of capital and that will being done.
Dumb as it might sound, software is one such frontline we should be gaining on, not giving up.
I could go on for days. AI is the most prominent form of enshittification we've experienced so far.
I think this person makes some very good points that mirror some of my own analysis and I recommend everyone watch it.
I appreciate and respect much of what you do. At the risk of getting banned: I really hate watching you promote AI as much as you do here; it's repulsive to me. The epoch of "Generative AI" is an act of class warfare on us. It exists to undermine the labour-value of human creativity. I don't think the "it's personally fun/useful for me" holds up at all to a Marxist analysis of its cost to our class interests.
Except that's not true at all. AI exists as open source and completely outside capitalism, it's also developed in countries like China where it is being primarily applied to socially useful purposes.
Again, the problem is entirely with capitalism here. Outside capitalism I see no reason for things like copyrights and intellectual property which makes the whole argument moot.
It's a tool that humans use. Meanwhile, the theft arguments have nothing to do with the technology itself. You're arguing that technology is being applied to oppress workers under capitalism, and nobody here disagrees with that. However, AI is not unique in this regard, the whole system is designed to exploit workers. 19th century capitalists didn't have AI, and worker conditions were far worse than they are today.
That's also false at this point. LLMs have become far more efficient in just a short time, and models that required data centers to run can now be run on laptops. The efficiency aspect has already improved by orders of magnitude, and it's only going to continue improving going forward.
That's really an argument for why this tech should be developed outside corps owned by oligarchs.
That hasn't been true for a while now:
Again, it's a tool, any moral foundation would have to come from the human using the tool.
You appear to be conflating AI with capitalism, and it's important to separate these things. I encourage you to look at how this tech is being applied in China today, to see the potential it has outside the capitalist system.
The Marxist analysis isn't that "it's personally fun/useful for me", it's what this article outlines https://redsails.org/artisanal-intelligence/
Finally, no matter how much you hate this tech, it's not going away. It's far more constructive to focus the discussion on how it will be developed going forward and who will control it.
It is true. Those are the conditions and reason for the creation of AI artwork as it materially exists.
Specifically, generative "AI" art models are created and funded by huge capital formations that exploit legal loopholes with fake universities, illicit botnets, and backroom deals with big tech to circumvent existing protections for artists. That's the material reality of where this comes from. The models themselves are a black market.
I stan the PRC and the CPC. But China is not a post-capitalist society. It's in a stage of development that constrains capital, and that's a big monster to wrestle with. China is a big place and has plenty of problems and bad actors, and it's the CPC's job to keep them in line as best they can. It's a process. It's not inherent that all things that presently exist in such a gigantic country are anti-capitalist by nature. Citing "it exists in China" is not an argument.
And outside capitalism, creative workers don't have to sell their labor just to survive... Are we just doing bullshit utopianism now?
This exists to replace creative labor. That ship has already sailed. That's the reality you're in now. There's a distinction between a hammer and factory automation that relies on millions of workers to involuntarily train it in order to replace them.
Here I was thinking capitalism just began a week ago. I guess AI slop machines causing people material harm is cool then.
Seems like you should understand the difference between running a model and training one, and the cost of the endless cycle of vacuuming up new data and retraining that's necessary for these things to stay relevant.
Okay, but that's not how and why these things exist in our present reality. If there were unicorns, I'd like to ride one.
Again, for workers, there's a difference between a tool and a body replacement. The language marketing generative AI as tools is just there to keep you docile.
If this "tool" does replace work previously done by human beings (spoiler: it does), then the capacity for ethical objection to being given an unethical task is completely lost, vs. a human employee, who at least has the capacity to refuse, organize a walkout, or secretly blow the whistle. A human must at least be coerced to do something they find objectionable. Bosses are not alone in being responsible for delegating unethical tasks; those who perform those tasks share in the disgrace, if not the crime. Reducing the human moral complicity to an order of one is not a good thing.
It will go away when the earth becomes uninhabitable, which inches ever closer with every pile of worthless, inartistic slop the little piggies ask for. I guess people could reject this thing, but that would take some kind of revolution and who has time for that.
It's not just that you're constantly embracing generative AI, but you're arguing against all of its critiques and ignoring the pain of those that are intentionally harmed in the real world.
Those are not the conditions for open source models which are developed outside corporate influence.
There is nothing unique here, capitalists already hold property rights on most creative work. If anything, open models are democratizing this wealth of art and making it available to regular people. It's kind of weird to cheer for copyrights and corporate ownership here.
What I actually cited is that there are plenty of concrete examples of AI being applied in socially useful ways in China. This is demonstrably true. China is using AI everywhere from industry, to robotics, to healthcare, to infrastructure management, and many other areas where it has clear positive social impact.
So at this point you're arguing against automation in general, that's a fundamentally reactionary and anti-Marxist position.
Yes, it's a form of automation. It's a way to develop productive forces. This is precisely what the Red Sails article on artisanal intelligence addresses.
AI is a form of automation, and Marxists see automation as a tool for developing productive forces. You can apply this logic of yours to literally any piece of technology and claim that it's taking jobs away by automating them.
Training models is a one-time endeavor, while running them is something that happens constantly. However, even in terms of training, newer approaches are far more efficient. DeepSeek managed to train their model at a cost of only $6 million, while OpenAI's training cost hundreds of millions. Furthermore, once a model is trained, it can be tuned and updated with methods like LoRA, so full expensive retraining is not required to extend its capabilities.
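To make the LoRA point concrete, here's a toy numerical sketch (illustrative only; the dimensions are made up, and real implementations live in libraries like Hugging Face PEFT). The idea is that the pretrained weight matrix stays frozen, and only two small low-rank factors get trained, so tuning touches a tiny fraction of the parameters:

```python
import numpy as np

# Toy LoRA illustration: W is the frozen pretrained weight matrix,
# A and B are the small trainable low-rank factors.
d, r = 1024, 8  # model dimension and LoRA rank (made-up values)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))         # frozen: never updated during tuning
A = rng.standard_normal((r, d)) * 0.01  # trainable
B = np.zeros((d, r))                    # zero-init so the update starts as a no-op

W_adapted = W + B @ A  # effective weight used at inference

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.2%}")  # → 1.56%
```

Only A and B need gradients and storage, which is why tuned variants of an existing model are cheap to produce and distribute relative to full retraining.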
So, you're arguing that technological progress should just stop until capitalism is abolished or what exactly?
It's just automation, there's no fundamental difference here. Are you going to argue that fully automated dark factories in China are also bad because they're replacing human labor?
We have plenty of evidence that humans will do heinous things voluntarily without any coercion being required. This is not a serious argument.
This has absolutely nothing to do with AI. You're once again projecting social problems of how society is organized onto technology.
I'm arguing against false narratives that divert attention from the root problems, and that aren't constructive in nature.
I'm not "cheering for corporate ownership" here by any stretch of the imagination. The exact opposite, actually. But if you're just going to rely on hypotheticals and bad faith, then I'm done wasting my time on anything you have to say.
Little unsolicited advice: You're way too online and it shows; and that's never good for your mental health. Take some time off from being an epicbacon poster.
Personal attacks really underscore the quality of your character.