Too bad
Why do they have free rein to store and use copyrighted material as training data? AIs don’t learn the way a human does, so comparisons between the two learning processes don’t hold.
I wonder: if the act of picking cotton had been copyrighted, would we have gotten the cotton gin? We have automated most non-creative pursuits and displaced their workers. Is it because people can take joy in creative pursuits that we balk at this automation? If you have a particular style of picking items to fulfill Amazon orders, should that be copyrighted and protected from being used elsewhere?
Bro, the cotton gin literally led to millions more Black people being enslaved because it made slavery profitable. Worst example possible.
I literally coughed, I laughed so hard.
This is the best summary I could come up with:
The developer OpenAI has said it would be impossible to create tools like its groundbreaking chatbot ChatGPT without access to copyrighted material, as pressure grows on artificial intelligence firms over the content used to train their products.
Chatbots such as ChatGPT and image generators like Stable Diffusion are “trained” on a vast trove of data taken from the internet, with much of it covered by copyright – a legal protection against someone’s work being used without permission.
AI companies’ defence of using copyrighted material tends to lean on the legal doctrine of “fair use”, which allows use of content in certain circumstances without seeking the owner’s permission.
John Grisham, Jodi Picoult and George RR Martin were among 17 authors who sued OpenAI in September alleging “systematic theft on a mass scale”.
Getty Images, which owns one of the largest photo libraries in the world, is suing the creator of Stable Diffusion, Stability AI, in the US and in England and Wales for alleged copyright breaches.
OpenAI’s submission said it backed “red-teaming” of AI systems, where third-party researchers test the safety of a product by emulating the behaviour of rogue actors.
The original article contains 530 words, the summary contains 190 words. Saved 64%. I'm a bot and I'm open source!
Yeah, I also have no way to own a billion dollars. Sucks for both of us...
My hot take is that it's not like most of those independent artists are getting compensated fairly, if at all, by the companies that own their work anyway. Stealing AI training content is just stealing from corporations. Corporations that are probably politically fighting to keep things worse for the average person in your country.
Theft is "a crime" but I never saw anyone complaining about how unfair it was all those times I myself got fucked over by google bullshitting their way out of giving me my ad revenue. If normal people can't profit from stuff like this, we shouldn't be doing anything to protect the profits of evil corporations.
Sounds like a fatal problem. That's a shame.