At least anecdotally, Andreas over at 82MHz.net tried running an AI model locally on his laptop and it took over 10 minutes for just one prompt.
OK just the 4th sentence clearly shows this person has no clue what they're talking about.
Yep, clueless. I stopped reading at that point. For the audience, large language models come in all sizes and you can run some small but useful ones fairly quickly even without a GPU. They keep getting more capable for the size as well. Remember the uproar about Deepseek R1? Well, progress hasn’t stopped.
It's not even that. It's like trying to run an AAA game on a 10 year old laptop and complaining the game is garbage because your frame rates are too low.
These endless "AI bad" articles are annoying. It's just click bait at this point.
Energy use: false. His example was someone using a 13-year-old laptop to get a result and then extrapolating energy use from that. Running AI locally uses the same energy as playing a 3D AAA game for the same time. No one screams about the energy footprint of playing games.
AAA game development energy use (thousands of developers, all with watt-burning GPUs, spending years creating assets) dwarfs AI model-building energy use.
Copyright, yes it's a problem and should be fixed. But stealing is part of capitalism. Google search itself is based on stealing content and then selling ads to find that content. The entire "oh, we might send some clicks your way that you might be able to be compensated for" is backwards.
His last reason was new and completely absurd: he doesn't like AI because he doesn't like Musk. Given the public hatred between OpenAI and Musk, it's bizarre. Yes, Musk has his own AI. But Musk also has electric cars, and space travel. Does the author hate all EVs too? Of course not; that argument was added by the author as a troll to get engagement.
OP said "people like Musk" not just Musk. He's just the easiest example to use.
Copyright, yes it's a problem and should be fixed.
The quick fix: stick to open-source like Jan.ai.
Long-term solution: make profiting AI companies pay for UBI. How to actually calculate that, though, is anyone's guess...
Don't make "profiteering AI companies" pay for UBI. Make all companies pay for UBI. Just tax their income and turn it around into UBI payments.
One of the major benefits of UBI is how simple it is. The simpler the system is the harder it is to game it. If you put a bunch of caveats on which companies pay more or pay less based on various factors, then there'll be tons of faffing about to dodge those taxes.
Running AI locally uses the same energy as playing a 3D AAA game for the same time
I wonder if they're factoring in the energy usage to train the model. That's what consumes most of the power.
Hi, I'm the writer of the article.
To be clear I am not trying to attack anyone who uses AI, just explain why I don't use it myself.
Energy use: false
I don't dispute that AI energy is/might be comparable to other things like making a AAA game (or other things like traveling). I also don't want to say that 'AI is bad'. However if I used AI more, I would still play the same amount of video games, thus increasing the total energy use. If I was to use AI it would probably replace lower energy activities like writing or searching the internet.
Copyright, yes it’s a problem and should be fixed. But stealing is part of capitalism. Google search itself is based on stealing content and then selling ads to find that content.
I agree with you that the copyright angle is a bad way to attack AI; however, AI does seem to 'give back' to creatives even less than other things like search, while also actively competing with them in a way that search doesn't. This isn't my main objection, so I don't really want to focus on it.
His last reason was new and completely absurd
I considered leaving out the "I just don't like it" reason but I wanted to be completely transparent that my decision isn't objective. This is only one reason out of many - if it was just this problem then I would be quicker to ignore it. I get your point about EVs - I don't hate them despite the fact that Musk is/was an advocate for them. If I was to use an AI it would be something like Jan.ai which @Flagstaff@programming.dev mentioned.
Do you agree with me on my other main point on reliability?
I agree, there are still good reasons not to use commercial AI products though.
https://www.mintpressnews.com/trump-killed-minerva-stargate-make-secret-more-dangerous/289313
A new AI/informational war arms race? Whatever, because...
I just don't like it
Copyright, yes it's a problem and should be fixed.
No, this is just playing into another of the common anti-AI fallacies.
Training an AI does not do anything that copyright is even involved with, let alone prohibited by. Copyright is solely concerned with the copying of specific expressions of ideas, not about the ideas themselves. When an AI trains on data it isn't copying the data, the model doesn't "contain" the training data in any meaningful sense. And the output of the AI is even further removed.
People who insist that AI training is violating copyright are advocating for ideas and styles to be covered by copyright. Or rather by some other entirely new type of IP protection, since as I said this is nothing at all like what copyright already deals with. This would be an utterly terrible thing for culture and free expression in general if it were to come to pass.
I get where this impulse comes from. Modern society has instilled a general sense that everything has to be "owned" by someone, even completely abstract things. Everyone thinks that they're owed payment for everything that they can possibly demand payment for, even if it's something that just yesterday they were doing purely for fun and releasing to the world without a care. There's this base impulse of "mine! Therefore I must control it!" Ironically, it's what leads to the capitalist hellscape so many people are decrying at the same time they demand more.
When an AI trains on data it isn’t copying the data, the model doesn’t “contain” the training data in any meaningful sense.
And what's your evidence for this claim? It seems to be false given the times people have tricked LLMs into spitting out verbatim or near-verbatim copies of training data. See this article as one of many examples out there.
People who insist that AI training is violating copyright are advocating for ideas and styles to be covered by copyright.
Again, what's the evidence for this? Why do you think that of all the observable patterns, the AI will specifically copy "ideas" and "styles" but never copyrighted works of art? The examples from the above article contradict this as well. AIs don't seem to be able to distinguish between abstract ideas like "plumbers fix pipes" and specific copyright-protected works of art. They'll happily reproduce either one.
That article is over a year old. The NYT case against OpenAI turned out to be quite flimsy, their evidence was heavily massaged. What they did was pick an article of theirs that was widely copied across the Internet (and thus likely to be "overfit", a flaw in training that AI trainers actively avoid nowadays) and then they'd give ChatGPT the first 90% of the article and tell it to complete the rest. They tried over and over again until eventually something that closely resembled the remaining 10% came out, at which point they took a snapshot and went "aha, copyright violated!"
They had to spend a lot of effort to get that flimsy case. It likely wouldn't work on a modern AI; training techniques are much better now, overfitting is better avoided, and synthetic data is used.
Why do you think that of all the observable patterns, the AI will specifically copy "ideas" and "styles" but never copyrighted works of art?
Because it's literally physically impossible. The classic example is Stable Diffusion 1.5, which had a model size of around 4 GB and was trained on over 5 billion images (the LAION-5B dataset). If it was actually storing the images it was being trained on then it would be compressing them to under 1 byte of data.
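As a rough sanity check on that arithmetic (the ~4 GB and ~5 billion figures are the ones quoted above; the snippet itself is just illustrative):

```python
# Back-of-envelope check: bytes of model weights per training image,
# using the figures quoted above (~4 GB checkpoint, ~5 billion LAION-5B images).
model_size_bytes = 4 * 1024**3        # ~4 GB Stable Diffusion 1.5 checkpoint
training_images = 5_000_000_000       # LAION-5B: over 5 billion image-text pairs

bytes_per_image = model_size_bytes / training_images
print(f"{bytes_per_image:.2f} bytes per image")  # ~0.86 bytes, i.e. under 1 byte each
```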
AIs don't seem to be able to distinguish between abstract ideas like "plumbers fix pipes" and specific copyright-protected works of art.
This is simply incorrect.
The NYT was just one example. The Mario examples didn't require any such techniques. Not that it matters. Whether it's easy or hard to reproduce such an example, it is definitive proof that the information can in fact be encoded in some way inside of the model, contradicting your claim that it is not.
If it was actually storing the images it was being trained on then it would be compressing them to under 1 byte of data.
Storing a copy of the entire dataset is not a prerequisite to reproducing copyright-protected elements of someone's work. Mario's likeness itself is a protected work of art even if you don't exactly reproduce any (let alone every) image that contained him in the training data. The possibility of fitting the entirety of the dataset inside a model is completely irrelevant to the discussion.
This is simply incorrect.
Yet evidence supports it, while you have presented none to support your claims.
Learning what a character looks like is not a copyright violation. I'm not a great artist but I could probably draw a picture that's recognizably Mario, does that mean my brain is a violation of copyright somehow?
Yet evidence supports it, while you have presented none to support your claims.
I presented some, you actually referenced what I presented in the very comment where you're saying I presented none.
You can actually support your case very simply and easily. Just find the case law where AI training has been ruled a copyright violation. It's been a couple of years now (as evidenced by the age of that news article you dug up), yet all the lawsuits are languishing or defunct.
Learning what a character looks like is not a copyright violation
And nobody claimed it was. But you're claiming that this knowledge cannot possibly be used to make a work that infringes on the original. This analogy about whether brains are copyright violations makes no sense and is not equivalent to your initial claim.
Just find the case law where AI training has been ruled a copyright violation.
But that's not what I claimed is happening. It's also not the opposite of what you claimed. You claimed that AI training is not even in the domain of copyright, which is different from something that is possibly in that domain, but is ruled to not be infringing. Also, this all started by you responding to another user saying the copyright situation "should be fixed". As in they (and I) don't agree that the current situation is fair. A current court ruling cannot prove that things should change. That makes no sense.
Honestly, none of your responses have actually supported your initial position. You're constantly moving to something else that sounds vaguely similar but is neither equivalent to what you said nor a direct response to my objections.
But you're claiming that this knowledge cannot possibly be used to make a work that infringes on the original.
I am not. The only thing I've been claiming is that AI training is not copyright violation, and the AI model itself is not copyright violation.
As an analogy, you can use Photoshop to draw a picture of Mario. That does not mean that Photoshop is violating copyright by existing, and Adobe is not violating copyright by having created Photoshop.
You claimed that AI training is not even in the domain of copyright, which is different from something that is possibly in that domain, but is ruled to not be infringing.
I have no idea what this means.
I'm saying that the act of training an AI does not perform any actions that are within the realm of the actions that copyright could actually say anything about. It's like if there's a law against walking your dog without a leash, and someone asks "but does it cover aircraft pilots' licenses?" No, it doesn't, because there's absolutely no commonality between the two subjects. It's nonsensical.
Honestly, none of your responses have actually supported your initial position.
I'm pretty sure you're misinterpreting my position.
The "copyright situation" regarding an actual literal picture of Mario doesn't need to be fixed because it's already quite clear. There's nothing that needs to change to make an AI-generated image of Mario count as a copyright violation, that's what the law already says and AI's involvement is irrelevant.
When people talk about needing to "change copyright" they're talking about making something that wasn't illegal previously into something that is illegal after the change. That's presumably the act of training or running an AI model. What else could they be talking about?
The only thing I’ve been claiming is that AI training is not copyright violation
What's the point? Are you talking specifically about some model that was trained and then put on the shelf to never be used again? Cause that's not what people are talking about when they say that AI has a copyright issue. I'm not sure if you missed the point or this is a failed "well, actually" attempt.
When an AI trains on data it isn’t copying the data, the model doesn’t “contain” the training data in any meaningful sense.
I'd say it can be a problem because there have been examples of getting AIs to spit out entire copyrighted passages. Furthermore, some works can have additional restrictions on their use. I couldn't, for example, train an AI on Linux source code, have it spit out the exact source code, and then slap my own proprietary commercial license on it to bypass the GPL.
If a larger YouTuber steals the script and content of a video from a smaller YouTuber, as far as I know, it wouldn't be illegal. It would hurt the smaller YouTuber and benefit the larger one. It would make people mad if they found out about it, but there wouldn't be people proposing to change copyright law to include ideas.
I am using YouTubers as the example because this happened, a lot of people got angry, and it's similar to the AI situation.
People can complain that something unethical is legal without having to propose flawless new copyright laws.
Sure. But that's not what's happening when an AI is trained. It's not "stealing" the script or content of the video, it's analyzing them.
I find it funny that in the year 2000 while attending philosophy at University of Copenhagen I predicted strong AI around 2035. This was based on calculations of computational power, and estimates of software development trailing a bit.
At the time I had already been interested in AI development and matters of consciousness for many years. And I was a decent programmer; I had already written self-modifying code back in 1982. So I made this prediction at a time when AI wasn't a very popular topic, in the middle of a decades-long, futile desert walk without much progress.
And for about 15 years, very little continued to happen. It was pretty obvious the approach behind, for instance, Deep Blue wasn't the way forward. But that seemed to be the norm for a long time.
But it looks to me that the understanding of how to build a strong AI is much much closer now, as I expected. We might actually be halfway there!
I think we are pretty close to having the computational power needed now in AI specific datacenter clusters, but the software isn't quite there yet.
I'm honestly not that interested in the current level of AI; although LLMs can yield very impressive results at times, they're also flawed, and I see them as somewhat transitional.
For instance, partially self-driving cars are kind of irrelevant IMO. But truly self-driving cars will make all the difference regarding how useful they are, and will be a cool achievement for the current level of AI evolution when achieved.
So current level AI can be useful, but when we achieve strong AI it will make all the difference!
Edit PS:
Obviously my prediction relied on the assumption that brains and consciousness are natural phenomena, that don't require a god. An assumption I personally consider a fact.
I find it funny that in the year 2000 while attending philosophy at University of Copenhagen I predicted strong AI around 2035.
That seems to be aging well. But what is the definition of "strong AI"?
Self aware consciousness on a human level. So it's still far from a sure thing, because we haven't figured consciousness out yet.
But I'm still very happy with my prediction, because AI is now at a way more useful and versatile level than ever, the use is already very widespread, and the research and investments have exploded the past decade. And AI can do things already that used to be impossible, for instance in image and movie generation and manipulation.
But I think the code will be cracked soon, because self-awareness is a thing of many degrees. For instance, a dog is IMO obviously self-aware, but that isn't universally recognized, because it doesn't have the same degree of self-awareness humans have.
This is a problem that dates back to the 17th century and Descartes, who claimed, for instance, that horses and dogs were mere automatons and therefore couldn't feel pain.
This is of course completely in line with the Christian doctrine that animals don't have souls.
But to me it seems self-awareness, like emotions, doesn't have to start at a human level; it can start at a simpler level that can then be developed further.
PS:
It's true animals don't have souls, in the sense of something magical provided by a god, because nobody has. Souls are not necessary to explain self awareness or consciousness or emotions.
Self aware consciousness on a human level.
How do you operationally define consciousness?
To understand what "I think therefore I am" means is a very high level of consciousness.
At lower levels things get more complicated to explain.
Good question.
Obviously the Turing test doesn't cut it, which I suspected already back then. And I'm sure when we finally have a self aware conscious AI, it will be debated violently.
We may think we have it before it's actually real; some claim to believe that some of the current systems already display traits of consciousness. I don't believe it's even close yet, though.
As wrong as Descartes was about animals, he still nailed it with "I think therefore I am" (cogito, ergo sum) https://www.britannica.com/topic/cogito-ergo-sum.
Unfortunately that's about as far as we can get, before all sorts of problems arise regarding actual evidence. So philosophically in principle it is only the AI itself that can know for sure if it is truly conscious.
All I can say is that with the level of intelligence current leading AIs have, they make silly mistakes that would be obvious if they were really conscious.
For instance, as strong as they seem at analyzing logic problems, they fail to realize that 1+1=2 <=> 2=1+1.
Such things will of course be ironed out, and maybe this one already has been. But it shows the current model isn't good enough for the basic comprehension I would think would follow from consciousness.
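For what it's worth, the equivalence itself is nothing deeper than symmetry of equality; here is a minimal Lean sketch, purely as an illustration:

```lean
-- The biconditional 1+1=2 ↔ 2=1+1 follows from symmetry of equality alone.
example : (1 + 1 = 2) ↔ (2 = 1 + 1) :=
  ⟨Eq.symm, Eq.symm⟩
```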
Luckily there are people that know much more about this, and it will be interesting to hear what they have to say, when the time arrives. 😀
Obviously the Turing test doesn’t cut it, which I suspected already back then.
The Turing test is misunderstood a lot. Here's Wikipedia on the Turing test:
[Turing] opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words". Turing describes the new form of the problem in terms of a three-person party game called the "imitation game", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"
One should bear in mind that scientific methodology was not very formalized at the time. Today, it is self-evident to any educated person that the "judges" would have to be blinded, which is the whole point of the text chat setup.
What has been called "Turing test" over the years is simultaneously easier and harder. Easier, because these tests usually involved only a chat without any predetermined task that requires thinking. It was possible to pass without having to think. But also harder, because thinking alone is not sufficient. One has to convince an interviewer that one is part of the in-group. It is the ultimate social game; indeed, often a party game (haha, I made a pun). Turing himself, of course, eventually lost such a game.
All I can say is that with the level of intelligence current leading AIs have, they make silly mistakes that would be obvious if they were really conscious.
For instance, as strong as they seem at analyzing logic problems, they fail to realize that 1+1=2 <=> 2=1+1.
This connects consciousness to reasoning ability in some unclear way. The example seems unfortunate, since humans need training to understand it. Most people in developed countries would agree that the equivalence is formally correct, but very few would be able to prove it. Most wouldn't even know how to spell Peano Axiom; nor would they even try (Oh, luckier bridge and rail!)
I know about the Turing test, it's what we were taught about and debated in philosophy class at University of Copenhagen, when I made my prediction that strong AI would probably be possible about year 2035.
to exhibit intelligent behaviour equivalent to that of a human
Here equivalent actually means indistinguishable from a human.
But as a test of consciousness that is not a fair test, because obviously a consciousness can be different from a human, and our understanding of how a simulation can fake something without it being real is also a factor.
But the original question remains, how do we decide it's not conscious if it responds as if it is?
This connects consciousness to reasoning ability in some unclear way.
Maybe it's unclear because you haven't pondered the connection? Our consciousness is a very big part of our reasoning; consciousness definitely guides our reasoning, and it improves the level of reasoning we are capable of.
I don't see why the example requiring training for humans to understand is unfortunate. A leading AI has way more training than would ever be possible for any human, yet it still doesn't grasp basic concepts, even though its knowledge is way bigger than any human's.
It's hard to explain, but intuitively it seems to me the missing factor is consciousness. It has learned tons of information by heart, but it doesn't really understand any of it, because it isn't conscious.
Being conscious is not just to know what the words mean, but to understand what they mean.
I think therefore I am.
I don’t see why the example requiring training for humans to understand is unfortunate.
Humans aren't innately good at math. I wouldn't have been able to prove the statement without looking things up. I certainly would not be able to come up with the Peano Axioms, or anything comparable, on my own. Most people, even educated people, probably wouldn't understand what there is to prove. Actually, I'm not sure if I do.
It's not clear why such deficiencies among humans do not argue against human consciousness.
A leading AI has way more training than would ever be possible for any human, yet it still doesn't grasp basic concepts, even though its knowledge is way bigger than any human's.
That's dubious. LLMs are trained on more text than a human ever sees, but humans are trained on data from several senses. I guess it's not entirely clear how much data that is, but it's a lot and very high quality. Humans are trained on that sense data and not on text. Humans read text and may learn from it.
Being conscious is not just to know what the words mean, but to understand what they mean.
What might an operational definition look like?
Regarding energy/water use:
ChatGPT uses 3 Wh. This is enough energy to: [...] Play a gaming console for 1 minute.
If you want to prompt ChatGPT 40 times, you can just stop your shower 1 second early. If you normally take a 5 minute shower, set a timer for 299 seconds instead, and you’ll have saved enough water to justify 40 ChatGPT prompts.
(Source: https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about)
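As a rough back-of-envelope check on the console comparison (the ~180 W console power draw here is my own assumption, not a figure from the source):

```python
# Sanity check on "3 Wh per prompt ≈ 1 minute of console gaming",
# assuming a console draws roughly 180 W under load (assumption, not from the source).
prompt_energy_wh = 3            # claimed energy for one ChatGPT prompt
console_power_w = 180           # assumed gaming-console power draw

minutes_of_gaming = prompt_energy_wh / console_power_w * 60
print(f"{minutes_of_gaming:.1f} minutes of console play per prompt")  # ~1.0 minute
```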
I recall all the same arguments about how much energy and carbon are involved in performing one Google search. Does anyone care? Nope.
I’ve always ignored the energy issue on the assumption that it will be optimized away. Right now, leapfrogging the competition to new levels of functionality is what’s important. But when (if?) these tools settle into true mass usage, the eggheads will have every incentive to focus on optimization to save on operating costs. When that finally starts happening, we will know that AI has passed out of its era as a speculative bet and into prime time as an actual product.