this post was submitted on 03 Apr 2024
961 points (99.4% liked)
Technology
If you have ever encountered an AI hallucinating things that simply do not exist, you know how bad the idea of AI-enhanced evidence actually is.
Everyone uses the word "hallucinate" when describing visual AI because it's normie-friendly and cool sounding, but the results are a product of math. Very complex math, yes, but computers aren't taking drugs and randomly pooping out images because computers can't do anything truly random.
You know what else uses math? Basically every image modification algorithm, including resizing. I wonder how this judge would feel about viewing a 720p video on a 4k courtroom TV because "hallucination" takes place in that case too.
There is a huge difference between interpolating pixels and inserting whole objects into pictures.
Both insert pixels that didn't exist before, so where do we draw the line of how much of that is acceptable?
Look at it this way: if you have an unreadable licence plate because of low resolution, interpolation won't make it readable (as long as we haven't switched to a CSI universe). An AI, on the other hand, could just "invent" (I know, I know, normie speak in your eyes) a readable one.
You will draw the line yourself when you get your first speeding ticket for a car that wasn't yours.
License plates are an interesting case, because with a known set of visual symbols (the known fonts used by approved plate issuers) you can often accurately deblur even very blurry text - not with AI algorithms, but by modeling the blur of the cameras and the unique blur gradients this produces for each letter. It does require a certain minimum pixel resolution of the letters to guarantee unambiguity, though.
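A minimal sketch of that non-AI approach (a hypothetical 1-D toy in numpy, not any real plate-reading system): when the blur kernel is known, deblurring is deterministic inverse filtering, and the recovered signal is fully accounted for by the input and the kernel.

```python
import numpy as np

# Hypothetical 1-D sketch of deblurring with a *known* blur kernel via
# regularized inverse filtering -- deterministic signal processing, not
# a generative model. `eps` damps frequencies the kernel barely passes.
def deblur(blurred, kernel, eps=1e-3):
    n = len(blurred)
    K = np.fft.fft(kernel, n)                 # blur transfer function
    B = np.fft.fft(blurred)
    X = B * np.conj(K) / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft(X))

# Toy "plate" pattern, blurred (circularly) by a known asymmetric kernel.
sharp = np.array([0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0], dtype=float)
kernel = np.array([0.6, 0.3, 0.1])
blurred = np.real(np.fft.ifft(np.fft.fft(sharp) * np.fft.fft(kernel, len(sharp))))
recovered = deblur(blurred, kernel)           # close to `sharp` again
```

Run the same input through twice and you get bit-identical output; there is no prior to "invent" a different plate.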
Interesting example, because tickets issued by automated cameras aren't enforced in most places in the US. You can safely ignore those tickets and the police won't do anything about it because they know how faulty these systems are and most of the cameras are owned by private companies anyway.
"Readable" is a subjective matter of interpretation, so again, I'm confused about how exactly you're distinguishing good and pure fictional pixels from bad and evil fictional pixels.
Whether tickets are enforced or not neither changes my argument nor invalidates it.
You are acting stubborn and childish. Everything there was to say has been said. If you still think you are right, so be it, as you are not able or willing to understand. Let me be clear: I think you are trolling, and I'm not in any mood to participate in this anymore.
Sorry, it's just that I work in a field where making distinctions is based on math and/or logic, while you're distinguishing AI-based from non-AI-based image interpolation based on opinion and subjective observation.
Okay, I'm not disagreeing with you about the fact that it's all math.
However, interpolation of pixels is simple math. AI generation is complex math and is only as good as its training data.
The licence plate example is a good one. Interpolation will just find some average or midpoint and fill in the pixel. An AI generator, if its training set contained your number plate 999 times out of 1000, will generate your number plate no matter whose plate you feed in. To use it as evidence, it would need to be far more deterministic than the probabilistic nature of AI-generated content allows.
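As a toy illustration of that "simple math" (a hypothetical example using numpy's 1-D linear interpolation): every filled-in sample is an exact weighted average of its two neighbours, and re-running it can only ever give the same answer.

```python
import numpy as np

# Toy sketch: 2x upscaling of one pixel row by linear interpolation.
# Every new sample is a fixed average of its two nearest originals --
# no training data, no probability, no invented detail.
def upscale_2x(row):
    old_x = np.arange(len(row))
    new_x = np.linspace(0, len(row) - 1, 2 * len(row) - 1)
    return np.interp(new_x, old_x, row)

row = np.array([0.0, 1.0, 0.0])
up = upscale_2x(row)    # midpoints land exactly halfway: [0, .5, 1, .5, 0]
```

A generative upscaler has no such closed-form guarantee; its output depends on what it was trained on.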
Wait what? No.
It's entirely possible that if you ignore the ticket, a human might review it and find there's insufficient evidence. But if, for example, you ran a red light and they have a photo that shows your number plate and your face... then you don't want to ignore that ticket. And they generally take multiple photos, so even if the one you received on the ticket doesn't identify you, that doesn't mean you're safe.
When automated infringement systems were brand new the cameras were low quality / poorly installed / didn't gather evidence necessary to win a court challenge... getting tickets overturned was so easy they didn't even bother taking it to court. But it's not that easy now, they have picked up their game and are continuing to improve the technology.
Also - if you claim someone else was driving your car, and then they prove in court that you were driving... congratulations, your slap on the wrist fine is now a much more serious matter.
I mean we "invent" pixels anyway for pretty much all digital photography based on Bayer filters.
But the answer is linear interpolation. That's where we draw the line. We have to be able to point to a line of code and say where the data came from, rather than a giant blob of image data that could contain anything.
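To make the Bayer point concrete, here is a hypothetical sketch of the simplest demosaicing step: the "invented" green value at a red or blue sensor site is a plain average of named neighbours, so you can point at exactly where the data came from.

```python
import numpy as np

# Hypothetical sketch of naive Bayer demosaicing: the missing green
# value at a red/blue sensor site is the mean of its four green
# neighbours -- an auditable formula, not a learned prior.
def green_at(bayer, y, x):
    return (bayer[y - 1, x] + bayer[y + 1, x]
            + bayer[y, x - 1] + bayer[y, x + 1]) / 4.0

# Toy 3x3 patch with green samples above/below/left/right of the center.
patch = np.array([[0.0, 4.0, 0.0],
                  [8.0, 0.0, 12.0],
                  [0.0, 16.0, 0.0]])
g = green_at(patch, 1, 1)   # (4 + 16 + 8 + 12) / 4 = 10.0
```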
What’s your bank account information? I’m either going to add or subtract a lot of money from it. Both alter your account balance so you should be fine with either right?
Has this argument ever worked on anyone who has ever touched a digital camera? “Resizing video is just like running it through AI to invent details that didn’t exist in the original image”?
“It uses math” isn’t the complaint and I’m pretty sure you know that.
Whenever people say things like this, I wonder why that person thinks they're so much better than everyone else.
Tangentially related: the more people seem to support AI for all the things, the less they turn out to understand it.
I work in the field. I had to explain to a CIO that his beloved "ChatPPT" was just autocomplete. He became enraged. We implemented a 2015 chatbot instead, and he got his bonus.
We have reached the winter of my discontent. Modern life is rubbish.
Normie, layman... as you've pointed out, it's difficult to use these words without sounding condescending (which I didn't mean to be). The media using words like "hallucinate" to describe linear algebra is necessary because most people just don't know enough math to understand the fundamentals of deep learning - which is completely fine, people can't know everything and everyone has their own specialties. But any time you simplify science so that it can be digestible by the masses, you lose critical information in the process, which can sometimes be harmfully misleading.
Or sometimes the colloquial term people have picked up is a simplified tool for getting the right point across.
Just because it's guessing using math doesn't mean it isn't, in a sense, hallucinating the additional data. The data did not exist before, and the model willed it into existence, much like a hallucination, while the word makes it easy for people to quickly grasp that the output is not trustworthy, thanks to previous definitions and understandings of the term.
Part of language is finding the right words so that people can quickly understand topics, even if it means giving up nuance. But it should absolutely be aimed at getting them to the right conclusion, even in a simplified form, which doesn't always happen when there is bias. I think this one works just fine.
It’s not just the media who uses this term. According to this study which I’ve had a very brief skim of, the term “hallucination” was used in literature as early as 2000, and in Table 1, you can see hundreds of studies from various databases which they then go on to analyse the use of “hallucination” in.
It’s worth saying that this study is focused on showing how vague the term is, and how many different and conflicting definitions of “hallucination” there are in the literature, so I for sure agree it’s a confusing term. Just it is used by researchers as well as laypeople.
LLMs (the models that "hallucinate" is most often used in conjunction with) are not Deep Learning, normie.
https://en.m.wikipedia.org/wiki/Large_language_model
https://en.m.wikipedia.org/wiki/Neural_network_(machine_learning)
I’m not going to bother arguing with you but for anyone reading this: the poster above is making a bad faith semantic argument.
In the strictest technical terms, AI, ML, and Deep Learning are distinct, and they have specific applications.
This insufferable asshat is arguing that since they all use fuel, fire, and air, they are all engines. Which isn't wrong, but it's also not the argument we are having.
@OP good day.
When you want to cite sources like me instead of making personal attacks, I’ll be here 🙂
I said good day.
Ok but before you go, just want to make sure you know that this statement of yours is incorrect:
Actually, they are not the distinct, mutually exclusive fields you claim they are. ML is a subset of AI, and Deep Learning is a subset of ML. AI is a very broad term for programs that emulate human perception and learning. As you can see in the last intro paragraph of the AI wikipedia page (whoa, another source! aren't these cool?), some examples of AI tools are listed:
Some of these - mathematical optimization, formal logic, statistics, and artificial neural networks - comprise the field known as machine learning. If you'll remember from my earlier citation about artificial neural networks, "deep learning" is when artificial neural networks have more than one hidden layer. Thus, DL is a subset of ML is a subset of AI (wow, sources are even cooler when there's multiple of them that you can logically chain together! knowledge is fun).
Anyways, good day :)
Sure, no drugs involved, but they are running a pseudorandom number generator and using that (along with non-random data) to generate the image.
The result is this: ask for the same image twice and you get two different images. Similar, but clearly not the same person - sisters or cousins perhaps - and nowhere near usable as evidence in court.
Tell me you don't know shit about AI without telling me you don't know shit. You can easily reproduce the exact same image by defining the starting seed and constraining the network to a specific sequence of operations.
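A minimal illustration of that point (a toy stand-in for a diffusion sampler, using Python's stdlib PRNG rather than any real model): pin the seed and the whole sampling sequence, and therefore the output, repeats exactly.

```python
import random

# Toy stand-in for a generative sampler: the "randomness" is a seeded
# PRNG, so fixing the seed reproduces the exact same output, while
# leaving it unset draws a fresh sequence each run.
def sample_pixels(seed=None, n=8):
    rng = random.Random(seed)          # deterministic once seeded
    return [rng.randrange(256) for _ in range(n)]

run_a = sample_pixels(seed=42)
run_b = sample_pixels(seed=42)         # bit-for-bit identical to run_a
```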
But if you don't do that, then the ML engine doesn't have the introspective capability to realize it failed to recreate an image.
And if you take your eyes off of their sockets you can no longer see. That's a meaningless statement.
The point is that the AI 'enhanced' photos have nice clear details that are randomly produced, and thus should not be relied on. Are you suggesting that we can work around that problem by choosing a random seed manually? Do you think that solves the problem?
It’s not AI, it’s PISS. Plagiarized information synthesis software.
Just like us!
Technically incorrect - computers can be supplied with sources of entropy, so while it's true that they will produce the same output given identical inputs, it is in practice quite possible to ensure that they do not receive identical inputs if you don't want them to.
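For example (a stdlib sketch): `os.urandom` pulls from the operating system's entropy pool, so two reads are practically never identical, while a PRNG fed the same seed stays perfectly repeatable.

```python
import os
import random

# Sketch of the distinction: OS entropy vs. a deterministic PRNG.
entropy_a = os.urandom(16)   # drawn from the OS entropy pool
entropy_b = os.urandom(16)   # practically never equals entropy_a

prng_a = random.Random(1234).random()   # same seed, same input...
prng_b = random.Random(1234).random()   # ...identical output
```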
IIRC there was a random number generator website where the machine was hooked up to a potato or some shit.
Bud, "hallucinate" is a perfect term for the shit AI creates, because it doesn't understand reality, regardless of whether math is creating that hallucination or not.