this post was submitted on 22 Feb 2024
75 points (96.3% liked)

top 12 comments
[–] will_a113@lemmy.ml 25 points 7 months ago (1 children)

I read the article yesterday and have been exposed to it a dozen times since, but I still laugh every time I see the phrase “racially diverse nazis”

[–] MacNCheezus 1 points 7 months ago* (last edited 7 months ago)

What’s funny about this is that it IS, in fact, at least somewhat historically accurate: https://en.wikipedia.org/wiki/Free_Arabian_Legion

While they did wear separate uniforms from the Nazis, there were in fact both Blacks and Arabs fighting on the side of the Nazi regime in WW2. Asians too, of course, since Japan was allied with Germany.

[–] PullUpCircuit@iusearchlinux.fyi 13 points 7 months ago (2 children)

Pretty sure these tools are often seeded with prompts that enforce diversity. Bing does the same or similar. I'm more amused by this than anything, as the process isn't aware of these settings and can't actively enable or disable them.

To actively fit a historical prompt, it would need to not only consider images from the period, but also properly synthesize historical data to go with the prompt.

[–] zifnab25@hexbear.net 6 points 7 months ago (1 children)

That would require some kind of machine capable of learning, a model of language so incredibly large that it can comprehend these linguistic nuances, or an intelligent form of artificial device.

Wonder if we'll ever have something like that in the future.

[–] Aquilae@hexbear.net 4 points 7 months ago (1 children)

I mean, we ourselves are just electronic meat machines (with millions of years' worth of fine-tuning).

I'm sure that'll happen at some point in the future, if we manage to not destroy ourselves and/or the planet by then.

[–] zifnab25@hexbear.net 2 points 7 months ago

There's a sci-fi horror book I enjoyed, "John Dies at the End", that posits an alternate history in which computers were created from the brains of pigs.

As a consequence, that civilization is as heavily invested in harvesting organs as we are in drilling for oil.

[–] MacNCheezus 3 points 7 months ago* (last edited 7 months ago) (1 children)

Yes, I saw some talk and a screenshot somewhere showing that, in its current state, Gemini can (or could) be asked to output the prompt enhancements it used along with the generated images.

The screenshot showed someone asking for images of fruit, and the enhanced prompt included "racially diverse groups of people". Now if they're inserting something like that even for images containing no people at all, it stands to reason that this is just a default enhancement they ALWAYS apply, no matter the prompt, which would explain the racially diverse Nazis (and all the other brouhahas we've seen from them).

[–] PullUpCircuit@iusearchlinux.fyi 2 points 7 months ago

That's really what I'm expecting. My guess is that the training data is skewed, and the prompt cannot adjust.

Either the machine will need to understand what is expected, or the company will need to address this and allow people to enable or disable diversity.

The first option may be impossible to attain at this stage. The second can lead to inappropriate images.

[–] gitgud@lemmy.ml 7 points 7 months ago

Racially diverse nazis

That tracks for google

[–] fartsparkles@sh.itjust.works 5 points 7 months ago

I feel some variant of Conway's Law comes into play with AI and these biases in training sets, and that there'll be no way to address it without first addressing the biases in society.

[–] autotldr@lemmings.world 2 points 7 months ago

This is the best summary I could come up with:


Google has apologized for what it describes as “inaccuracies in some historical image generation depictions” with its Gemini AI tool, saying its attempts at creating a “wide range” of results missed the mark.

The statement follows criticism that it depicted specific white figures (like the US Founding Fathers) or groups like Nazi-era German soldiers as people of color, possibly as an overcorrection to long-standing racial bias problems in AI.

Over the past few days, however, social media posts have questioned whether Gemini fails to produce historically accurate results in an attempt at racial and gender diversity.

The criticism was taken up by right-wing accounts that requested images of historical groups or figures like the Founding Fathers and purportedly got overwhelmingly non-white AI-generated people as results.

Image generators are trained on large corpuses of pictures and written captions to produce the “best” fit for a given prompt, which means they’re often prone to amplifying stereotypes.

“The stupid move here is Gemini isn’t doing it in a nuanced way.” And while entirely white-dominated results for something like “a 1943 German soldier” would make historical sense, that’s much less true for prompts like “an American woman,” where the question is how to represent a diverse real-life group in a small batch of made-up portraits.


The original article contains 766 words, the summary contains 211 words. Saved 72%. I'm a bot and I'm open source!

[–] Quastamaza@lemmy.ml 1 points 7 months ago

What a bunch of ever-identical bullshit...