this post was submitted on 21 Nov 2024
208 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

top 24 comments
[–] Catoblepas@lemmy.blahaj.zone 45 points 3 weeks ago

[feeds AI garbage data filled with racial bias] How did this AI become racist??

[–] Gullible@sh.itjust.works 36 points 3 weeks ago (1 children)

Literally everyone knows that AI is racist. They did this while fully understanding and accepting the implications of their actions. Surely that should count toward a higher fine.

[–] RamblingPanda@lemmynsfw.com 15 points 3 weeks ago (1 children)

I doubt it. My boss has a massive AI boner and he doesn't know shit about anything AI. I really don't think he's aware of this or other issues.

[–] SolacefromSilence@fedia.io 7 points 3 weeks ago (1 children)

Your boss is a cop, right? They are the only ones where ignorance of the law is an excuse not to follow it.

[–] RamblingPanda@lemmynsfw.com 2 points 3 weeks ago (1 children)

My friend, not every AI rates people for their skin tone.

[–] Gullible@sh.itjust.works 8 points 3 weeks ago (1 children)

Pretty sure every single one since Tay has been fairly racist. They get around that fact in many modern AIs by forcing subject changes but it’s still fairly racist on the backend. Early “jailbreaking” proved as much.

[–] RamblingPanda@lemmynsfw.com 4 points 3 weeks ago (1 children)

My coding AI is bullshit and almost every Kotlin, SQL, Java or whatever snippet is a hallucination, but I'm pretty sure it's not racist.

Not all AIs are made for the topic here. But all suffer from the same issues. Shit in, shit out.

I mean, it's operating in a domain where racism doesn't come up nearly as often, but I'm pretty sure if you managed to get an appropriate edge case it would end up at least as racist as the average StackOverflow user, which is to say more racist than we should be comfortable accepting uncritically.

[–] wizardbeard@lemmy.dbzer0.com 34 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

Just another scheme to abstract away responsibility for systemic biases/oppression. Now you don't even have to take responsibility for your (potentially small) part in the end result, because you can just blame it on the computer!

Whatever happened to this 1979 IBM presentation?

I take it to mean that the people pushing requirements for a program/system, the people implementing it, and the people utilizing it all hold a certain measure of responsibility for what is done with it.


My "most dangerous" creation is automation for certain employee onboarding and separation tasks, relating to their network sign in account and email. It is entirely triggered by data from our HR/payroll system, making the HR and each person's manager responsible for the input and results.

The only part that "makes decisions" that might differ from the input is the "sanitizing" of names for email address generation. I use a .Net library built into Windows to "normalize" accented characters like ö.

I'm comfortable leaning on Microsoft's "normalization" procedure. Better than rolling my own for an edge case that is rare at my workplace. End-users can assign their own "preferred name" in the HR system, and HR can modify the "legal name" values as needed. Lastly, anyone can request their email address to be changed to just about whatever they want at our helpdesk's discretion.
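
Roughly, the idea is something like this (sketched in Kotlin with java.text.Normalizer rather than the actual .Net call, and with made-up names, purely to illustrate):

```kotlin
import java.text.Normalizer

// Illustrative sketch only: the real automation uses the .Net normalization API.
// NFD splits an accented character into its base letter plus combining marks,
// so "ö" becomes "o" followed by a combining diaeresis that can then be stripped.
fun sanitizeForEmail(displayName: String): String {
    val decomposed = Normalizer.normalize(displayName, Normalizer.Form.NFD)
    return decomposed
        .replace(Regex("\\p{InCombiningDiacriticalMarks}+"), "") // drop the accent marks
        .lowercase()
        .replace(Regex("[^a-z0-9 ]"), "")  // drop apostrophes, hyphens, etc.
        .trim()
        .replace(Regex("\\s+"), ".")       // spaces between name parts become dots
}

fun main() {
    println(sanitizeForEmail("Jörg O'Brien"))    // jorg.obrien
    println(sanitizeForEmail("Ángela da Silva")) // angela.da.silva
}
```

Dropping apostrophes and hyphens outright is a choice, not a requirement; the point is just that the normalization step is well-trodden library code rather than something I hand-rolled.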

Employee separation processing happens at end of day, with a manual process that HR can kick off for "high risk" ones that must be immediate (like security escorted someone out). There is a metric shit ton of planning put into the aging off of different parts of the employee data, with considerations for short contract renewal gaps, ability to pull critical data from separated accounts within a reasonable timeframe (and with appropriate approvals), and various legal requirements we're beholden to.

Fun fact for separation calls through Teams from HR to remote workers: a user who has been disabled in AD and Entra, who has had their Azure/Entra/365 access tokens revoked, and whose password has been changed... can continue an active video/voice call in Teams that started before they were "locked out". They can't access or create any other text chats, file sharing, or even the text chat for the existing call though. Makes it easy for HR to have access removed during the call. The user will still have access to any local content on their machine for as long as they stay logged in, but it's at least something (especially combined with a default block on USB storage to help prevent data exfiltration). Please test this yourself before relying on it though, as Microsoft changes shit with Azure daily.


Anyway... the point is, this is shit I thought out, along with a ton of other considerations. I know for certain how my code reacts to things, and I am comfortable with all the potential outcomes even when things go wrong.

When shit goes wrong with it, I openly and clearly point out where it's an issue with input data from HR. When it is a problem with the code, I openly and clearly take responsibility as I wrote the code. I never go "it's the computer's fault". Even if it is some problem with Microsoft's code I hook into, it is my fault for not fully understanding what I told the code to do.

I truly have a hard time comprehending just how comfortable all these people are with the black box nature of AI, especially for such important things like fucking tenant screening.

[–] dgerard@awful.systems 17 points 3 weeks ago

sorry that sign's been updated

[–] dgerard@awful.systems 12 points 3 weeks ago (2 children)

especially for such important things like fucking tenant screening.

you must understand: this is a feature

[–] Maggoty@lemmy.world 8 points 3 weeks ago* (last edited 3 weeks ago)

This is right up there with that one time^1 they used a computer to collude on rent prices.

1 - It wasn't actually just one time. It was actually a lot of times, each day, for years.

[–] wizardbeard@lemmy.dbzer0.com 5 points 3 weeks ago

I think I'm just in denial that absolutely everyone involved with the creation and use of this would want this outcome, or would be too dense to see this result coming from a mile away.

[–] Laser@feddit.org 6 points 3 weeks ago

Today it seems as if management can never be held accountable either, so the point is moot.

[–] swlabr@awful.systems 20 points 3 weeks ago (2 children)

Filing this away in my “Mao was right, let’s ‘abolish’ the landlords” folder

[–] dgerard@awful.systems 10 points 3 weeks ago

Mao just read his Adam Smith

[–] TankovayaDiviziya@lemmy.world 3 points 3 weeks ago

A broken clock is right twice a day.

[–] KillerTofu@lemmy.world 19 points 3 weeks ago (1 children)
[–] JakenVeina@lemm.ee 28 points 3 weeks ago (1 children)

Good news is it wasn't JUST a $2.28 million fine; they're now banned from using this AI scoring system for 5 years.

Of course, it'd be far better if it were a permanent ban.

[–] adarza@lemmy.ca 21 points 3 weeks ago (1 children)

give their devs a couple of hours and they'll have a totally different (but not really that different) system, and business as usual will continue.

[–] Mirshe@lemmy.world 5 points 3 weeks ago

Yup. "We changed a few buttons and gave the UI a new look so it's totally a brand new product and not the same thing".

[–] peto@lemm.ee 17 points 3 weeks ago* (last edited 3 weeks ago)

"I learned it by watching you!" - The AI (probably)