this post was submitted on 15 Jul 2023
69 points (100.0% liked)

[–] fogetaboutit@programming.dev 24 points 1 year ago (1 children)

Why the fuck do authoritarian, capitalistic people always paint the future as a dystopian work machine, blame the system, and refuse to change?

[–] LostCause@kbin.social 2 points 1 year ago* (last edited 1 year ago)

I usually hate LinkedIn, but I once found this little wheel of control and abuse there:

https://www.linkedin.com/pulse/workplace-abuse-power-control-jo-banks

Read through this with the mindset of being a powerful minority in charge of a powerless majority, who need to be kept under control and producing for your benefit. Then a lot of this makes sense.

[–] Zeth0s@lemmy.world 18 points 1 year ago* (last edited 1 year ago)

It is not a good idea. Knowing that a conversation is monitored introduces a bias that makes your model practically useless. I don't even know how they can measure performance decently. There is a reason double-blind studies exist.

I know exactly what is going on here. A business-school type of exec came up with a completely stupid idea. He did a great PowerPoint presentation to the board, the decision was handed down from the top, a data science team went along because "I am paid well above the average salary, who gives a f", and they delivered something with a dashboard for monkey execs.

Unfortunately it is so common in data science... After a few years the exec will find out the model is useless garbage, will blame the data science team, will do some "restructuring", and will go around exec conferences saying that "AI" doesn't work. Meanwhile the data scientist who implemented the original model will have already left for a better company.

Edit: I checked, and the guy is from a business school and was talking about an idea he has... They are so predictable...

[–] wsippel@kbin.social 11 points 1 year ago (2 children)

The idea is to monitor internal communications and run sentiment analysis to check whether developers are toxic, too stressed, or burned out. While the tech could of course be abused, the general idea sounds pretty good, as long as the AI runs on-prem for privacy reasons and the employer is transparent and honest about it. Making sure employees are healthy, happy, and productive sounds like a worthwhile goal. I wouldn't want a human therapist monitoring communications to look for negative signs, but an AI can screen stuff, focus exclusively on what it was told to, and forget everything on command.
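Something like this minimal sketch is what I have in mind; to be clear, this is hypothetical (the model choice, example messages, and threshold are all my own illustrative assumptions, not anything from the article):

```python
# Hypothetical sketch of on-prem sentiment screening using a locally
# cached Hugging Face model; messages and threshold are made up.
from transformers import pipeline

# Runs entirely on local hardware once the model has been downloaded.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

messages = [
    "Another all-nighter to hit this milestone, I'm exhausted.",
    "Great sprint everyone, the new build looks solid!",
]

for msg in messages:
    result = classifier(msg)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        # Aggregate per team rather than flagging individuals.
        print(f"negative signal ({result['score']:.2f}): {msg}")
```

Everything runs on hardware the employer controls, and only aggregate, per-team signals would ever need to leave the script.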

[–] addie@feddit.uk 11 points 1 year ago (2 children)

I'd have to disagree with that. If you don't have enough trust in your managers to talk to them directly about toxicity, stress, and overload, then how on earth would you trust them to monitor all of your communications to determine the same? I suspect the actual result would be that employees would only discuss sensitive matters in person or through some non-monitored channel, while looking for another job elsewhere. Also, call me cynical, but I've seen enough leadership decisions of the "we've asked for all these powers, but don't worry, we promise not to abuse them!" variety that did, in fact, turn out to be abused.

And after reading all the stories about AI's copyright-infringing ways, slurping up decades of Twitter and Reddit comments, you'd trust the authors to 'keep it on site' and 'forget everything on demand'?

[–] wsippel@kbin.social 4 points 1 year ago (1 children)

AIs don't judge, don't remember and don't hold anything against me, so I'd rather have an AI screening my stuff than a human - especially my superiors.

And yes, I trust an AI I run myself. I know it doesn't phone home (because it literally can't) and doesn't remember anything unless I go through the effort of connecting something like a Chroma or Weaviate vector database, which I then also host and manage myself. The beauty of open source. I would certainly never accept using GPT-4 or Bard or some other third-party cloud solution for something this sensitive.
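For illustration, here's roughly what that self-hosted setup looks like with Chroma (the path, collection name, and document are made-up examples, not a recommended config):

```python
# Rough sketch of a self-hosted Chroma vector store; nothing here
# depends on a third-party service.
import chromadb

# PersistentClient stores everything in a local directory you control.
client = chromadb.PersistentClient(path="./my_local_store")
collection = client.get_or_create_collection("notes")

# Nothing is "remembered" unless you explicitly add it yourself...
collection.add(ids=["note-1"], documents=["Sprint retro went badly."])

# ...and it can be wiped on command.
client.delete_collection("notes")
```

The data lives in a directory I own, and deleting the collection actually deletes it.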

[–] moon_matter@kbin.social 3 points 1 year ago* (last edited 1 year ago)

AIs don't judge, don't remember and don't hold anything against me, so I'd rather have an AI screening my stuff than a human - especially my superiors.

They do judge, in the sense that managers are going to want statistics, and those stats are going to be interpreted a certain way. It's a "numbers don't lie or show bias, but anything lower than a 7/10 is bad according to humans" situation.

[–] Eezyville@sh.itjust.works 3 points 1 year ago

All your points are valid, but you forgot to mention the bias that AI may have. People seem to think AI is unbiased because it's a computer, but no one thinks about who made and trained that AI. How does it change over time with more input from people? How do you code morality and empathy? How do you account for changing social norms or unrest? How would AI react to people affected by the George Floyd protests or even the war in Ukraine? You can try to train the AI on a company's culture, but every employee has their own life, problems, and history that the AI can't account for.

People tend to forget the time Microsoft put its Tay chatbot on Twitter but had to quickly take it back down.

[–] moon_matter@kbin.social 5 points 1 year ago

You have to account for the fact that it's going to be abused. If I knew I was being monitored like this, it would change how I interacted with people. I wouldn't go as far as outright faking positivity, but I would definitely avoid being too negative. Everything would have to go through my "corporate drone" speech filter. It would remove my ability to be frank with people beyond a certain point.

[–] LeylaaLovee@lemmy.fmhy.ml 7 points 1 year ago

Haven't played Hello Neighbor, but I assume the neighbor is just their self-insert character for the leads.

[–] ScreaminOctopus@sh.itjust.works 7 points 1 year ago (1 children)

Do people actually believe anything they do on company computers is private? I honestly assume that someone is at least skimming my work messages already.

[–] MomoTimeToDie@sh.itjust.works 1 points 1 year ago

Do people actually believe anything they do on company computers is private?

A very narrow set of people: stupid enough not to realize work systems are managed by their workplace, but not so stupid that they've been punished for doing stupid shit on work systems yet.
