wizardbeard

joined 2 years ago
[–] wizardbeard@lemmy.dbzer0.com 2 points 3 hours ago

Yeah, that kind of comment is going to fly over the head of a lot of stupid people.

[–] wizardbeard@lemmy.dbzer0.com 10 points 14 hours ago (2 children)

I'm not shedding any tears for the companies that failed to do their due diligence in hiring, especially not ones involved in AI (seems most were) and involved with Y Combinator.

That said, unless you want to get into a critique of capitalism itself, or start getting into whataboutism regarding celebrity executives like a number of the HN comments do, I don't have many qualms calling this sort of thing unethical.

This whole thing is flying way too close to the "not debate club" rule for my comfort already, but I wrote it so I may as well post it.

Working multiple jobs at a time, or not giving 100% for your full scheduled hours, is an entirely different beast than playing some game of "I'm going to get hired at literally as many places as possible, lie to all of them, not do any actual work at all, and then see how long I can draw a paycheck while doing nothing".

Like, get that bag, but ew. It's a matter of intent and of scale.

I can't find anything indicating that the guy actually provided anything of value in exchange for the paychecks. Ostensibly, employment is meant to be a value exchange.

Most critically for me: I can't help but hurt some for all the people on teams screwed over by this. I've been in too many situations where even getting a single extra pair of hands on a team was a heroic feat. I've seen the kind of effect it has on a team that's trying not to drown when the extra bucket to bail out the water is instead just another hole drilled into the bottom of the boat. That sort of situation led directly to my own burnout, which I'm still not completely recovered from nearly half a decade later.

Call my opinion crab bucketing if you like, but we all live in this capitalist framework, and actions like this have human consequences, not just consequences on the CEO's yearly bonus.

[–] wizardbeard@lemmy.dbzer0.com 12 points 15 hours ago (6 children)

Get your popcorn folks. Who would win: one unethical developer juggling "employment trial periods", or the combined interview process of all Y Combinator startups?

https://news.ycombinator.com/item?id=44448461

Apparently one Indian dude managed to crack the YC startup interview game and has been juggling full-time employment at multiple of them simultaneously for at least a year, getting fired from each as they slowly realize he isn't producing any code.

The cope from the hiring interviewers is so thick you could eat it as a dessert. "He was a top 1% in the interview." "He was a 10x." We didn't do anything wrong, he was just too good at interviewing and unethical. We got hit by a mastermind; we couldn't possibly have found what the public is finding so quickly.

I don't have the time to dig into the threads on X, but even this ask HN thread about it is gold. I've got my entertainment for the evening.

Apparently he was open about being employed at multiple places on his LinkedIn. Someone in that HN thread says his resume openly lists him hopping between 12 companies in as many months, and his GitHub activity is clearly nothing but automated commits.

Someone needs to run with this one. Please. Great look for the Y Combinator ghouls.

[–] wizardbeard@lemmy.dbzer0.com 14 points 17 hours ago* (last edited 17 hours ago) (1 children)

Pretty sure they advertised the first season of that arc directly stating that it was a coma arc.

[–] wizardbeard@lemmy.dbzer0.com 3 points 17 hours ago* (last edited 17 hours ago)

Wow. The first 24 seconds of this 1 minute teaser are just clips of major spoilers of the ending of the first season. Remember all these characters that died? Let's watch their brains splatter out again. Enjoy the gore!

That's definitely a choice. I guess that's one way to keep people from expecting this to feature more content with the same characters from season 1. Still think it would have been better to title it something different than Edgerunners 2 considering it's not likely to tie in with that story.

I wonder when this is going to be set. Before 2077 makes the most sense. If it's after then they'd have to canonize at least some aspects of one of the game endings.

[–] wizardbeard@lemmy.dbzer0.com 24 points 17 hours ago (1 children)

Since when have rights holders ever been held responsible for the actions of online players?

The only instances I can think of are when games exclusively target minors, like Roblox.

And in what crazy world would those scant responsibilities carry over to community servers after official support was ended?

What a cop out.

[–] wizardbeard@lemmy.dbzer0.com 14 points 22 hours ago (1 children)

The problem with this idea is that the environmental impact is quantifiable. We can talk about it with hard numbers.

The things you've identified as a hard front line for us all to rally around have "easy" counters too. Plus there's very little "hard numbers" or "indisputable fact" behind them to rally around.

For the record, I agree with your points. I agree with these dangers. I agree that these should be easy points to rally around.

But so should the environmental impact, and it isn't. So you should probably expect that your personal "rally points" won't work for absolutely everyone else either.


So, the devil's advocate/steelmanning/whatever you want to call it. Here are some example counterpoints to what you seem to think is inarguable and strong enough to stand on its own. I don't agree with this shit, I'm not looking to debate club this shit, I'm just trying to demonstrate that these points are just as "assailable" as the ecological/environmental impact.

It's destroying art.

How in the hell do you even begin to quantify this? There's also the general counter that it expands access to artistic expression for those without formal training (🤮, but it's a point they keep leaning on).

It's destroying Hollywood.

A lot of people would cheer for that. Hollywood's corporate bullshit and overwhelming impact on societal viewpoints has had horrible effects on the world for fucking decades. It is not some bastion of creative freedom and expression, and hasn't been for ages.

It's removing jobs from the workforce

Do we actually have numbers on that? I see it being used as an excuse plenty, but the economy was already in (or headed for) the shitter before the current AI fad. The suits will use whatever excuses are convenient for their already made decisions anyway.

it's concentrating power and money.

See: almost every bit of progress forever, especially the last few decades. No ethical consumption under capitalism, etc etc.

it produces only soulless slop.

When we have people using it as therapy, clearly some population is able to connect with it in a way that feels personal (or soul-having) to them (🤮). That's crazy as shit and terrifying, but again, it's an example that there's a counterpoint to this "fact." There are also the arguments that meaning is found/made by the consumer of a piece rather than its creator.


Look, this is just a lot of words for me to say that I feel like your post against this particular bugbear could also be made about most of the points you feel are hard solid facts.

You make good points, but we should probably be steering away from more internal division.

[–] wizardbeard@lemmy.dbzer0.com 9 points 23 hours ago* (last edited 23 hours ago) (3 children)

Have any of the big companies released a real definition of what they mean by AGI? Because I think the meme potential of these leaked documents is being slept on.

The definition of AGI agreed on between Microsoft and OpenAI in 2023 is just: AGI is achieved when OpenAI builds a product that generates $100B in profits.

Seems like a fun way to shut down all the low quality philosophical wankery. Oh, AGI? You just mean $100B in profit, right? That's what your lord and savior Altman means.

Maybe even something like a cloud to butt browser extension? AGI -> $100B in OpenAI profits

"What $100B in OpenAI Profits Means for the Future of Humanity"
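A "cloud to butt"-style extension is basically just a find-and-replace over the page's text nodes. A minimal sketch in plain JavaScript (the function and pattern names are mine, not from any existing extension):

```javascript
// Replace the marketing term with the contractual definition.
const AGI_PATTERN = /\bAGI\b/g;
const REPLACEMENT = "$100B in OpenAI profits";

function deAgify(text) {
  // Use a replacer function so the "$" in the replacement isn't
  // interpreted as a special pattern like $1 or $&.
  return text.replace(AGI_PATTERN, () => REPLACEMENT);
}

// In an actual extension this would run as a content script,
// walking only text nodes so the page's markup stays intact.
if (typeof document !== "undefined") {
  const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  let node;
  while ((node = walker.nextNode())) {
    node.nodeValue = deAgify(node.nodeValue);
  }
}
```

The word-boundary `\b` keeps it from mangling words that merely contain "AGI", and the tree walker is the standard way these joke extensions avoid breaking attributes and scripts.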

I'm sure someone can come up with something better, but I think there's some potential here.

Especially because this is just fucking punitive to them. They have enough money that this wouldn't materially impact them.

There's a setting in the user profile to block/not block stuff tagged NSFW.

59
Uphill, both ways! (lemmy.dbzer0.com)
submitted 1 week ago* (last edited 1 week ago) by wizardbeard@lemmy.dbzer0.com to c/reactionmemes@lemmy.dbzer0.com
 

Cropped from [EastCoastitNotes], shared by @stamets@lemmy.world in this post: https://lemmy.world/post/31818124

30
submitted 4 weeks ago* (last edited 4 weeks ago) by wizardbeard@lemmy.dbzer0.com to c/parenting@lemmy.world
 

My daughter is a little over two, and through well meaning family and friends we have more toys than we know what to do with.

My wife keeps buying what are essentially (fancy looking) big boxes and just dumping everything in them. Love my wife, but that's not working; it's just hiding some of the mess in a box.

We end up with these hardly ever opened boxes full of unorganized piles of toys that we end up having to dig through to find anything specific, and the toys that my daughter is actively using just end up scattered around the floor so they don't disappear into the box dimension.

Every once in a while my daughter opens and digs through the boxes and dumps half the contents on the floor anyway (not like she can see specific things to grab what she wants) and then we just kind of arbitrarily choose some of it to put back in the box and a new combination of mess to leave out.

Unfortunately we have another baby on the way, so I'm probably not getting my wife to let us toss any of it right now.

I'm leaning towards cubby shelves with individual bins for different "types" of toys like her daycare does, but I wanted to hear what strategies other parents tried, and what has and hasn't worked.

 

This blog post has been reported on and distorted by a lot of tech news sites using it to wax delusional about AI's future role in vulnerability detection.

But they all gloss over the critical bit: in fairly ideal circumstances where the AI was being directed to the vuln, it had only an 8% success rate, and a whopping 28% false positive rate!

 
 

Machine autotranslation of a French comic from https://lemm.ee/post/64691257

 

Cross post of https://thelemmy.club/post/27042027

AAAARRRRROOOOOOOOOOO

 

Came like this, they absolutely knew:

7
submitted 2 months ago* (last edited 2 months ago) by wizardbeard@lemmy.dbzer0.com to c/music@lemmy.world