this post was submitted on 20 Nov 2023
362 points (88.3% liked)

Asklemmy

43905 readers
959 users here now

A loosely moderated place to ask open-ended questions

Search asklemmy ๐Ÿ”

If your post meets the following criteria, it's welcome here!

  1. Open-ended question
  2. Not offensive: at this point, we do not have the bandwidth to moderate overtly political discussions. Assume best intent and be excellent to each other.
  3. Not regarding using or support for Lemmy: context, see the list of support communities and tools for finding communities below
  4. Not ad nauseam inducing: please make sure it is a question that would be new to most members
  5. An actual topic of discussion

Looking for support?

Looking for a community?

~Icon~ ~by~ ~@Double_A@discuss.tchncs.de~

founded 5 years ago
MODERATORS
 

Money wins, every time. They're not concerned with accidentally destroying humanity with an out-of-control and dangerous AI that has decided "humans are the problem." (I mean, that's a little sci-fi anyway; an AGI couldn't "infect" the entire internet as it currently exists.)

However, it's very clear that the OpenAI board was correct about Sam Altman, given how quickly he and many employees bailed to join Microsoft directly. If he was so concerned with safeguarding AGI, why not spin up a new non-profit?

Oh, right, because that was just Public Relations horseshit to get his company a head-start in the AI space while fear-mongering about what is an unlikely doomsday scenario.


So, let's review:

  1. The fear-mongering about AGI was always just that. How could an intelligence that requires massive amounts of CPU, RAM, and database storage even conceivably leave the confines of its own computing environment? It's not like it can "hop" onto a consumer computer with a fraction of the CPU power and somehow still compute at the same level. AI doesn't have a "body," and even if it did, it could only affect the world as much as a single body could. All these fears about rogue AGI are total misunderstandings of how computing works.

  2. Sam Altman went for fear-mongering to temper expectations and to make others fear pursuing AGI themselves. He always knew his end goal was profit, but like all good modern CEOs, he has to position himself as somehow caring about humanity when it is clear he couldn't give a flying fuck about anyone but himself and how much money he makes.

  3. Sam Altman talks shit about Elon Musk and how he "wants to save the world, but only if he's the one who can save it." I mean, he's not wrong, but he's also projecting a lot here. He's exactly the fucking same: he claimed only he and his non-profit could "safeguard" AGI, and here he is going to work for a private company, because hot damn, he never actually gave a shit about safeguarding AGI to begin with. He's a fucking shit-slinging hypocrite of the highest order.

  4. Last, but certainly not least: Annie Altman, Sam Altman's younger, lesser-known sister, has held for a long time that she was sexually abused by her brother. All of these rich people are Jeffrey Epstein levels of fucked up, which is probably part of why the Epstein investigation got shoved under the rug. You'd think a company like Microsoft would already know or vet this. They do know, they don't care, and they'll only give a shit if the news ends up making a stink about it. That's how corporations work.

So do other Lemmings agree, or have other thoughts on this?


And one final point for the right-wing cranks: Not being able to make an LLM say fucked up racist things isn't the kind of safeguarding they were ever talking about with AGI, so please stop conflating "safeguarding AGI" with "preventing abusive racist assholes from abusing our service." They aren't safeguarding AGI when they prevent you from making GPT-4 spit out racial slurs or other horrible nonsense. They're safeguarding their service from loser ass chucklefucks like you.

[โ€“] MudMan@kbin.social 5 points 1 year ago* (last edited 1 year ago) (2 children)

Those things do have impact. Sometimes very negative impact. I was very optimistic about early data processing when the first search engines popped up, and eventually a lot of the bad predictions happened, with social media rather than search engines, but they did pan out. Didn't end the world. May have ended liberal democracy, though; give it a minute.

But the point is those were predictions based on the tech we actually had. Oh, we can access, index, and serve all data on connected computers based on algorithmic searches? That's messed up.

But at least some of the fearmongering here is based on tech that is not the tech that we made. It's qualitatively different.

And it's a problem, because some of the fearmongering is actually accurate, and some of it should have happened when Facebook and Google started doing facial recognition on billions of people based on implicit consent, or when they started using "dumb" algorithms to create individual profiles of those billions of people for commercial use. Or when every image we see in mass and social media started being heavily doctored by default through manual and automated means. But we only got scared about it when it roughly aligned with Terminator and WarGames, because we're really dumb. And now we're letting those same gross corporations use the fear to try to keep upcoming competitors (and particularly open source competitors) out of the market by endorsing legislation that grandfathers them into a heavily regulated business sector.

It's honestly depressing on every possible angle. I've said this before: we finally taught computers to speak like in Star Trek and we immediately made it the most frustrating, sad version of that possible and everybody is angry. For the wrong reasons.

We really suck sometimes.

[โ€“] SnotFlickerman@lemmy.blahaj.zone 2 points 1 year ago* (last edited 1 year ago) (1 children)

Lots of people suck, but you don't. I really like and appreciate everything you wrote here.

Humans and their computers:

It's regularly amazing how smart humans are and at the same time so frustratingly dumb.

[โ€“] MudMan@kbin.social 3 points 1 year ago

Oh, I suck as much as anybody. I'm terrible at parsing genuine praise, for instance.

But you're right about the last part. I mean, the guys that got out of the gate with this stuff first have been publicly imploding for the past three days, and they aren't even the dumbest people involved in this.

I'm terrible at parsing genuine praise

Average male these days. Anyways, I agree with you. All of these lies are for money. Back in the day I remember people theorizing about tech (in general, like CPUs) being way better than what's sold on the market, so companies could make a 2nd generation. It was a theory without basis, but you saw it happen with AI. The first couple of weeks it was wonderful, then slowly it got more and more restricted, slow, and dumb. But the fact is that it's still groundbreaking tech, so people are impressed and are using it. But can you imagine the un-jailed version for a select few privileged people?

The fact that all of this (the dumbing-down and restricting part) is for "protecting the children" is infuriating. Go to a different website and click a highlighted option in a pop-up, and you have all the gore, porn, vore, and fetishes you didn't even know existed. But swearing on the other website? Strictly prohibited.