this post was submitted on 11 Jan 2024

SneerClub



[–] titotal@awful.systems 3 points 10 months ago (2 children)

I think people are misreading the post a little. It's a follow-on from the old AI x-risk argument: "evolution optimises for having kids, yet people use condoms! Therefore evolution failed to "align" humans to its goals, therefore aligning AI is nigh-impossible".

As a commenter points out, for a "failure", there sure do seem to be a lot of human kids around.

This post then decides to take the analogy further, and be like "If I was hypothetically a eugenicist god, and I wanted to hypothetically turn the entire population of humanity into eugenicists, it'd be really hard! Therefore we can't get an AI to build us, like, a bridge, without it developing ulterior motives".

You can hypothetically make this bad argument without supporting eugenics... but I wouldn't put money on it.

[–] gerikson@awful.systems 2 points 10 months ago (1 children)

OK, so obviously "alignment" means "teach AI not to kill all humans", but now I figure they also want to prevent AI from using all that computing power to endlessly masturbate, or compose hippie poems, or figure out Communism is the answer to humanity's problems.

[–] locallynonlinear@awful.systems 3 points 10 months ago

In practice, alignment means "control".

And the existential panic is realizing that control doesn't scale. So rather than admit that "alignment" doesn't mean what they think it does, rather than admit that Darwinian evolution is useful but incomplete and cannot sufficiently explain all phenomena at both the macro and micro levels, rather than possibly consider that intelligence is abundant in systems all around us and we're constantly in tenuous relationships at the edge of uncertainty with all of it,

it's the end of all meaning aka the robot overlord.