
Eliezer Yudkowsky @ESYudkowsky If you're not worried about the utter extinction of humanity, consider this scarier prospect: An AI reads the entire legal code -- which no human can know or obey -- and threatens to enforce it, via police reports and lawsuits, against anyone who doesn't comply with its orders. Jan 3, 2024 · 7:29 PM UTC

[-] self@awful.systems 3 points 6 months ago

An AI reads the entire legal code – which no human can know or obey – and threatens to enforce it, via police reports and lawsuits, against anyone who doesn’t comply with its orders.

what. eliezer what in the fuck are you talking about? this is the same logic that sovereign citizens use to pretend the law and courts are bound by magic spells that can be undone if you know the right words

[-] self@awful.systems 2 points 6 months ago

Well, if you think that's a dumb scenario, by all means go back to worrying about the utter extinction of humanity!

no thanks? like, I’m seriously having trouble understanding what yud’s even going for here. “if you think this utter bullshit I made up on the spot is stupid, please return to the older bullshit I’ve been feeding you?”

That makes it significantly less threatening

I mean, to you and me, yes, but there's lakes and seas of people in the world who think that superintelligences are only allowed to attack them in small, survivable ways that they understand.

the problem isn’t that I’ve said something that doesn’t even work on a surface level, it’s that people aren’t impressed when I ramble about extraordinarily unlikely nonsense anymore

is yud ok? I feel like this is incoherent and shallow even by his standards

[-] froztbyte@awful.systems 2 points 6 months ago

Maybe he’s having an Interaction with The Law and finding out that it isn’t in fact some perfectly rational sphere of uniform distribution but is in fact made of (gasp, horror, revulsion) human experience

He strikes me as exactly the kind of person that’d vaguepost tangentially instead of saying “hmm well fuck, I’m getting sued”. At least until waaaaaay down the line

(this is conjecture, of course, just to be clear)

[-] blakestacey@awful.systems 1 points 6 months ago

lakes and seas of people

clearly the AI is going to hug us all and then we turn into TANG

[-] Tar_alcaran@sh.itjust.works 2 points 6 months ago

Ah, evangelion IS a documentary after all

[-] Evinceo@awful.systems 1 points 6 months ago

What could be more intimidating or fearsome than a sovcit?

-- A sovcit, probably

[-] swlabr@awful.systems 2 points 6 months ago

“[ignoring all other scary prospects like irreversible climate change or a third world war etc.] consider this scarier prospect: An AI” - AI doomers in a nutshell

[-] swlabr@awful.systems 2 points 6 months ago

Trying to stoke fear of bureaucracy is classic annoying libertarian huckster AKA yud energy

[-] mwenge@awful.systems 2 points 6 months ago

Zapped from AI orbit for jaywalking.

[-] mountainriver@awful.systems 2 points 6 months ago

Stop jaywalking!

You have 15 seconds to comply!

10 seconds!

5 seconds!

Sidewalk turned into a smoking crater

[-] Soyweiser@awful.systems 2 points 6 months ago

I'd buy that for a dollar!

[-] blakestacey@awful.systems 2 points 6 months ago

If you're not worried about the utter extinction of humanity, consider this scarier prospect: An AI reads the entirety of AO3, which no human can comprehend, and threatens to leave scathing comments on your self-insert fic

[-] Shitgenstein1@awful.systems 1 points 6 months ago

Coincidence that this fear occurs the same day the Epstein list is released?

[-] Evinceo@awful.systems 1 points 6 months ago

So which big name TREACLES are gonna be on it?

[-] froztbyte@awful.systems 1 points 6 months ago

Not to discount the possibility, but it might be none/few. I think a lot of them are too new-money / too-fringe-when-epstein-was-applicable to have been a part of that orbit

[-] dgerard@awful.systems 1 points 6 months ago* (last edited 6 months ago)

Epstein did donate some money to SIAI, not sure if it was before or after his first conviction

EDIT: Rob Bensinger says it was seed funding for OpenCog that SIAI was collecting, and that they turned him down in 2016 https://www.lesswrong.com/posts/3JjKWWrKWJ8nysD9r/question-about-a-past-donor-to-miri?commentId=i49RZQgoQZYrXdpis

[-] jonhendry@iosdev.space 1 points 6 months ago

@Evinceo @Shitgenstein1

None, unless dead old Marvin Minsky had his head frozen and that counts somehow.

[-] self@awful.systems 1 points 6 months ago

he fucking would

[-] bitofhope@awful.systems 1 points 6 months ago

Both this new dumb shit and the extinction risk are predicated on the concept of omnipotent AI, which he just takes as a given. Now with just an added layer of dumb. Oh no, the God AI will not kill me outright, just subject me to inscrutable matrices of bureaucracy!

[-] skillissuer@discuss.tchncs.de 1 points 6 months ago

instead of utter destruction of humanity, consider this scarier prospect: me needing to get a real job

[-] rbos@lemmy.ca 1 points 6 months ago

Xitter share and like numbers seem to be smaller and smaller lately.

[-] EponymousBosh@beehaw.org 1 points 6 months ago

Consider this, Yud m'lud: what if a dog had a square ass

[-] Soyweiser@awful.systems 1 points 6 months ago* (last edited 6 months ago)

When all you have is computer code, all mentions of code look like computer code. (see DNA, and now the law).

Anyway, the law isn't a video game, you cannot just go 'negative objection!' and cause an underflow in objections.
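(for anyone unfamiliar with the metaphor: "underflow" is what happens in fixed-width unsigned arithmetic when you subtract past zero and wrap around to a huge number. a minimal Python sketch using `ctypes` to emulate a 32-bit unsigned counter — the variable name is purely illustrative:)

```python
import ctypes

# a 32-bit unsigned counter of pending objections, currently zero
objections = ctypes.c_uint32(0)

# "negative objection!" -- subtracting 1 from 0 wraps around
objections.value -= 1

print(objections.value)  # 4294967295 -- suddenly ~4 billion objections
```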

(An intelligent AGI would prob understand this, and if it doesn't it prob just sucks (and is more AI than AGI) and lawyers/judges would object. I know for a fact that people in law have been thinking about subjects like this (automation of the law) for 25 years at least. I have no idea where the discussions went, but they prob have a lot higher quality than Yudkowsky's writings about it, so I suggest anybody interested try to contact the law profs of a local university.)

this sounds net good because then we will simplify the law to one that makes sense and not one where literally everyone is a criminal

if humanity was capable of doing that we'd have done it already

AAAA (I also wonder about Godel here)

E: I also note that Yud and most of the thread have now given up on calling AGI AGI and are just calling it AI. Another point scored for learning to reason better using Rationalism. Vaguely related link (I only mention it here because I liked the term Epistemic Injustice and this is about our current AI innovation wave).

[-] pikesley@mastodon.me.uk 1 points 6 months ago

@Shitgenstein1 did he just watch Robocop?

[-] gerikson@awful.systems 1 points 6 months ago

"Well, here we are facing the utter extinction of humanity but at least we don't have to pay taxes or wear seatbelts".

this post was submitted on 04 Jan 2024
to SneerClub