Soyweiser

joined 2 years ago
[–] Soyweiser@awful.systems 1 points 16 minutes ago

Im reminded of the cartoon bullets from who framed rodger rabbit.

[–] Soyweiser@awful.systems 2 points 18 minutes ago

LLMs cannot fail, they can only be prompted incorrectly. (To be clear, as I know there will be people who think this is good, I mean this in a derogatory way)

[–] Soyweiser@awful.systems 2 points 42 minutes ago* (last edited 21 minutes ago)

Think this already happened, not this specific bit, but ai involved shooting. Esp considering we know a lot of black people have been falsely arrested due to facial ID already. And with the gestapofication of the USA that will just get worse. (Esp when the police go : no regulations on AI also gives us carte blance. No need for extra steps).

[–] Soyweiser@awful.systems 1 points 55 minutes ago* (last edited 45 minutes ago)

Remember those comments with links in them bots leave on dead websites? Imagine instead of links it sets up an AI to think of certain specific behaviour or people as immoral.

Swatting via distributed hit piece.

Or if you manage to figure out that people are using a LLM to do input sanitization/log reading, you could now figure out a way to get an instruction in the logs and trigger alarms this way.

Or just send an instruction which changes the way it tries to communicate, and have the LLM call not the cops but a number controlled by hackers which pays out to them, like the stories of the A2P sms fraud which Musk claimed was a problem on twitter.

Sure competent security engineering can prevent a lot of these attacks but you know points to history of computers.

Imagine if this system was implemented for Grok when it was doing the 'everything is white genocide' thing.

[–] Soyweiser@awful.systems 3 points 1 hour ago (1 children)

"whats my purpose?"

[–] Soyweiser@awful.systems 19 points 15 hours ago

Are you trying to say here that cold readers do not actually communicate with the spirit realm? Where is your open mind?

[–] Soyweiser@awful.systems 5 points 16 hours ago

Sociopaths

Bit important to note here to people not familiar with the blog posts (now available as a book (in pdf form), because everything must be monetized) that sociopath is meant here as a specific type of person, not a clinical sociopath per se, but more a certain type of person inside the context of the blog post series. So people reacting to it beware.

[–] Soyweiser@awful.systems 10 points 21 hours ago* (last edited 21 hours ago) (1 children)

Think you are misreading the blog post. They did this after the Grok had its white genocide hyperfocus thing. It shows the process of the xAI public github (their fix (??) for Groks hyperfocus) is bad, not that they started it. (There is also no reason to believe this github is actually what they are using directly (would be pretty foolish of them, which is why I could also believe they could be using it))

[–] Soyweiser@awful.systems 7 points 22 hours ago

Cryptocurrencyexecs after reading this : "we have a code red, a code red, the public has figured it out, abort abort!"

[–] Soyweiser@awful.systems 7 points 1 day ago* (last edited 22 hours ago) (1 children)

Doesnt help that there is a group of people who go 'using the poor like ~~biofuel~~ food what a good idea'.

E: Really influential movie btw. ;)

[–] Soyweiser@awful.systems 3 points 1 day ago

No, the guy who did it non-apologized, by going he should have checked the output better. Still an AI user.

[–] Soyweiser@awful.systems 9 points 1 day ago (1 children)

Look, im def on team Murderbot, but when ~~we~~ the AI's start building them I really hope Martha Wells gets some kickbacks at least.

11
submitted 1 week ago* (last edited 1 week ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems
 

Begrudgingly Yeast (@begrudginglyyeast.bsky.social) on bsky informed me that I should read this short story called 'Death and the Gorgon' by Greg Egan as he has a good handle on the subjects/subjects we talk about. We have talked about Greg before on Reddit.

I was glad I did, so going to suggest that more people he do it. The only complaint you can have is that it gives no real 'steelman' airtime to the subjects/subjects it is being negative about. But well, he doesn't have to, he isn't the guardian. Anyway, not going to spoil it, best to just give it a read.

And if you are wondering, did the lesswrongers also read it? Of course: https://www.lesswrong.com/posts/hx5EkHFH5hGzngZDs/comment-on-death-and-the-gorgon (Warning, spoilers for the story)

(Note im not sure this pdf was intended to be public, I did find it on google, but might not be meant to be accessible this way).

 

The interview itself

Got the interview via Dr. Émile P. Torres on twitter

Somebody else sneered: 'Makings of some fantastic sitcom skits here.

"No, I can't wash the skidmarks out of my knickers, love. I'm too busy getting some incredibly high EV worrying done about the Basilisk. Can't you wash them?"

https://mathbabe.org/2024/03/16/an-interview-with-someone-who-left-effective-altruism/

 

Some light sneerclub content in these dark times.

Eliezer complements Musk on the creation of community notes. (A project which predates the takeover of twitter by a couple of years (see the join date: https://twitter.com/CommunityNotes )).

In reaction Musk admits he never read HPMOR and he suggests a watered down Turing test involving HPMOR.

Eliezer invents HPMOR wireheads in reaction to this.

view more: next ›