HedyL

joined 2 years ago
[–] HedyL@awful.systems 5 points 13 hours ago (1 children)

As I've pointed out earlier in this thread, manipulating and controlling people is probably fairly easy for someone devoid of empathy and a conscience. Most scammers and cult leaders appear to operate from similar playbooks, and it is easy to imagine how these techniques could be incorporated into an LLM (whether intentionally or unintentionally, as the training data is probably full of examples). That doesn't mean the LLM is in any way sentient, though. Nor does it imply there is no danger. At risk are, on the one hand, psychologically vulnerable people and, on the other, people who are too easily convinced that this AI is a genius and will soon be able to do all the brainwork in the world.

[–] HedyL@awful.systems 3 points 14 hours ago

Still wondering what really happened here. A dark pattern in the app? Or some kind of technical glitch? If it was a dark pattern, has it been changed since then? Has anybody posted screenshots or a video of the steps users need to take to make their chats public? I'm most definitely not going to install the app myself just to try it out.

[–] HedyL@awful.systems 5 points 1 day ago

These systems are incredibly effective at mirroring whatever you project onto them back at you.

Also, it has often been pointed out that toxic people (from school bullies and domestic abusers up to cult leaders and dictators) appear to operate from similar playbooks. Of course, this has been reflected in many published works (both fictional and non-fictional) and can also be observed in real time on social media, online forums etc. Therefore, I don't think it is surprising when a well-trained LLM "picks up" similar strategies (this is another reason - besides energy consumption - why I avoid using chatbots "just for fun", by the way).

Of course, "love bombing" is a key tool employed by most abusers, and chatbots appear to be particularly good at doing this, as you pointed out (by telling people what they want to hear, mirroring their thoughts back to them etc.).

[–] HedyL@awful.systems 2 points 2 days ago (1 children)

Some of the comments on this topic remind me a bit of the days when people insisted that Google could only ever be the “good guy” because Google had been sued by big publishing companies in the past (and the big publishers didn't look particularly good in some of these cases). So now, conversely, some people seem to assume that Disney must always be the only “bad guy”, no matter what the other side does (and whom else the other side has harmed besides Disney).

[–] HedyL@awful.systems 14 points 3 days ago (4 children)

I guess the main question here is: Would their business model remain profitable even after licensing fees to Disney and possibly a lot of other copyright holders?

[–] HedyL@awful.systems 13 points 4 days ago

From what I've heard, it's often also the people tasked with ghostwriting the LinkedIn posts of the members of the C-suite, among other things (while not necessarily being highly paid/high in the pecking order themselves).

[–] HedyL@awful.systems 7 points 1 week ago

In the past, people had to possess a degree of criminal energy to become halfway convincing scammers. Today, a certain amount of laziness is enough. I'm really glad that, in at least one place, there are now serious consequences for this.

[–] HedyL@awful.systems 7 points 1 week ago* (last edited 1 week ago)

This is just naive web crawling: Crawl a page, extract all the links, then crawl all the links and repeat.

It's so ridiculous - supposedly these people have access to a super-smart AI (which is supposedly going to take all our jobs soon), but that AI can't even tell them which pages are worth scraping multiple times per second and which are not. Instead, they regularly appear to kill their hosts like maladapted parasites. It's probably not surprising, but still absurd.

Edit: Of course, I strongly assume that the scrapers don't use the AI in this context (I guess they only used it to write their code based on old Stackoverflow posts). Doesn't make it any less ridiculous though.
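For anyone unfamiliar with what "naive web crawling" means here, a minimal sketch of the crawl-extract-repeat loop looks something like the following. To keep it self-contained, it runs against a small hypothetical in-memory "web" (the `PAGES` dict and the crude regex link extractor are illustrative assumptions, not anyone's actual scraper code):

```python
import re
from collections import deque

# Hypothetical in-memory "web": URL -> HTML body (stands in for HTTP fetches).
PAGES = {
    "/": '<a href="/a">a</a> <a href="/b">b</a>',
    "/a": '<a href="/">home</a> <a href="/b">b</a>',
    "/b": '<a href="/a">a</a>',
}

def extract_links(html):
    """Pull href targets out of anchor tags (crude, regex-based)."""
    return re.findall(r'href="([^"]+)"', html)

def naive_crawl(start):
    """Breadth-first crawl: fetch a page, queue every link, repeat.

    Note everything that is missing: no robots.txt check, no rate
    limiting, no judgment about which pages are cheap or expensive
    to serve - exactly the host-killing behaviour described above.
    """
    seen, order = set(), []
    queue = deque([start])
    while queue:
        url = queue.popleft()
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        order.append(url)
        queue.extend(extract_links(PAGES[url]))
    return order

print(naive_crawl("/"))  # -> ['/', '/a', '/b']
```

The only "intelligence" in this loop is a deduplication set; deciding what is actually worth fetching, and how often, would have to be added deliberately.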

[–] HedyL@awful.systems 8 points 1 week ago* (last edited 1 week ago) (3 children)

Even if it's not the main topic of this article, I'm personally pleased that RationalWiki is back. And if the AI bots are now getting the error messages instead of me, then that's all the better.

Edit: But also - why do AI scrapers request pages that show differences between versions of wiki pages (or perform other similarly complex requests)? What's the point of that anyway?

[–] HedyL@awful.systems 10 points 2 weeks ago

Under the YouTube video, somebody just commented that they believe the majority of people are going to accept AI slop in the end, because that's just how people are. Maybe they're right, but it seems to me that sometimes the most privileged people are the ones most impressed by form over substance, and this seems to be the case with AI at the moment. I don't think this necessarily applies to the population as a whole, though. Whether oligopolistic providers such as Google might eventually leave people with no other choice, by making reliable search results almost unreachable, is another matter.

[–] HedyL@awful.systems 6 points 2 weeks ago

I'm not surprised that this feature (which was apparently introduced by Canva in 2019) is AI-based in some way. It was just never marketed as such, probably because in 2019, AI hadn't become a common buzzword yet. It was simply called “background remover” because that's what it does. What I find so irritating is that these guys on LinkedIn not only think this feature is new and believe it's only possible in the context of GenAI, but apparently also believe that this is basically just the final stepping stone to AI world domination.

[–] HedyL@awful.systems 22 points 2 weeks ago (3 children)

This somehow reminds me of a bunch of senior managers in corporate communications on LinkedIn who got all excited over the fact that with GenAI, you can replace the background of an image with something else! That's never been seen before, of course! I'm assuming that in the past, these guys could never be bothered to look into tools as widespread as Canva, where a similar feature had been present for many years (before the current GenAI hype, I believe, even if the feature may use some kind of AI technology - I honestly don't know). Such tools are only for the lowly peasants, I guess - and quite soon, AI is going to replace all the people who know where to click to access a feature like "background remover", anyway!
