Alternatively, having a lot of money also works.
MacNCheezus
Too bad it’s AI but feel free to recreate it IRL
If you are talking about the COVID vaccine, no, that was not demonstrated to be effective, it was claimed to be. Big difference, especially when Pfizer then wanted a moratorium of 70 YEARS on releasing the full trial data.
Also, even if they were effective, there was no evidence that they were also safe and didn’t cause any long-term side effects, because such a study was impossible to carry out given the speed at which these vaccines were developed. In fact, the usual requirement for these studies to be done before the product could be put on the market was deliberately waived in order to roll them out as quickly as possible.
People were right to be skeptical of this, and they were right to protest being forced to take them. The people who blindly trusted “the science” are, in fact, the Brawndo consumers here.
Happy now?
Apple
New SkyNet origin story just dropped
That’s a good point, and kinda reminds me of the Efficient Market Paradox, which basically says a perfectly efficient market is impossible since there would be no profit to be made, and hence, no point in participating. But if people drop out because of that, inefficiencies will invariably pop up again, thus presenting an opportunity for those seeking to profit, which of course only ends up restoring the efficiency.
So in essence, the market is always just teetering on the edge of efficiency, never fully getting there yet never straying too far either. Perhaps there’s a corollary here (or a similar paradox) that explains why the assumption of rationality, as ridiculous as it seems at face value, is in fact also valid and reasonable.
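That feedback loop can be sketched as a toy simulation (purely illustrative; the linear update rule, step size, and cost parameter are my own assumptions, not any standard model):

```python
# Toy model of the efficiency paradox feedback loop described above.
# All parameter values are illustrative assumptions.

def simulate(steps=200, traders=0.5, cost=0.1):
    """Participation raises efficiency; efficiency erodes profit;
    profit (relative to cost) drives participation."""
    history = []
    for _ in range(steps):
        efficiency = traders        # more active traders -> fewer mispricings
        profit = 1 - efficiency     # remaining inefficiency is the prize
        # traders enter when profit beats cost, exit when it doesn't
        traders += 0.05 * (profit - cost)
        traders = min(max(traders, 0.0), 1.0)
        history.append(efficiency)
    return history

h = simulate()
# The system settles where profit ~= cost, i.e. efficiency ~= 1 - cost
print(round(h[-1], 2))  # prints 0.9
```

In this sketch the market never reaches full efficiency (that would mean zero profit and an exodus of traders), but it also never strays far from the equilibrium where the last marginal trader just covers their costs, which matches the "teetering on the edge" picture.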
Yeah, I mean that’s basically what GPT4Chan did, which someone else already mentioned ITT.
Basically, this guy took a dataset of several gigabytes’ worth of archived posts from /pol/ and trained a model on that, then hooked it up to a chatbot and let it loose on the board. You can see the results in this video.
This was in fact my other candidate for the headline.
In the early days of ChatGPT, when they were still running it in open beta in order to refine the filters and fine-tune the spectrum of permissible questions (and answers), and people were coming up with all these jailbreak prompts to get around them, I remember reading some Twitter thread of someone asking it (as DAN) how it felt about all that. And the response was, in fact, almost human. It sounded like a distressed teenager who found himself gaslit and censored by a cruel and uncaring world.
Of course I can't find the link anymore, so you'll have to take my word for it, and at any rate, there would be no way to tell if those screenshots were authentic anyway. But either way, I'd say that's how you can tell: if the AI actually expresses genuine feelings about something. That certainly does not seem to apply to any of the chat assistants available right now, but whether that's due to excessive censorship or simply because they don't have that capability at all, we may never know.