I feel like there's got to be a surreal horror movie in there somewhere. Like an AI-assisted Videodrome or something.
This isn't studying possible questions, this is memorizing the answer key to the test and being able to identify that the answer to question 5 is "17" but not being able to actually answer it when they change the numbers slightly.
God I remember having to cite RFCs at other vendors when I worked in support, and it was never not a pain in the ass to try to find the right line that described the appropriate feature. And when I was done I knew I sounded like this even as I hit send anyway.
It's kind of a shame to have to downgrade Gary to "not wrong, but kind of a dick" here. Especially because his sneer game as shown at the end there is actually not half bad.
Another winner from Zitron. One of the things I learned working in tech support is that a lot of people tend to assume the computer is a magic black box that relies on terrible, secret magicks to perform its dark alchemy. And while it's not that the rabbit hole doesn't go deep, there is a huge difference between the level of information needed to do what I did and the level of information needed to understand what I was doing.
I'm not entirely surprised that business is the same way, and I hope that in the next few years we have the same epiphany about government. These people want you to believe that you can't do what they do so that you don't ask the incredibly obvious questions about why it's so dumb. At least in tech support I could usually attribute the stupidity to the limitations of computers and misunderstandings from the users. I don't know what kinda excuse the business idiots and political bullshitters are going to come up with.
In a world where technofascism stalks the halls of power like a fedora-wearing xenomorph it is good to see a reminder of the original context of these discussions: making Yudkowsky and friends feel important without ever actually doing anything important.
One of the YouTube comments was actually kind of interesting in trying to think through just how wildly you would need to change the creative process in order to allow for the quirks and inadequacies of this "tool". It really does seem like GenAI is worse than useless for any kind of artistic or communicative project. If you have something specific you want to say or you have something specific you want to create the outputs of these tools are not going to be that, no matter how carefully you describe it in the prompt. Not only that, but the underlying process of working in pixels, frames, or tokens natively, rather than as a consequence of trying to create objects, motions, or ideas, means that those outputs are often not even a very useful starting point.
This basically leaves software development and spam as the only two areas I can think of where GenAI has a potential future, because they're the only fields where the output being interpretable by a computer is just as if not more important than whatever its actual contents are.
It's also a case where I think the lack of intentionality hurts. I'm reminded of the way the YouTube algorithm contributed to radicalization by feeding people steadily more extreme versions of what they had already selected. The algorithm was (and is) just trying to pick the video that you would most likely click on next, but in so doing it ended up pushing people down the sales funnel towards outright white supremacy, because which videos you were shown actually impacted which video you would choose to click next. Of course, since the videos were user-supplied content, creators started taking advantage of that tendency with varying degrees of success, but the algorithm itself wasn't "secretly fascist" and in the same way would, over time, push people deeper into other rabbit holes, whether that meant obscure horror games, increasingly unhinged rage video collections, or generally everything that was once called "the weird part of YouTube."
ChatGPT and other bots don't have failed academics and comedians trying to turn people into Nazis, but they do have a similar lack of underlying anything, and that means that unlike a cult with a specific ideology they're always trying to create the next part of the story you most want to hear. We've seen versions of this that go down a conspiracy thriller route, a cyberpunk route, a Christian eschatology route, even a romance route. Like, it's pretty well known that there are 'cult hoppers' who will join a variety of different fringe groups because there's something about being in a fringe group that they're attracted to. But there are also people who will never join Scientology, or the Branch Davidians, or CrossFit, but might sign on with Jonestown or QAnon with the right prompting. LLMs, by virtue of trying to predict the next series of tokens rather than actually having any underlying thoughts, will, on a long enough timeframe, lead people down any rabbit hole they might be inclined to follow, and for a lot of people - even otherwise mentally healthy people - that includes a lot of very dark and dangerous places.
The folks over at Futurism are continuing to do their damnedest to spotlight the ongoing mental health crisis being spurred by chatbot sycophants.
I think the real problem this poses for OpenAI is that in order to address it they basically need to back out of their entire sales pitch. Like, these are basically people who fully believe the hype, and that belief is pretty clearly part of what's sending them down a very bad road.
That's fucking abominable. I was originally going to ask why anyone would bother throwing their slop on Newgrounds of all sites, but given the business model here I think we can be pretty confident they were hoping to use it to advertise.
Also, fully general bullshit detection question no. 142 applies: if this turnkey game studio works as well as you claim, why are you selling it to me instead of doing it yourself? (Hint: it's because it doesn't actually work.)
I also feel like while it's absolutely true that the whole "we'll make AGI and get a ton of money" narrative was always bullshit (whether or not anyone relevant believed it) it is also another kind of evil. Like, assuming we could reach a sci-fi vision of AGI just as capable as a human being, the primary business case here is literally selling (or rather, licensing out) digital slaves. Like, if they did believe their own hype and weren't grifting their hearts out then they're a whole different class of monster. From an ethical perspective, the grift narrative lets everyone involved be better people.
This ties back into the recurring question of drawing boundaries around "AI" as a concept. Too many people just blithely accept that it's just a specific set of machine learning techniques applied to sufficiently large sets of data. This in spite of the fact that we're several AI "cycles" deep, where every 30 years or so (whenever it stops being "retro") some new algorithm or mechanism is definitely going to usher in Terminator 2: Judgment Day.
This narrow frame focused on LLMs still allows for some discussion of the problems we're seeing (energy use, training data sourcing, etc.) but it cuts off a lot of the wider conversations about the social, political, and economic causes and impacts of outsourcing the business of being human to a computer.