BlueMonday1984

joined 1 year ago
[–] BlueMonday1984@awful.systems 11 points 3 weeks ago (2 children)

In somewhat lighter news, Fortnite added Darth Vader to the game, and gave him a "conversational AI" to let him talk to players in the voice of James Earl Jones (who I just discovered died last year).

To nobody's surprise, gamers have already gotten the AI Vader swearing and yelling slurs.

[–] BlueMonday1984@awful.systems 9 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

New piece from Brian Merchant: De-democratizing AI, which is primarily about the GOP's attempt to ban regulations on AI, but also touches on the naked greed and lust for power at the core of the AI bubble.

EDIT: Also, that title's pretty clever

[–] BlueMonday1984@awful.systems 20 points 3 weeks ago (10 children)

The Torment Nexus brings us new and horrifying things today - a UN initiative has tried using chatbots for humanitarian efforts. I'll let Dr. Abeba Birhane's horrified reaction do the talking:

this just started and i'm already losing my mind and screaming

Western white folk basically putting an AI avatar on stage and pretending it is a refugee from sudan — literally interacting with it as if it is a “woman that fled to chad from sudan”

just fucking shoot me

Giving my take on this matter, this is gonna go down in history as an exercise in dehumanisation dressed up as something more kind, and as another indictment (of many) against the current AI bubble, if not artificial intelligence as a concept.

[–] BlueMonday1984@awful.systems 4 points 3 weeks ago

It was me, I stole all the sexy robots

[–] BlueMonday1984@awful.systems 10 points 3 weeks ago* (last edited 3 weeks ago) (3 children)

Okay, two separate thoughts here:

  1. Paul G is so fucking close to getting it, Christ on a bike
  2. How the fuck do you get burned by someone as soulless as Sam Altman
[–] BlueMonday1984@awful.systems 40 points 3 weeks ago

Musk says: “At times, I think Grok-3 is kind of scary smart.” Grok is just remixing its training data — but a stochastic parrot is still more reality-based than Elon Musk. [Bloomberg, archive]

If someone roasted me such such surgical precision like that, I'd delete my entire Internet presence out of shame. God damn.

[–] BlueMonday1984@awful.systems 8 points 3 weeks ago (2 children)

That opens you up to getting accused of click fraud, as AdNauseam found out the hard way but its worth it if you can squeeze some cash out of them before that happens.

[–] BlueMonday1984@awful.systems 12 points 3 weeks ago (5 children)

Sentiment analysis surrounding AI suggests sneers are gonna moon pretty soon. Good news for us, since we've been stacking sneers for a while.

[–] BlueMonday1984@awful.systems 8 points 3 weeks ago

Recently stumbled upon an anti-AI mutual aid/activism group that's being set up, I suspect some of you will be interested.

[–] BlueMonday1984@awful.systems 13 points 3 weeks ago (1 children)

xAI has applied for permits for the first set of turbines. But it won’t install pollution controls unless and until its permits are approved. At that point, xAI will be “the lowest-emitting facility in the country,” allegedly.

Musk probably sees gassing black people as a free bonus for installing the turbines, I strongly doubt he's installing pollution controls.

[–] BlueMonday1984@awful.systems 7 points 3 weeks ago

As a famous swindler once said, there's a sucker born every minute.

 

I've been hit by inspiration whilst dicking about on Discord - felt like making some off-the-cuff predictions on what will happen once the AI bubble bursts. (Mainly because I had a bee in my bonnet that was refusing to fuck off.)

  1. A Full-Blown Tech Crash

Its no secret the industry's put all their chips into AI - basically every public company's chasing it to inflate their stock prices, NVidia's making money hand-over-fist playing gold rush shovel seller, and every exec's been hyping it like its gonna change the course of humanity.

Additionally, going by Baldur Bjarnason, tech's chief goal with this bubble is to prop up the notion of endless growth so it can continue reaping the benefits for just a bit longer.

If and when the tech bubble pops, I expect a full-blown crash in the tech industry (much like Ed Zitron's predicting), with revenues and stock prices going through the floor and layoffs left and right. Additionally, I'm expecting those stock prices will likely take a while to recover, if ever, as tech likely comes to be viewed either as a stable, mature industry that's no longer experiencing nonstop growth.

Chance: Near-Guaranteed. I'm pretty much certain on this, and expect it to happen sometime this year.

  1. A Decline in Tech/STEM Students/Graduates

Extrapolating a bit from Prediction 1, I suspect we might see a lot less people going into tech/STEM degrees if tech crashes like I expect.

The main thing which drew so many people to those degrees, at least from what I could see, was the notion that they'd make you a lotta money - if tech publicly crashes and burns like I expect, it'd blow a major hole in that notion.

Even if it doesn't kill the notion entirely, I can see a fair number of students jumping ship at the sight of that notion being shaken.

Chance: Low/Moderate. I've got no solid evidence this prediction's gonna come true, just a gut feeling. Epistemically speaking, I'm firing blind.

  1. Tech/STEM's Public Image Changes - For The Worse

The AI bubble's given us a pretty hefty amount of mockery-worthy shit - Mira Murati shitting on the artists OpenAI screwed over, Andrej Karpathy shitting on every movie made pre-'95, Sam Altman claiming AI will soon solve all of physics, Luma Labs publicly embarassing themselves, ProperPrompter recreating motion capture, But Worse^tm, Mustafa Suleyman treating everything on the 'Net as his to steal, et cetera, et cetera, et fucking cetera.

All the while, AI has been flooding the Internet with unholy slop, ruining Google search, cooking the planet, stealing everyone's work (sometimes literally) in broad daylight, supercharging scams, killing livelihoods, exploiting the Global South and God-knows-what-the-fuck-else.

All of this has been a near-direct consequence of the development of large language models and generative AI.

Baldur Bjarnason has already mentioned AI being treated as a major red flag by many - a "tech asshole" signifier to be more specific - and the massive disconnect in sentiment tech has from the rest of the public. I suspect that "tech asshole" stench is gonna spread much quicker than he thinks.

Chance: Moderate/High. This one's also based on a gut feeling, but with the stuff I've witnessed, I'm feeling much more confident with this than Prediction 2. Arguably, if the cultural rehabilitation of the Luddites is any indication, it might already be happening without my knowledge.

If you've got any other predictions, or want to put up some criticisms of mine, go ahead and comment.

 

Damn nice sneer from Charlie Warzel in this one, taking a direct shot at Silicon Valley and its AGI rhetoric.

Archive link, to get past the paywall.

 

(Gonna expand on a comment I whipped out yesterday - feel free to read it for more context)


At this point, its already well known AI bros are crawling up everyone's ass and scraping whatever shit they can find - robots.txt, honesty and basic decency be damned.

The good news is that services have started popping up to actively cockblock AI bros' digital smash-and-grabs - Cloudflare made waves when they began offering blocking services for their customers, but Spawning AI's recently put out a beta for an auto-blocking service of their own called Kudurru.

(Sidenote: Pretty clever of them to call it Kudurru.)

I do feel like active anti-scraping measures could go somewhat further, though - the obvious route in my eyes would be to try to actively feed complete garbage to scrapers instead - whether by sticking a bunch of garbage on webpages to mislead scrapers or by trying to prompt inject the shit out of the AIs themselves.

The main advantage I can see is subtlety - it'll be obvious to AI corps if their scrapers are given a 403 Forbidden and told to fuck off, but the chance of them noticing that their scrapers are getting fed complete bullshit isn't that high - especially considering AI bros aren't the brightest bulbs in the shed.

Arguably, AI art generators are already getting sabotaged this way to a strong extent - Glaze and Nightshade aside, ChatGPT et al's slop-nami has provided a lot of opportunities for AI-generated garbage (text, music, art, etcetera) to get scraped and poison AI datasets in the process.

How effective this will be against the "summarise this shit for me" chatbots which inspired this high-length shitpost I'm not 100% sure, but between one proven case of prompt injection and AI's dogshit security record, I expect effectiveness will be pretty high.

 

After reading through Baldur's latest piece on how tech and the public view gen-AI, I've had some loose thoughts about how this AI bubble's gonna play out.

I don't have any particular structure to this, this is just a bunch of things I'm getting off my chest:

  1. AI's Dogshit Reputation

Past AI springs had the good fortune to have had no obvious negative externalities to sour the public's reputation (mainly because they weren't public facing, going by David Gerard).

This bubble, by comparison, has been pretty much entirely public facing, giving us, among other things:

All of these have done a lot of damage to AI's public image, to the point where its absence is an explicit selling point - damage which I expect to last for at least a decade.

When the next AI winter comes in, I'm expecting it to be particularly long and harsh - I fully believe a lot of would-be AI researchers have decided to go off and do something else, rather than risk causing or aggravating shit like this. (Missed this incomplete sentence on first draft)

  1. The Copyright Shitshow

Speaking of copyright, basically every AI company has worked under the assumption that copyright basically doesn't exist and they can yoink whatever they want without issue.

With Gen-AI being Gen-AI, getting evidence of their theft isn't particularly hard - as they're straight-up incapable of creativity, they'll puke out replicas of its training data with the right prompt.

Said training data has included, on the audio side, songs held under copyright by major music studios, and, on the visual side, movies and cartoons currently owned by the fucking Mouse..

Unsurprisingly, they're getting sued to kingdom come. If I were in their shoes, I'd probably try to convince the big firms my company's worth more alive than dead and strike some deals with them, a la OpenAI with Newscorp.

Given they seemingly believe they did nothing wrong (or at least Suno and Udio do), I expect they'll try to fight the suits, get pummeled in court, and almost certainly go bankrupt.

There's also the AI-focused COPIED act which would explicitly ban these kinds of copyright-related shenanigans - between getting bipartisan support and support from a lot of major media companies, chances are good it'll pass.

  1. Tech's Tainted Image

I feel the tech industry as a whole is gonna see its image get further tainted by this, as well - the industry's image has already been falling apart for a while, but it feels like AI's sent that decline into high gear.

When the cultural zeitgeist is doing a 180 on the fucking Luddites and is openly clamoring for AI-free shit, whilst Apple produces the tech industry's equivalent to the "face ad", its not hard to see why I feel that way.

I don't really know how things are gonna play out because of this. Taking a shot in the dark, I suspect the "tech asshole" stench Baldur mentioned is gonna be spread to the rest of the industry thanks to the AI bubble, and its gonna turn a fair number of people away from working in the industry as a result.

 

I don’t think I’ve ever experienced before this big of a sentiment gap between tech – web tech especially – and the public sentiment I hear from the people I know and the media I experience.

Most of the time I hear “AI” mentioned on Icelandic mainstream media or from people I know outside of tech, it’s being used as to describe something as a specific kind of bad. “It’s very AI-like” (“mjög gervigreindarlegt” in Icelandic) has become the talk radio short hand for uninventive, clichéd, and formulaic.

babe wake up the butlerian jihad is coming

39
submitted 11 months ago* (last edited 11 months ago) by BlueMonday1984@awful.systems to c/techtakes@awful.systems
 

I stopped writing seriously about “AI” a few months ago because I felt that it was more important to promote the critical voices of those doing substantive research in the field.

But also because anybody who hadn’t become a sceptic about LLMs and diffusion models by the end of 2023 was just flat out wilfully ignoring the facts.

The public has for a while now switched to using “AI” as a negative – using the term “artificial” much as you do with “artificial flavouring” or “that smile’s artificial”.

But it seems that the sentiment might be shifting, even among those predisposed to believe in “AI”, at least in part.

Between this, and the rise of "AI-free" as a marketing strategy, the bursting of the AI bubble seems quite close.

Another solid piece from Bjarnason.

view more: ‹ prev next ›