piggy

joined 2 days ago
[–] piggy@hexbear.net 7 points 1 hour ago* (last edited 1 hour ago)

Yeah, trap music is a good example; shit was everywhere for a while.

Indie sleaze is another.

Mumblecore movies.

"Prestige TV" literally ongoing.

[–] piggy@hexbear.net 2 points 1 hour ago

I agree that anti-intellectualism is bad, but I wouldn't necessarily consider being AI-negative by default a form of anti-intellectualism. It's the same thing as people who are negative on space exploration. It's a symptom of a system where there seems to be infinite money for fads/scams/bets, things that have limited practical use in people's lives, and ultimately not enough to support people.

That's really where I see those arguments coming from. AI is quite honestly a frivolity in a society where housing is a luxury.

[–] piggy@hexbear.net 9 points 1 hour ago* (last edited 1 hour ago) (1 children)

I think the need to have a shared monoculture is a deeply reactionary way of thinking that prevents us from developing human empathy. You don't need to say "Bazinga" at the same time as another person in order to relate to, care for, and understand strangers. I think the yearning for monoculture in people 25-40 is a mirror of boomers who complain that they cannot relate to kids anymore because nobody really believes in the Pledge of Allegiance or some such other "things r different" nonsense. Yeah, I haven't played Hoop and Stick 3; we don't need to play the same video games to relate to each other.

It's a crutch for a brutal culture where you are too scared to show a modicum of openness or vulnerability with other humans because deep down you need to be reassured that they won't scam/harm you simply because they believe in the magic words of Burgerstan. People are uncomfortable with change and things they don't know because we've built a society where change often begets material vulnerability in people, and information and even cultural media have become a weapon to be used against others.

Monoculture was never good, it simply was. Also, despite this being a real aesthetic trend, you should remember that the vast majority of consumer technology produced at the same time was not clear plastic tech. If anything the monoculture of tech products of that era was that gross beige that yellows in a year or two. It's just not aesthetic enough to remember, and within 10 years everything just defaulted to black. I've actually never seen a clear plastic Dreamcast/Dreamcast controller IRL. I've been a tech guy forever and despite knowing about it, I only know of one person who actually experienced the Dreamcast internet. This is very much nostalgia bait vs. how things actually were.

To put it into perspective: for every one of those clear plastic phones, there were 10,000 of these

[–] piggy@hexbear.net 4 points 2 hours ago* (last edited 2 hours ago) (2 children)

I should have been more precise, but this is all in the context of news about a cutting-edge LLM built at a fraction of the cost of ChatGPT, and comments calling it all "reactionary autocorrect" and "literally reactionary by design".

I disagree that it's "reactionary by design". I agree that its usage is 90% reactionary. Many companies are effectively trying to use it in a way that attempts to reinforce their deteriorating status quo. I work in software, so I constantly see people calling this shit a magic wand for the problems of the falling rate of profit and the falling rate of production. I'll give you an extremely common example that I've seen across multiple companies and industries.

Problem: Modern companies do not want to be responsible for the development and education of their employees. They do not want to pay for the development of well functioning specialized tools for the problems their company faces. They see it as a money and time sink. This often presents itself as:

  • missing, incomplete, incorrect documentation
  • horrible time wasting meeting practices

I've seen the following pitched as AI band-aids:

Proposal: push all your documentation into a RAG LLM so that users simply ask the robot and get what they want

Reality: The robot hallucinates things that aren't there in technical processes. Attempts to get the robot to correct this result in the robot sticking to marketing-style vagaries that aren't even grounded in the reality of how the company actually works (things as simple as the robot assuming how a process/team/division is organized rather than the reality). Attempts to simply use it as a semantic search index end up linking to the real documentation, which is garbage to begin with and doesn't actually solve anyone's real problems.
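For reference, the pitched pattern is usually nothing fancier than the sketch below: retrieve the doc chunks most similar to the question and stuff them into a prompt. All the chunks, names, and the question here are made up for illustration; retrieval is plain TF-IDF cosine similarity rather than an embedding model, but the shape of the pipeline is the same.

```python
# Minimal sketch of the "push the docs into a RAG LLM" pitch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Deploys go through the release team, Tuesdays only.",
    "On-call rotation is managed per-pod in PagerDuty.",
    "Expense reports must be filed in Workday within 30 days.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    ranked = sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)
    return [docs[i] for i in ranked[:k]]

question = "Who approves a Friday deploy?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
print(prompt)  # this is what gets sent to the LLM
```

Nothing in the retrieved context covers Friday deploys, so the model either refuses or invents an approval process; the gap between what the docs say and how the company actually works is exactly where the hallucinations live.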

Proposal: We have too many meetings and spend ~4 hours a day on Zoom. Nobody remembers what happens in the meetings, nobody takes notes; it's almost like we didn't have them at all. We are simply not good at working meetings, and they're just chat sessions where the topic is the project. We should use AI features to generate AI summaries of our meetings.

Reality: The AI summaries cannot capture action items correctly, if at all. The AI summaries are vague and mainly produce metadata rather than notes of important decisions and plans. We are still in meetings for 4 hours a day, but now we just copypasta useless AI summaries all over the place.

Don't even get me started on Copilot and code-generation garbage. Or making "developers productive". It all boils down to a million-monkeys problem.

These are very common scenarios that I've seen, and they ground the use of this technology in inherently reactionary patterns of social reproduction. By the way, I do think DeepSeek and Doubao are an extremely important and necessary step, because they destroy the status quo of Western AI development. AI in the West is made to be inefficient on purpose because it limits competition. The fact that you cannot run models locally, due to their incredible size and compute demand, is a vendor lock-in feature that ensures monetization channels for Western companies. The PayGo model bootstraps itself.
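To put rough numbers on the lock-in point, here's a back-of-envelope calculation (my figures, weights only, ignoring KV cache and activations):

```python
# Why "just run it locally" fails at frontier scale: the weights alone
# dwarf consumer hardware. Parameter counts are illustrative, not specs.
def weight_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate GB needed just to hold the model weights."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

print(f"~1T params @ fp16: {weight_gb(1000, 2):>7,.0f} GB")  # rumored frontier scale
print(f"70B params @ fp16: {weight_gb(70, 2):>7,.0f} GB")    # beyond any consumer GPU
print(f"7B params @ 4-bit: {weight_gb(7, 0.5):>7,.1f} GB")   # fits on a gaming card
```

The distance between ~130 GB of weights and a single gaming card is the moat; efficient distilled models that run locally erase it, which is exactly why they threaten the PayGo model.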

[–] piggy@hexbear.net 9 points 2 hours ago* (last edited 2 hours ago)

The problem that symbolic AI systems ran into in the 70s are precisely the ones that deep neural networks address.

Not in any meaningful way. A statistical model cannot address the Frame problem. Statistical models themselves exacerbate the problems of connectionist approaches. I think AI researchers aren't being honest with the causality here. We are simply fooling ourselves and willfully misinterpreting statistical correlation as causality.

You're right there are challenges, but there's absolutely no reason to think they're insurmountable.

Let me repeat myself for clarity. We do not have a valid general theory of mind. That means we do not have a valid explanation of the process of thinking itself. That is an insurmountable problem that isn't going to be fixed by technology itself because technology cannot explain things, technology is constructed processes. We can use technology to attempt to build a theory of mind, but we're building the plane while we're flying it here.

I'd argue that using symbolic logic to come up with solutions is very much what reasoning is actually.

Because you are a human doing it, you are not a machine that has been programmed. That is the difference. There is no algorithm that gives you correct reasoning every time. In fact using pure reasoning often leads to lulzy and practically incorrect ideas.

Somehow you have to take data from the senses and make sense of it. If you're claiming this is garbage in garbage out process, then the same would apply to human reasoning as well.

It does. Ben Shapiro is a perfect example. Any debate guy is. They're really good at reasoning and not much else. Like, read the Curtis Yarvin interview in the NYT. You'll see he's really good at reasoning, so good that he accidentally makes some good points and owns the NYT at times. But more often than not the reasoning ends up in a horrifying place that isn't actually novel or unique, simply a rehash of previous horrifying things in new wrappers.

The models can create internal representations of the real world through reinforcement learning in the exact same way that humans do. We build up our internal world model through our interaction with environment, and the same process is already being applied in robotics today.

This is a really Western-brained idea of how our biology works, because as complex systems we work on inscrutable ranges. For example, let's take some abstract "features" of the human experience and understand how they apply to robots:

  • Strength. We cannot build a robot that can get stronger over time. Humans can do this, but we would never build a robot to do this. We see this as inefficient and difficult. This is a unique biological aspect of the human experience that allows us to reason about the physical world.

  • Pain. We would not build a robot that experiences pain in the same way as humans. You can classify pain inputs, but why would you build a machine that can "understand" pain, where pain interrupts its processes? This is again another unique aspect of human biology that allows us to reason about the physical world.

[–] piggy@hexbear.net 6 points 3 hours ago* (last edited 3 hours ago) (4 children)

drug discovery

This is mainly hype. The process of creating AI has been useful for drug discovery; LLMs as people practically know them (e.g. ChatGPT) have not, beyond the same kind of sloppy labor-cost corner-cutting bullshit.

If you read a lot of the practical applications in the papers, it's mostly publish-or-perish crap where they're gushing about how drug trials should be like going to cvs.com, where you get a robot you can ask to explain something to you and it spits out the same thing reworded 4-5 times.

They're simply pushing consent protocols onto robots rather than nurses, which TBH should be an ethical violation.

[–] piggy@hexbear.net 10 points 3 hours ago* (last edited 3 hours ago) (2 children)

Neurosymbolic AI is overhyped. It's just bolting LLMs onto symbolic AI and pretending that it's a "brand new thing" (it's not; it's actually how most LLMs practically work today, and have for a long time: GPT-3 itself is neurosymbolic). The advocates of this approach pretend that the "reasoning" comes from the symbolic side, also known as classical AI, which still suffers from the same exact problems it did in the 1970s when the first AI winter happened: we do not have an algorithm capable of representing a theory of mind, nor do we have a realistic theory of mind to begin with.

Not only that, but all of the integration points between classical techniques and statistical techniques present extreme challenges, because in practice the symbolic portion essentially has to trust the output of the statistical portion; it has only a limited ability to validate it.

Yeah, you can teach ChatGPT to correctly count the r's in strawberry with a neurosymbolic approach, but general models won't be able to reasonably discover even the most basic of concepts, such as volume displacement, by themselves.
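The strawberry fix really is that shallow. Here's a toy sketch of the routing trick (mine, not any vendor's actual implementation): a brittle pattern match decides whether the question goes to a deterministic function or falls through to the statistical model.

```python
# Toy "neurosymbolic" router: an exact symbolic tool for the one case we
# anticipated, statistical guessing for everything else.
import re

def count_letter(word: str, letter: str) -> int:
    """Symbolic tool: exact and verifiable, no statistics involved."""
    return word.lower().count(letter.lower())

def answer(question: str) -> str:
    match = re.search(r"how many (\w)'?s? in (\w+)", question.lower())
    if match:
        letter, word = match.groups()
        return str(count_letter(word, letter))
    # Anything the router doesn't catch goes to the statistical model,
    # whose output the symbolic side has no way to validate.
    return "<LLM guess>"

print(answer("How many r's in strawberry?"))           # "3", from the tool
print(answer("Does a rock displace water in a cup?"))  # "<LLM guess>"
```

The "reasoning" here is a regex and str.count; it works only for the cases someone already anticipated, which is the ELIZA problem all over again.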

You're essentially back at the same problem: you either lean on the symbolic aspects and limit yourself entirely to advanced ELIZA-like functionality that just uses a classifier, or you throw yourself on the mercy of the statistical model and pray you have enough symbolic safeguards.

Either way it's not reasoning; it is at best programming -- if that. That's actually the practical reason the neurosymbolic space is getting attention: the problem has effectively become controlling inputs and outputs, not only for reliability/accuracy but for censorship and control. This is still a Garbage In, Garbage Out process.

FYI most of the big names in the "Neurosymbolic AI as the next big thing" space hitched their wagon to Kahneman's Thinking Fast and Slow bullshit, which is effectively made-up bullshit like Freudianism but lamer, and has essentially been squad-wiped by the replication crisis.

Don't get me wrong, DeepSeek and Doubao are steps in the right direction. They're less proprietary, less wasteful, and broadly more useful, but they aren't a breakthrough in anything but capitalist hoarding of technological capacity.

The reason AI is not useful in most circumstances is the underlying problems of the real world, and you can't algorithm your way out of people problems.

[–] piggy@hexbear.net 8 points 5 hours ago

Suffer not the heretic to live.

My armor is contempt. My shield is disgust. My sword is hatred. In the Emperor's name, let none survive.

Real grimdank hours going on here.

[–] piggy@hexbear.net 23 points 7 hours ago* (last edited 7 hours ago) (1 children)

but.but but death-to-the-poor works so well on poor people. why isn't death-to-the-poor working on nazis!!??!?!

[–] piggy@hexbear.net 9 points 16 hours ago* (last edited 16 hours ago) (2 children)

Dawn of War 1. Better yet, just computerize tabletop. I'm antisocial :(

130
submitted 16 hours ago* (last edited 16 hours ago) by piggy@hexbear.net to c/slop@hexbear.net
 

*uneasy anime battle music plays* Lightning crackles around AOC as she writes an epic Bluesky post and closes her laptop.

She goes to work the next day and lets a group of the most evil Amerikkkans be racist against the only Muslim and Palestinian women in Congress.

Later in the Rotunda...

*uneasy anime battle music plays* AOC locks eyes with Lauren Boebert and squints as they face off.

"You're so fucking lucky that you didn't hurt my friends."

[–] piggy@hexbear.net 7 points 16 hours ago

I was a really young nerdy kid, and coming from the Soviet Union, like, the only thing I cared about was computers. I had been obsessed with computers since playing Doom as a kid in a cyber cafe. I got my first computer at age 8-9, after we had immigrated. I was about 10 years old when I was trolling AOL chat rooms by myself.... and I had a lovely 640x480 webcam.... and yeah. A lot of this brings up uneasy memories.

I think the horny categorization does fit me. I'm not like a gooner or anything but my partner would agree 100% with the statement: "thinks I'm about to pounce on them and always waits for me to initiate everything. Why people basically see horny as one of my personality traits."

I don't experience issues with non-sexual intimacy, but I wanted to let you know that you're not alone!

[–] piggy@hexbear.net 43 points 23 hours ago* (last edited 23 hours ago) (1 children)

General strikes are escalated to, not planned. That's why the AFL, the most idiotic union, basically banned escalation into a general strike by requiring striking locals to have national authorization or risk getting kicked out of the union. This was in response to the Seattle General Strike of February 1919; the AFL amended its constitution in June 1919. Similarly, Taft-Hartley, which outlaws general strikes in the US, was passed in 1947 as a response to the Oakland General Strike of 1946.

Also lol at #3 what is this? 2012?
