piggy

joined 3 days ago
[–] piggy@hexbear.net 10 points 1 day ago* (last edited 1 day ago) (1 children)

The problem that symbolic AI systems ran into in the 70s are precisely the ones that deep neural networks address.

Not in any meaningful way. A statistical model cannot address the Frame problem. Statistical models themselves exacerbate the problems of connectionist approaches. I think AI researchers aren't being honest with the causality here. We are simply fooling ourselves and willfully misinterpreting statistical correlation as causality.

You're right there are challenges, but there's absolutely no reason to think they're insurmountable.

Let me repeat myself for clarity. We do not have a valid general theory of mind. That means we do not have a valid explanation of the process of thinking itself. That is an insurmountable problem that isn't going to be fixed by technology itself, because technology cannot explain things; technology is a set of constructed processes. We can use technology to attempt to build a theory of mind, but we're building the plane while we're flying it here.

I'd argue that using symbolic logic to come up with solutions is very much what reasoning is actually.

Because you are a human doing it, you are not a machine that has been programmed. That is the difference. There is no algorithm that gives you correct reasoning every time. In fact using pure reasoning often leads to lulzy and practically incorrect ideas.

Somehow you have to take data from the senses and make sense of it. If you're claiming this is garbage in garbage out process, then the same would apply to human reasoning as well.

It does. Ben Shapiro is a perfect example. Any debate guy is. They're really good at reasoning and not much else. Like, read the Curtis Yarvin interview in the NYT. You'll see he's really good at reasoning, so good that he accidentally makes some good points and owns the NYT at times. But more often than not the reasoning ends up in a horrifying place that isn't actually novel or unique, simply a rehash of previous horrifying things in new wrappers.

The models can create internal representations of the real world through reinforcement learning in the exact same way that humans do. We build up our internal world model through our interaction with environment, and the same process is already being applied in robotics today.

This is a really Western-brained idea of how our biology works, because as complex systems we work on inscrutable ranges. For example, let's take some abstract "features" of the human experience and understand how they apply to robots:

  • Strength. We don't build robots that can get stronger over time. Humans can do this, but we would never design a robot this way; we see it as inefficient and difficult. This is a unique biological aspect of the human experience that allows us to reason about the physical world.

  • Pain. We would not build a robot that experiences pain in the same way humans do. You can classify pain inputs, but why would you build a machine that can "understand" pain, where pain interrupts its processes? This is again another unique aspect of human biology that allows us to reason about the physical world.

[–] piggy@hexbear.net 7 points 1 day ago* (last edited 1 day ago) (4 children)

drug discovery

This is mainly hype. The process of building AI has been useful for drug discovery; LLMs as people practically know them (e.g. ChatGPT) have not been, other than enabling the same kind of sloppy labor corner-cutting cost-cutting bullshit.

If you read a lot of the practical applications in the papers, it's mostly publish-or-perish crap where they're gushing about how drug trials should be like going to cvs.com: you get a robot, you can ask it to explain something to you, and it spits out the same thing reworded 4-5 times.

They're simply pushing consent protocols onto robots rather than nurses, which TBH should be an ethical violation.

[–] piggy@hexbear.net 12 points 1 day ago* (last edited 1 day ago) (3 children)

Neurosymbolic AI is overhyped. It's just bolting LLMs onto symbolic AI and pretending that it's a "brand new thing" (it's not; it's actually how most LLMs practically work today and have for a long time — GPT-3 itself is neurosymbolic). The advocates of this approach pretend that the "reasoning" comes from the symbolic AI side, also known as classical AI, which still suffers from the same exact problems it did in the 1970s when the first AI winter happened: we do not have an algorithm capable of representing a theory of mind, nor do we have a realistic theory of mind to begin with.

Not only that but all of the integration points between classical techniques and statistical techniques present extreme challenges because in practice the symbolic portion essentially trusts the output of the statistical portion because the symbolic portion has limited ability to validate.

Yeah, you can teach ChatGPT to correctly count the r's in "strawberry" with a neurosymbolic approach, but general models won't be able to reasonably discover even the most basic of concepts, such as volume displacement, by themselves.
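To be concrete about what the strawberry "fix" amounts to: the arithmetic is done by an ordinary deterministic routine, and the statistical model's only job is to route the request to it. This is a minimal sketch, assuming a hypothetical setup where the LLM has already parsed the question into a word and a letter; the function names are illustrative, not any real library's API.

```python
# Sketch of the "neurosymbolic" strawberry patch. The symbolic part is
# just deterministic string counting; an LLM would only be used upstream
# to turn free text into the (word, letter) arguments. Hypothetical names.

def count_letter(word: str, letter: str) -> int:
    """Symbolic (deterministic) part: exact, verifiable counting."""
    return word.lower().count(letter.lower())

def answer(word: str, letter: str) -> str:
    # The symbolic routine never trusts the model for the arithmetic
    # itself -- it only trusts the parse, which is the weak point the
    # comment above is describing.
    n = count_letter(word, letter)
    return f"There are {n} '{letter}' characters in '{word}'."

print(answer("strawberry", "r"))
```

Note that this is programming, not reasoning: the "understanding" lives entirely in the hand-written routine, and the system is only as reliable as the statistical parse feeding it.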

You're essentially back at the same problem: either you lean on the symbolic aspects and limit yourself to advanced ELIZA-like functionality that can just use a classifier, or you throw yourself to the mercy of the statistical model and pray you have enough symbolic safeguards.

Either way it's not reasoning; it is at best programming -- if that. That's actually the practical reason the neurosymbolic space is getting attention: the problem has effectively become controlling inputs and outputs, not only for reliability and accuracy but for censorship and control. This is still a Garbage In Garbage Out process.

FYI, most of the big names in the "Neurosymbolic AI as the next big thing" space hitched their wagon to Kahneman's Thinking Fast and Slow bullshit, which is effectively made-up bullshit like Freudianism but lamer, and has essentially been squad-wiped by the replication crisis.

Don't get me wrong, DeepSeek and Doubao are steps in the right direction. They're less proprietary, less wasteful, and broadly more useful, but they aren't a breakthrough in anything but capitalist hoarding of technological capacity.

The reason AI is not useful in most circumstances is because of the underlying problems of the real world, and you can't algorithm your way out of people problems.

[–] piggy@hexbear.net 9 points 1 day ago

Suffer not the heretic to live.

My armor is contempt. My shield is disgust. My sword is hatred. In the Emperor's name, let none survive.

Real grimdank hours going on here.

[–] piggy@hexbear.net 26 points 1 day ago* (last edited 1 day ago) (1 children)

but.but but death-to-the-poor works so well on poor people. why isn't death-to-the-poor working on nazis!!??!?!

[–] piggy@hexbear.net 9 points 1 day ago* (last edited 1 day ago) (2 children)

Dawn of War 1. Better yet just computerize table top. I'm antisocial :(

[–] piggy@hexbear.net 8 points 2 days ago

I was a really young nerdy kid, and coming from the Soviet Union the only thing I cared about was computers. I'd been obsessed with computers since playing Doom as a kid in a cyber cafe. I got my first computer at the age of 8-9 after we had immigrated. I was about 10 years old when I was trolling AOL chat rooms by myself.... and I had a lovely 640x480 webcam.... and yeah. A lot of this brings up uneasy memories.

I think the horny categorization does fit me. I'm not like a gooner or anything but my partner would agree 100% with the statement: "thinks I'm about to pounce on them and always waits for me to initiate everything. Why people basically see horny as one of my personality traits."

I don't experience issues with non-sexual intimacy, but I wanted to let you know that you're not alone!

[–] piggy@hexbear.net 43 points 2 days ago* (last edited 2 days ago) (1 children)

General Strikes are escalated to, not planned. That's why the AFL, the most idiotic union, basically banned escalation into a general strike by requiring striking locals to have national authorization or risk getting kicked out of the union. This was in response to the Seattle General Strike of February 1919; the AFL amended its constitution in June 1919. Similarly, Taft-Hartley, which outlaws general strikes in the US and was passed in 1947, was a response to the Oakland General Strike of 1946.

Also lol at #3 what is this? 2012?

[–] piggy@hexbear.net 2 points 2 days ago

The reason I disagree is that "dog/cat food" implies it's something that is widely eaten culturally, a "default meal" of sorts. I don't think pate fits the bill there most Americans cannot handle offal. Nuggies sure, but not really pate.

[–] piggy@hexbear.net 15 points 2 days ago

he believes it would actually change things if the facts came out and it was actually the CIA behind it

Ah yes the classic "force the vote" argument.

[–] piggy@hexbear.net 4 points 2 days ago (2 children)

I think that criteria is a bit too loose. In the American context liver pate would qualify.

[–] piggy@hexbear.net 63 points 2 days ago* (last edited 2 days ago)

There has never ever been a LGBTQ Nazi ever. Nobody who has ever followed National Socialism was queer (ignore Rohm). Nobody who ever was a fascist was queer (just ignore Milo, Fuentes, Caitlin Jenner, Blair White, Anne Marie Waters, Alice Weidel, Peter Boykin). Outing homosexuals so they face precarity in a hostile world has only been used as a cudgel by "bad guys" and never by "good guys". This is so obvious, ideologically sorted, and perfectly politically unmessy. Excuse me while I shop at Target on the day of our KWEEEN June 1st.
