Ha, even by the standards of SCP fanfiction, the slop Geoff Lewis got it to churn out was bad and silly.
scruiser
He knows the connectionists have basically won (insofar as you can construe competing scientific theories and engineering paradigms as winning or losing... which is kind of a bad framing), so that is why he is pushing the "neurosymbolic" angle so hard.
(And I do think Gary Marcus is right that neurosymbolic approaches have been neglected by the big LLM companies because they are narrower and you can't "guarantee" success just by dumping a lot of compute on them; you need actual domain expertise to do the symbolic half.)
I can imagine it clearly... a chart showing minimum feature size decreasing over time (using cherry-picked data points) with a dotted line projecting when 3d printers would get down to nanotech scale. 3d printer companies would warn of the dangers of future nanotech and ask for legislation regulating it (with the language of the legislation completely failing to affect current 3d printing technology). Everyone would be buying 3d printers for home use, and lots of shitty startups would be selling crappy 3d printed junk.
Yeah, that metaphor fits my feeling. And to extend the metaphor, I thought Gary Marcus was, if not a member of the village, at least an ally, but he doesn't seem to actually realize where the battle lines are. Like maybe to him hating on LLMs is just another way of pushing symbolic AI?
Those opening Peter Thiel quotes... Thiel talks about trans people (in a kind of dated and maybe a bit offensive way) to draw a comparison to transhumanists wanting to change themselves even more extensively. The disgusting irony is that Thiel has empowered the right-wing ecosystem, which is deeply opposed to trans rights.
So recently (two weeks ago), I noticed Gary Marcus made a lesswrong account to directly engage with the rationalists. I noted it in a previous stubsack thread
Predicting in advance: Gary Marcus will be dragged down by lesswrong, not lesswrong dragged up towards sanity. He'll start to use lesswrong lingo and terminology, quoting P(some event) based on numbers pulled out of his ass.
And sure enough, he has started talking about P(Doom). I hate being right. To be more than fair to him, he is addressing the scenario of Elon Musk or someone similar pulling off something catastrophic by placing too much trust in LLMs shoved into something critical. But he really should know better by now that using their lingo and their crit-hype terminology strengthens them.
Here's a LW site dev whining about the study; he was in it, and I think he thinks it was unfair to AI.
There is a complete lack of introspection. You'd think the obvious conclusion to draw from a study showing people's subjective estimates of their productivity with LLMs were the exact opposite of reality would be to question his own subjectively felt intuitions and experience, but instead he doubles down and insists the study must be wrong, and that surely, with the latest model and the best use of it, there would be a big improvement.
You’re welcome.
Given their assumptions, the doomers should be thanking us for delaying AGI doom!
Ah, you see, you fail to grasp the shitlib logic that the US bombing other countries doesn't count as illegitimate violence as long as the US has some pretext and maintains some decorum about it.
They probably got fed up with a broken system giving up its last shreds of legitimacy in favor of LLM garbage and are trying to fight back? Getting through an editor and appeasing reviewers already often requires some compromises in quality and integrity; this probably just seemed like one more.
The hidden prompt is only cheating if the reviewers fail to do their job right and outsource it to a chatbot; it does nothing to a human reviewer actually reading the paper properly. So I won't say it's right or ethical, but I'm much more sympathetic to these authors than to reviewers and editors outsourcing their job to an unreliable LLM.
Some of the comments are, uh, really telling:
The irony is completely lost on them.
The OP replies that they meant the former... but the latter is a better answer: Death with Dignity is kind of a big reveal of a lot of flaws with Eliezer and MIRI. To recap, Eliezer basically concluded that since he couldn't solve AI alignment, no one could, and everyone is going to die. It is like a microcosm of Eliezer's ego and approach to problem solving.
Yeah, no shit secrecy is bad for scientific inquiry and open and honest reflections on failings.
...You know, if I actually believed in the whole AGI doom scenario (and bought into Eliezer's self-hype) I would be even more pissed at him and sneer even harder at him. He basically set himself up as a critical savior to mankind, one of the only people clear-sighted enough to see the real dangers and the most important question... and then he totally failed to deliver. Not only that, he created the very hype that would trigger the creation of the unaligned AGI he promised to prevent!