scruiser

joined 2 years ago
[–] scruiser@awful.systems 3 points 1 week ago

Wow, that blows right past Dunning-Kruger overestimation into straight-up Time Cube tier crankery.

[–] scruiser@awful.systems 12 points 1 week ago (2 children)

The space of possible evolved biological minds is far smaller than the space of possible ASI minds

Achkshually, Yudkowskian Orthodoxy says any truly super-intelligent minds will converge on Expected Value Maximization, Instrumental Goals, and Timeless-Decision Theory (as invented by Eliezer), so clearly the ASI mind space is actually quite narrow.

[–] scruiser@awful.systems 15 points 1 week ago

Actually, as some of the main opponents of the would-be AGI creators, we sneerers are vital to the simulation's integrity.

Also, since the simulator will probably cut us all off once they've seen the ASI get started, by delaying and slowing down rationalists' quest to create AGI and ASI, we are prolonging the survival of the human race. Thus we are the most altruistic and morally best humans in the world!

[–] scruiser@awful.systems 8 points 1 week ago* (last edited 1 week ago)

Yeah, the commitment might be only a token deposit, or maybe even less than that. A sufficiently reliable and cost-effective (including fuel and maintenance costs) supersonic passenger plane doesn't seem impossible in principle? Maybe cryptocurrency, NFTs, LLMs, and other crap like Theranos have given me low standards for startups: at the very least, Boom is attempting to make something that is in principle possible (for within an OOM of their requested funding), and that wouldn't be useless or criminal if it actually worked, since it would solve a real (if niche) need. I wouldn't be that surprised if they eventually produce a passenger plane... a decade from now, well over the originally planned budget, that is too costly to fuel and maintain for all but the most niche clientele.

[–] scruiser@awful.systems 7 points 1 week ago* (last edited 1 week ago) (3 children)

I just now heard about it here. Reading about it on Wikipedia... they had a mathematical model that said their design shouldn't generate a sonic boom audible from ground level, but it was possible their mathematical model wasn't completely correct, so building a 1/3 scale prototype (apparently) validated their model? It's possible their model won't be right about their prospective design, but if it was right about the 1/3 scale, then that's good evidence the model will hold up? idk, ~~I'm not seeing much that is sneerable here~~, it seems kind of neat. Surely they wouldn't spend the money on the 1/3 scale prototype unless they actually needed the data (as opposed to it being a marketing ploy, or worse yet a ploy for more VC funds)... surely they wouldn't?

iirc about the Concorde (one of only two supersonic passenger planes to enter service), it isn't so much that supersonic passenger planes aren't technologically viable, it's more a question of economics (with some additional issues around noise pollution and other environmental concerns). Limits on its flight paths because of sonic booms were one of the Concorde's problems, so at least Boom won't have that one. As for the other questions... Boom Supersonic's webpage addresses them directly, though not in any detail, but at least they address them...

Looking for some more skeptical sources... this website seems interesting: https://www.construction-physics.com/p/will-boom-successfully-build-a-supersonic . It points out some big problems with Boom's approach. Boom is designing both its own engine and its own plane, and the costs are likely to run into the limits of their VC funding even assuming nothing goes wrong. And even if they get a working plane and engine, they might not hit the safety, cost, and reliability needed for a viable supersonic passenger plane. And... the XB-1 never actually reached Mach 2.2 and was retired after only a few flights. Maybe it was a desperate ploy for more VC funding? Or maybe it had some unannounced issues? Okay... I'm seeing why this is potentially sneerable. There is a decent chance they entirely fail to deliver a plane with the VC funding they have, and even if they get that far, it is likely to fail as a commercially viable passenger plane. Still, there is some possibility they deliver something... so eh, wait and see?

[–] scruiser@awful.systems 12 points 1 week ago

As the other comments have pointed out, an automated search for this category of bugs (done without LLMs) would do the same job much faster, with much less computational resources, without any bullshit or hallucinations in the way. The LLM isn't actually a value add compared to existing tools.

[–] scruiser@awful.systems 43 points 1 week ago (2 children)

Of course, part of that wiring will be figuring out how to deal with the signal to noise ratio of ~1:50 in this case, but that’s something we are already making progress at.

This line annoys me... LLMs excel at making signal-shaped noise, so separating out an absurd number of false positives (and investigating false negatives further) is very difficult. It probably requires that you have some sort of actually reliable verifier, and if you have that, why bother with LLMs in the first place instead of just using that verifier directly?

[–] scruiser@awful.systems 4 points 1 week ago

He hasn't missed an opportunity to ominously play up genAI capabilities (I remember him doing so as far back as AI dungeon), so it will be a real break for him to finally admit how garbage their output is.

[–] scruiser@awful.systems 6 points 1 week ago

Loose Mission Impossible Spoilers

The latest Mission Impossible movie features a rogue AI as one of the main antagonists. On the other hand, the AI's main powers are lies, fake news, and manipulation; it only gets as far as it does because people let fear make them manipulable, and it relies on human agents to do much of its work. So in terms of promoting the doomerism narrative, I think the movie could actually be read as opposing the conventional doomer narrative, in favor of a calm, moderate, internationally coordinated response (the entire plot could have been derailed by governments agreeing on mutual nuclear disarmament before the AI subverted them) to AIs that ultimately have only moderate power.

Adding to the post-LLM-hype predictions: I think once the LLM bubble pops, "Terminator"-style rogue AI movie plots won't go away, but will take on a different spin. Rogue AIs' strengths are going to be narrower, their weaknesses are going to get more comical and absurd, and idiotic human actions are going to be more of a factor. For weaknesses, it will be less "failed to comprehend love" or "cleverly constructed logic bomb breaks its reasoning" and more "forgets what it was doing after getting drawn into too long a conversation". For human actions, it will be less "its makers failed to anticipate a completely unprecedented sequence of bootstrapping and self-improvement" and more "its makers disabled every safety and granted it every resource it asked for in the process of trying to make an extra dollar a little bit faster".

[–] scruiser@awful.systems 12 points 1 week ago* (last edited 1 week ago) (2 children)

He's set up a community primed to think the scientific establishment's focus on falsifiability and peer review is fundamentally worse than "Bayesian" methods, and that you don't need credentials or even conventional education or experience to have revolutionary good ideas, and he's strengthened the already existing myth of lone geniuses pushing science forward (as opposed to systematic progress). Attracting cranks was an inevitable outcome. In fact, Eliezer occasionally praises cranks when he isn't able to grasp their sheer crankiness (for instance, GeneSmith's ideas are total nonsense to anyone with more familiarity with genetics than skimming relevant-sounding scientific publications and garbage pop-sci journalism, but Eliezer commented favorably). The only thing that has changed is ChatGPT and its clones glazing cranks, making them even more deluded. And of course, someone (cough, Eliezer) was hyping up GPT models as far back as GPT-2, so it's only to be expected that cranks would think LLMs were capable of providing legitimate useful feedback.

Not a fan of yud but getting daily emails from delulus would drive me to wish for the basilisk

He's deliberately cultivated an audience willing to hear cranks out, so this is exactly what he deserves.

[–] scruiser@awful.systems 12 points 2 weeks ago

This connection hadn't occurred to me before, but the scenes in Starship Troopers (the book) where they claim to have mathematically rigorous proofs about various moral statements, actions, or societal constructs remind me of how Eliezer has a decision theory in mind with all sorts of counterintuitive claims (it's mathematically valid to never, ever give in to any blackmail or threats or anything adjacent to them), but he hasn't actually written his decision theory out in rigorous, well-defined terms that could pass peer review or be used to figure out anything beyond some pre-selected toy problems.

[–] scruiser@awful.systems 10 points 2 weeks ago

There are parts of the field with major problems, like the sorts of studies that get done on 20 student volunteers and then turned into a pop-psychology factoid that gets tossed around and over-generalized while the original study fails to replicate, but there are also parts that are actually good science.
