FeepingCreature


It just sounds like the creator made a thing that wasn't what people wanted.

And the question to ask then isn't "but how do I get them to choose the thing despite it not being what they want?" - it's "how do I make a thing they do want?"

"Hard work goes to waste when you make a thing that people don't want" is ... true. But I would say it's a stretch to call it a "problem". It's just an unescapable reality. It's almost tautological.

Look at houses. Say you made a village with a diverse bunch of houses, but nobody wants to live in more than half of them. Then to "how do I get people to live in my houses?" the answer is "build houses that people actually want to live in." Sure, you can pay people money to live in your weird houses; I just feel like you've missed the point of being an architect somewhat.

 

And with that, I think I'm caught up again.

What a week!

 

A lot of the news coverage did feel weirdly "tactical" to me.

("Superhuman unaligned AI is making moves in OpenAI!" "Actually, it's just superhuman unaligned Sam Altman".)

 

Oops! I forgot this week's roundup. Better late than never...

 

A world with super-persuaders but not superintelligence is indeed going to be weird. All sorts of movements are persuasion-capped; if you can just get GPT-4.5 to make you a near-optimal marketing strategy... imagine the crypto bubble, but on everything. The Omnibubble.

You Will Want To Buy The NFT.

 

Again not much progress on the capabilities front. My head welcomes all the incremental advances in creating consensus on safety and locking down uncontrolled frontier models. My heart yearns for new preprints.

 

Good news: we can now break a semantically overloaded neuron into a dozen separate concepts that we don't understand.
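For the curious: the technique here, as I understand it, is sparse-autoencoder-style dictionary learning over the model's activations. A minimal sketch of that idea; every dimension, name, and hyperparameter below is made up by me, not taken from the paper.

```python
# Minimal sketch: learn an overcomplete, sparse "dictionary" of directions
# in activation space, so one polysemantic neuron decomposes into several
# candidate concepts. Illustrative only.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_act=512, d_dict=4096):
        super().__init__()
        self.encoder = nn.Linear(d_act, d_dict)  # activations -> feature coefficients
        self.decoder = nn.Linear(d_dict, d_act)  # features -> reconstructed activations

    def forward(self, acts):
        feats = torch.relu(self.encoder(acts))   # non-negative, hopefully sparse
        return self.decoder(feats), feats

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(64, 512)                      # stand-in for recorded MLP activations

for _ in range(100):
    recon, feats = sae(acts)
    # Reconstruction loss plus an L1 penalty that pushes most coefficients
    # to zero, so each activation is explained by a few dictionary directions.
    loss = (recon - acts).pow(2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Whether the resulting directions mean anything to a human is, as noted, the open question.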

 

Not many new fundamentals this time, but lots of product news. It's always good to see interpretability making progress.

 

Biggest thing for me is that the new 3.5 apparently plays competent chess - at a high amateur level - iff you prompt it just right. I would not have expected that, considering how Anarchy Chess ChatGPT's normal play usually is. Once again this demonstrates that you can never prove the absence of a skill.
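The "just right" prompt, per the reports I saw, is a bare PGN transcript fed to the completion endpoint, so the model continues the game record instead of chatting about it. A hedged sketch; the model name and framing are from the public discussion, not something I've verified:

```python
# Hedged illustration: frame the game as a PGN document for a completion
# model to continue, rather than asking a chat model to "play chess".
from openai import OpenAI

client = OpenAI()

pgn_prompt = (
    '[Event "Casual Game"]\n'
    '[White "Player"]\n'
    '[Black "Opponent"]\n'
    '[Result "*"]\n'
    "\n"
    "1. e4 e5 2. Nf3 "  # it's Black's move; the model should complete it
)

resp = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # "the new 3.5" at the time, per those reports
    prompt=pgn_prompt,
    max_tokens=4,
    temperature=0.0,
)
print(resp.choices[0].text)          # e.g. "Nc6" as the continuation
```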

 

Once again not much new. On both the regulation and the capability front, things keep grinding along.

Last week there was a claim that Pi AI cannot be jailbroken. This week, a Twitter user has it giving steps to manufacture heroin and C4. So it goes.

Most interesting progress for me is the paper noting that "grokking" is caused by the network picking up two separate circuits: one for memorization and one for generalization. But there's no inherent preference for generalization; it's just a blessing of scale. Retrain the grokked network on a too-small dataset and it forgets its generalization again.
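For anyone who wants to poke at grokking themselves, here's a toy version of the usual setup: a small network on modular addition with strong weight decay. All hyperparameters are illustrative, not the referenced paper's exact configuration, and real grokking runs need far more steps than this sketch:

```python
# Toy grokking setup: memorization happens fast; with weight decay and
# enough steps, a generalizing solution eventually takes over.
import torch
import torch.nn as nn

P = 97
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))  # all (a, b)
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(pairs) // 2], perm[len(pairs) // 2:]

class ModAdd(nn.Module):
    def __init__(self, p=P, d=128):
        super().__init__()
        self.emb = nn.Embedding(p, d)
        self.mlp = nn.Sequential(nn.Linear(2 * d, 256), nn.ReLU(), nn.Linear(256, p))

    def forward(self, ab):
        return self.mlp(self.emb(ab).flatten(1))  # logits over residues mod p

model = ModAdd()
# Weight decay is the usual ingredient that eventually favors the cheaper
# generalizing circuit over pure memorization.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)

for step in range(2001):  # real runs go to 1e4-1e5 steps
    loss = nn.functional.cross_entropy(model(pairs[train_idx]), labels[train_idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        with torch.no_grad():
            acc = (model(pairs[test_idx]).argmax(-1) == labels[test_idx]).float().mean()
        print(f"step {step}: train loss {loss.item():.3f}, test acc {acc.item():.3f}")
```

The "forgetting" result would correspond to taking the grokked model and retraining it on a much smaller train_idx, then watching test accuracy collapse again.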
