Once again not much new. On the regulation as well as capability front, things keep grinding along.
Last week there was a claim that Pi AI cannot be jailbroken. This week, a Twitter user has it giving steps to manufacture heroin and C4. So it goes.
The most interesting progress for me is the paper noting that "grokking" is caused by the network learning two separate circuits: one for memorization and one for generalization. But there's no inherent preference for generalization; it's just a blessing of scale: retrain the grokked network on a too-small dataset and it forgets its generalization.
It just sounds like the creator made a thing that wasn't what people wanted.
It just feels like the question to ask, then, isn't "but how do I get them to choose the thing despite it not being what they want?"
"Hard work goes to waste when you make a thing that people don't want" is ... true. But I would say it's a stretch to call it a "problem". It's just an inescapable reality. It's almost tautological.
Look at houses. You made a village with a diverse bunch of houses. But nobody wants to live in more than half of them. Then: "how do I get people to live in my houses?" "Build houses that people actually want to live in." Like, you can pay people money to live in your weird houses, sure; I just feel like you have missed the point of being an architect somewhat.