Ultimately, LLMs don’t use words. LLM responses are basically paths through the token space; they may or may not overuse certain words, but they’ll have a bias towards using certain words together.
so they use words but they don't. okay
this is about as convincing a point as "humans don't use words, they use letters!" it's not saying anything, just adding noise
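for what it's worth, the "path through the token space" framing is at least easy to make concrete. here's a toy sketch in plain python -- made-up corpus, bigram counts standing in for a real model's conditional distributions -- where the "bias towards using certain words together" is nothing more than those conditionals:

```python
import random
from collections import Counter, defaultdict

# made-up "training" corpus; real models use subword tokens and much longer
# context, but bigrams already show the "path through token space" idea
corpus = ("the model picks the next token given the previous tokens "
          "and the model picks tokens that often appear together").split()

# count which token follows which: an unnormalized P(next | previous)
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def sample_path(start, length=8):
    """walk the token space: each step draws from the conditional
    distribution over next tokens, so the 'bias' is baked into the counts"""
    path = [start]
    for _ in range(length):
        options = follows.get(path[-1])
        if not options:
            break
        tokens, weights = zip(*options.items())
        path.append(random.choices(tokens, weights=weights)[0])
    return " ".join(path)

print(sample_path("the"))
# one possible path: "the model picks tokens that often appear together"
```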
So I don’t think this is impossible… Humans struggle to grasp these kinds of hidden relationships (consciously at least), but neural networks are good at that kind of thing
i can't tell what the "this" is that you think is possible
part of the problem is that a lot of those "hidden relationships" are also noise. knowing that "running" is typically an activity involving your legs doesn't help one parse the sentence "he's running his mouth". part of participating in communication is being able to throw out these spurious, useless connections when reading and writing, something the machine consistently fails to do.
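here's a crude sketch of that noise problem, with made-up association scores (hypothetical numbers, not mined from any real corpus): a relationship learned from context-free co-occurrence fires the same way in every sentence, including the ones where it's useless.

```python
# context-free word associations, the kind co-occurrence statistics produce.
# the words and scores below are invented for illustration.
associations = {
    "running": [("legs", 0.81), ("race", 0.74), ("shoes", 0.69)],
}

def naive_gloss(sentence):
    """gloss a sentence using only static associations -- the same
    'hidden relationship' fires no matter what the context says"""
    for word in sentence.lower().split():
        for related, score in associations.get(word, [])[:1]:
            print(f"{sentence!r}: {word} -> {related} ({score})")

naive_gloss("she went running this morning")  # 'legs': plausible
naive_gloss("he's running his mouth again")   # 'legs': spurious, should be thrown out
```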
It’s incredibly useful for generating all sorts of content when paired with a skilled human
so is a rock
It can handle the tedious details while a skilled human drives it and validates the output
validation is the hard step, actually. writing articles is really easy if you don't care about the legibility, truthiness, or quality of the output. i've tried to "co-write" short-format fiction with large language models for fun, and it always devolved into me deleting large chunks of the machine's output -- or even the whole thing -- and rewriting it by hand. i was more "productive" with a blank notepad.exe. i've not tried it for documentation or persuasive writing, but i'm pretty sure it would be a similar situation there, if not even more so, because in nonfiction writing i actually have to conform to reality.
this argument always baffles me whenever it comes up, as if writing is 5% coming up with ideas and the other 95% is tedious, pen-in-hand (or fingers-on-keyboard) execution. i've yet to meet a writer who believes this -- all the writing i've ever done required more or less constant editorial decisions, from the macro scale of format and structure down to individual word choices. have i sufficiently introduced this concept? do i like the way this sentence flows, or does it need to go earlier in the paragraph? how does this tie in with the feeling i'm trying to convey or the argument i'm trying to put forward?
writing is, as a skill, that editorial process (to one degree or another). sure, i can defer all of those choices to the machine and get the statistically-most-expected, confusing, factually dubious, aimless, unchallenging, and uncompelling text out of it. but if i want anything more than that (and i suspect most writers do), then i am doing 100% of that work myself.
as previously discussed, the rabbit r1 turns out to be (gasp) just an android app.
in a twist no one saw coming, the servers running "rabbit os" are reported to just be running Ubuntu, and the "large action model" that was supposed to watch humans use interfaces and learn how to use them turns out to just be a series of hardcoded places to click in Playwright.
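and "hardcoded places to click in Playwright" looks roughly like this -- to be clear, a guessed sketch, not rabbit's actual code; the url, selectors, and flow below are invented for illustration:

```python
# a guessed approximation of a "large action model" that is actually just
# scripted browser automation. url and selectors are invented placeholders.
from playwright.sync_api import sync_playwright

def play_song(query: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # nothing is watched, nothing is learned: a fixed sequence of
        # navigations and clicks on hardcoded selectors
        page.goto("https://music-service.example/search")
        page.fill("input#search", query)    # hardcoded place to type
        page.press("input#search", "Enter")
        page.click("div.results >> nth=0")  # hardcoded place to click
        page.click("button.play")           # and another one
        browser.close()

play_song("never gonna give you up")
```

the tell, of course, is that a script like this breaks the moment the page layout changes, which is not what "learning how to use interfaces" was supposed to mean.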