ebu

joined 8 months ago
[–] ebu@awful.systems 19 points 6 months ago* (last edited 6 months ago)

as previously discussed, the rabbit r1 turns out to be (gasp) just an android app.

in a twist no one saw coming, the servers running "rabbit os" are reportedly just running Ubuntu, and the "large action model" that was supposed to watch humans use interfaces and learn how to operate them turns out to just be a series of hardcoded places to click in Playwright.
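for flavor, a hypothetical sketch of what "a series of hardcoded places to click" looks like in Playwright -- every url, selector, and credential below is invented, no learning involved, just a fixed script:

```python
# hypothetical sketch: a "large action model" that is really just a
# hardcoded Playwright script. all URLs and selectors are invented.
from playwright.sync_api import sync_playwright

def order_pizza(address: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example-food-app.com/login")  # invented URL
        page.fill("#email", "user@example.com")          # hardcoded field
        page.fill("#password", "hunter2")
        page.click("button[type=submit]")
        page.click("text=Pepperoni Pizza")               # hardcoded menu item
        page.fill("#delivery-address", address)
        page.click("#place-order")                       # a fixed place to click
        browser.close()
```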

[–] ebu@awful.systems 10 points 7 months ago (1 children)

Ultimately, LLMs don’t use words,

LLM responses are basically paths through the token space, they may or may not overuse certain words, but they’ll have a bias towards using certain words together

so they use words but they don't. okay

this is about as convincing a point as "humans don't use words, they use letters!" it's not saying anything, just adding noise
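for completeness, the actual mechanical detail being gestured at: a model consumes integer token ids, not words, and token boundaries don't line up with word boundaries. a quick sketch using the tiktoken library, if you have it installed:

```python
# sketch: words vs. tokens. "words" only exist before the tokenizer runs.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("he's running his mouth")
print(ids)                              # a list of integer token ids
print([enc.decode([i]) for i in ids])   # token boundaries != word boundaries
```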

So I don’t think this is impossible… Humans struggle to grasp these kinds of hidden relationships (consciously at least), but neural networks are good at that kind of thing

i can't tell what the "this" is that you think is possible

part of the problem is that a lot of those "hidden relationships" are also noise. knowing that "running" is typically an activity involving your legs doesn't help one parse the sentence "he's running his mouth", and part of participating in communication is being able to throw out these spurious and useless connections when reading and writing, something the machine consistently fails to do.

It’s incredibly useful to generate all sorts of content when paired with a skilled human

so is a rock

It can handle the tedious details while a skilled human drives it and validates the output

validation is the hard step, actually. writing articles is really easy if you don't care about the legibility, truthiness, or quality of the output. i've tried to "co-write" short-format fiction with large language models for fun, and it always devolved into me deleting large chunks of the machine's output -- or even the whole thing -- and rewriting it by hand. i was more "productive" with a blank notepad.exe. i've not tried it for documentation or persuasive writing, but i'm pretty sure it would be a similar situation there, if not more so, because in nonfiction writing i actually have to conform to reality.

this argument always baffles me whenever it comes up, as if writing is 5% coming up with ideas and the other 95% is boring, tedious, pen-in-hand (or fingers-on-keyboard) execution. i've yet to meet a writer who believes this -- all the writing i've ever done required more-or-less constant editorial decisions, from the macro scale of format and structure down to individual word choices. have i sufficiently introduced this concept? do i like the way this sentence flows, or does it need to go earlier in the paragraph? how does this tie into the feeling i'm trying to convey or the argument i'm trying to put forward?

writing is, as a skill, that editorial process (at least to one degree or another). sure, i can defer all of those choices to the machine and get the statistically-most-expected, confusing, factually dubious, aimless, unchallenging, and uncompelling text out of it. but if i want anything more than that (and i suspect most writers do), then i am doing 100% of that work myself.

[–] ebu@awful.systems 6 points 7 months ago

at least if it was "vectors in a high-dimensional space" it would be like. at least a little bit accurate to the internals of llms. (still an entirely irrelevant implementation detail that adds noise to the conversation, but accurate.)
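if you want to see that "little bit accurate" version concretely, here's a toy numpy sketch -- random vectors standing in for learned embeddings, so the actual numbers mean nothing:

```python
# sketch: tokens as vectors in a high-dimensional space. toy example;
# random vectors stand in for learned embeddings.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["running", "legs", "mouth"]
embed = {w: rng.normal(size=768) for w in vocab}  # 768 dims is typical-ish

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# meaningless here (the vectors are random), but this is the kind of
# "hidden relationship" measurement people mean by vector similarity
print(cosine(embed["running"], embed["legs"]))
```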

[–] ebu@awful.systems 24 points 7 months ago (26 children)

correlation? between the rise in popularity of tools that exclusively generate bullshit en masse and the huge swelling in volume of bullshit on the Internet? it's more likely than you think

it is a little funny to me that they're talking about using AI to detect AI garbage as a mechanism of preventing the sort of model/data collapse that happens when data sets start to become poisoned with AI content. because it seems reasonable to me that if you start feeding your spam-or-real classification data back into the spam-detection model, you'd wind up with exactly the same degradations of classification, and your model might start calling every article that has a sentence starting with "Certainly," machine-generated. maybe they're careful to only use human-curated sets of real and spam content, maybe not
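to spell out that feedback loop, here's a toy simulation -- entirely synthetic data, sklearn, every number invented. a classifier retrained on its own predictions tends to stay stuck with whatever errors it started with, while one trained on curated labels improves on the same data:

```python
# toy sketch of classifier feedback: a "real vs. machine-generated" detector
# retrained on its own labels. synthetic data, not a real spam corpus.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_data(n):
    y = rng.integers(0, 2, n)                             # 0 = real, 1 = spam
    X = rng.normal(loc=y[:, None], scale=1.0, size=(n, 5))  # noisy features
    return X, y

X_test, y_test = make_data(5000)

X0, y0 = make_data(30)                       # tiny, noisy seed set
self_clf = LogisticRegression().fit(X0, y0)  # generation 0

for gen in range(1, 6):
    X, y = make_data(2000)
    self_clf = LogisticRegression().fit(X, self_clf.predict(X))  # its own labels
    true_clf = LogisticRegression().fit(X, y)                    # curated labels
    print(gen,
          round(self_clf.score(X_test, y_test), 3),  # tends to stay near the seed model
          round(true_clf.score(X_test, y_test), 3))  # near the best this toy data allows
```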

it's also funny how nakedly straightforward the business proposition for SEO spamming is, compared to literally any other use case for "AI". you pay $X to use the tool, you generate Y articles which reach the top of Google results, you collect $(X+P) in click revenue, and you do it again. meanwhile "real" businesses are trying to gauge exactly what single-digit percentage of bullshit they can afford to get away with putting in their support systems or codebases, while trying to avoid situations like being forced to give refunds to customers under a policy your chatbot hallucinated (archive.org link) or having to issue an apology for generating racially diverse Nazis (archive).
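the napkin math, for the spreadsheet-inclined (every number below is invented):

```python
# the grift arithmetic from above: pay X, publish Y articles, collect X + P.
def spam_profit(tool_cost_x: float, articles_y: int, revenue_per_article: float) -> float:
    # returns P, the profit left over after the tool subscription
    return articles_y * revenue_per_article - tool_cost_x

print(spam_profit(200.0, 1000, 0.50))  # P = 300.0: positive, so run it again
```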

[–] ebu@awful.systems 20 points 7 months ago* (last edited 7 months ago) (1 children)

actually, i don't think possessing the ability to send email entitles you to """debate""" with anyone who publishes material disagreeing with you or the way your company runs, and i'm pretty sure responding with a (polite) "fuck off" is a perfectly reasonable approach to the kinds of people who believe they have an inalienable right to argue with you

[–] ebu@awful.systems 21 points 7 months ago

i absolutely love the "clarification" that an email address is PII only if it's your real, primary, personal email address, and any other email address (that just so happens to be operated and used exclusively by a single person, even to the point of uniquely identifying that person by that address) is not PII

[–] ebu@awful.systems 14 points 7 months ago

i was impressed enough with kagi's by-default deranking/filtering of seo garbage that i got a year's subscription a while back. good to know that this is what that money went to. suppose i'll ride out the subscription (assuming they don't start injecting ai garbage into search before then) and then find some other alternative

switching topics, but i do find it weird how the Brave integration stuff (which i also only found out about after i got the subscription) hadn't... bothered me as much? to be exceptionally clear, fuck Brendan Eich and Brave -- the planet deserves fewer bigots, crypto grifters, and covid conspiracists -- but i can't put my finger on why Kagi paying to consume Brave's search APIs just doesn't cause as much friction with me. honestly it could be the fact that when i pay for Kagi it doesn't feel like i'm bankrolling Eich and his ads-as-a-service grift, whereas the money for my subscription is definitely paying for Vlad to ~~reply-guy into the inboxes of bloggers who are critical of the way Kagi operates~~ correct misunderstandings about Kagi.

[–] ebu@awful.systems 22 points 7 months ago* (last edited 7 months ago) (1 children)

Actually, that email exchange isn’t as combative as I expected.

i suppose the CEO completely barreling forward past multiple attempts to refuse conversation, while NOT screaming slurs at the person they're attempting to lecture, is, in some sense, strictly better than the alternative

[–] ebu@awful.systems 6 points 8 months ago

my pet conspiracy theory is that the two streamers had installed cheats at one point in the past and compromised their systems that way. but i have no evidence to base that on, just seems more plausible to me than "a hacker discovered an RCE in EAC/Apex and used it during a tournament to install game cheats on two people and [appear to] do nothing else"
