Rivalarrival

[–] Rivalarrival 4 points 2 months ago

Yes, dangers exist with third-party repairs.

Refusal or even simple failure to provide critical repair data to the end user or their agent denies the end user the ability to make an informed decision about repairs.

The company should be liable for all damages from a botched third-party repair unless it provides the end user with complete specifications and unrestricted access to the device, so they can make informed decisions about repairs.

[–] Rivalarrival 3 points 2 months ago* (last edited 2 months ago)

Proprietary information and corporate secrets cease to exist once they are incorporated into a device and sold to the end user. That information now belongs to the end user, who will continue to need it even if the company goes out of business or refuses service to the owner of the device.

Any attempt to conceal that information from the end user should make the company liable for any failed repair performed by any individual, including harm arising from that failed repair. The only way to avoid that liability is to release all information to the end user, so they are fully informed when making a repair decision.

[–] Rivalarrival 7 points 2 months ago

I've read dozens upon dozens of claims that social media was harmful, and always dismissed them as Karen nonsense, "think of the children" bullshit, or "kids these days" boomer ignorance.

Today, for the first time, I have started to believe there might actually be some truth to the idea.

[–] Rivalarrival 11 points 2 months ago

I need this exact pin. You can save some money and make it without any moving parts.

[–] Rivalarrival 11 points 2 months ago* (last edited 2 months ago)

Cox Communications asked a court to block Rhode Island's plan for distributing $108.7 million in federal funding for broadband deployment.

Cox Communications should be fined $108.7 million for vexatious litigation, and be prohibited from providing any pay or compensation to its C-suite until that fine is paid in full.

Copies of that order should be sent by certified mail to every corporate officer and board member of Comcast, Charter, and Spectrum.

[–] Rivalarrival 1 points 2 months ago

The "collapse" you're talking about is a reduction in the diversity of the output, which is exactly what we should expect when we impart a bias toward obviously correct answers, and away from obviously incorrect answers.

Further, that criticism is based on closed-loop feedback, where the LLM trains itself only on its own outputs.

I'm talking about open-loop feedback, where it also evaluates the responses from the other party.

Moreover, the studies that criticism comes from are based primarily on image-generation AIs, not LLMs. Image generation is highly subjective; there is no definitively "right" or "wrong" output, just whether it appeals to the specific observer. An image generator would need to tailor itself to that specific observer.

LLM sessions deal with far more objective content.

A functional definition of insanity is doing the same thing over and over and expecting different results. The inability to consider its previous interactions denies it the ability to learn from its previous behavior. The idea that AIs must never be allowed to train on their own data is functionally insane.
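Here's a toy sketch of the distinction, with everything in it invented for illustration: the "model" is nothing but a probability distribution over three candidate answers. Closed-loop, its output diversity collapses onto an arbitrary answer; open-loop, with an outside party grading each batch, it collapses toward the correct answer, which is exactly the bias we want.

```python
import random
from collections import Counter

# Toy illustration (everything here is made up): the "model" is just a
# categorical distribution over candidate answers to a single question.
ANSWERS = ["3", "4", "5"]
CORRECT = "4"

def sample(dist, n=20):
    return random.choices(ANSWERS, weights=[dist[a] for a in ANSWERS], k=n)

def refit(samples):
    counts = Counter(samples)
    return {a: counts[a] / len(samples) for a in ANSWERS}

# Closed loop: refit only on the model's own outputs. Sampling noise
# compounds round over round, and diversity tends to collapse onto
# whichever answer random drift happens to favor.
dist = {"3": 0.3, "4": 0.4, "5": 0.3}
for _ in range(100):
    dist = refit(sample(dist))
print("closed loop:", dist)

# Open loop: the same outputs, but each batch is graded by an outside
# party, and answers graded correct are upweighted before refitting.
dist = {"3": 0.3, "4": 0.4, "5": 0.3}
for _ in range(100):
    batch = sample(dist)
    graded = batch + [a for a in batch if a == CORRECT] * 2  # 3x weight when right
    dist = refit(graded)
print("open loop:  ", dist)  # converges toward the externally graded answer
```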

[–] Rivalarrival 1 points 2 months ago* (last edited 2 months ago) (2 children)

> Also, with LLMs there is no "next time"; it's a completely static model.

It's only a completely static model if it is not allowed to use its own interactions as training data. If it is allowed to use the data acquired from those interactions, it stops being a static model.
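To make the static/non-static line concrete, here's a deliberately silly toy (no resemblance to any real LLM): the "model" is a lookup of learned responses. Freeze it and every future answer is identical; let its interactions flow back in and its behavior changes.

```python
# Toy contrast (invented for illustration, not how production LLMs work).
class ToyModel:
    def __init__(self):
        self.responses = {"2+2": "5"}  # starts out confidently wrong

    def generate(self, prompt: str) -> str:
        return self.responses.get(prompt, "I don't know")

    def update(self, prompt: str, correction: str) -> None:
        self.responses[prompt] = correction  # "retraining" in miniature

static = ToyModel()
print(static.generate("2+2"))    # '5' -- and '5' on every future call, too

adaptive = ToyModel()
print(adaptive.generate("2+2"))  # '5'
adaptive.update("2+2", "4")      # interaction data flows back into the model
print(adaptive.generate("2+2"))  # '4' -- no longer static
```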

Kids do learn elementary arithmetic by rote memorization. Genuine number sense doesn't actually develop until somewhere around 3rd to 5th grade, and even then, we don't place much value on it at that age. We are taught to memorize the multiplication table, for example, because simply knowing the table is far more computationally efficient than re-deriving it every time it's needed. That rote memorization is mimicry: the child is simply spitting out a previously learned response.
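As a toy sketch of that trade-off (invented for illustration), compare looking up a memorized table against re-deriving a product from scratch by repeated addition:

```python
# Rote memory: the whole table is precomputed, so answering is a lookup.
TABLE = {(a, b): a * b for a in range(13) for b in range(13)}

def by_rote(a: int, b: int) -> int:
    return TABLE[(a, b)]  # mimicry: repeat the previously learned response

def by_derivation(a: int, b: int) -> int:
    total = 0
    for _ in range(b):  # multiplication rebuilt as repeated addition
        total += a
    return total

assert by_rote(7, 8) == by_derivation(7, 8) == 56
```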

Remember: LLMs are currently toddlers. They are toddlers with excellent grammar, but they are toddlers.

Remember also that simple mimicry is an incredibly powerful problem solving method.

[–] Rivalarrival 2 points 2 months ago (1 children)

Not going to link it, but in the video I saw, there were two distinct holes behind the hole he was using. So, either he was using a urethra, or she had a second vagina.

[–] Rivalarrival -2 points 2 months ago (4 children)

I can see why you would think that, but to see how it actually goes with a human, look at the interaction between a parent and child, or a teacher and student.

"Johnny, what's 2+2?"

"5?"

"No, Johnny, try again."

"Oh, it's 4."

Turning Johnny into an LLM: the next time someone asks, he might not remember that the answer is 4, but he does remember that "5" consistently gets him a "that's wrong" response. So does "3".

But the only way he knows that 5 and 3 get a negative reaction is by training on his own data, learning from his own mistakes.

He becomes a better and better mimic, which takes him from a toddler's level of intelligence up to about a 5th grader's.
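A toy version of that process, invented for illustration: this "Johnny" never stores the fact that the answer is 4, only a record of which of his own past answers drew a rejection.

```python
import random

# The record of rejected guesses is Johnny's own data; without it,
# every attempt would be attempt one.
CANDIDATES = ["3", "4", "5"]
TRUTH = "4"

known_wrong = set()
for attempt in range(1, 10):
    # Train on his own data: skip answers his own history says get rejected.
    options = [a for a in CANDIDATES if a not in known_wrong]
    guess = random.choice(options)
    if guess == TRUTH:
        print(f"attempt {attempt}: {guess} -- accepted")
        break
    known_wrong.add(guess)  # learning from his own mistake
    print(f"attempt {attempt}: {guess} -- 'No, try again.'")
```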

[–] Rivalarrival -4 points 2 months ago* (last edited 2 months ago) (9 children)

It needs to be retrained on the responses it receives from its conversation partner. Its previous output provides context for its partner's responses.

It recognizes when it is told that it is wrong. It is fed data showing that certain outputs often invite "you're wrong" feedback from its partners, and it is instructed to minimize such feedback.

It is not (yet) developing true intelligence. It is simply learning to bias its responses in such a way that its audience doesn't immediately call it a liar.
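One plausible shape for that feedback signal, sketched with made-up names and no real training API: scan a conversation log, treat replies like "you're wrong" as negative reward on the model's previous turn, and emit (output, reward) pairs that a fine-tuning pass could consume.

```python
import re

# Everything below is hypothetical -- a sketch of the mechanism, not any
# real library's API. Replies matching CORRECTION count as negative
# reward on whatever the model said immediately before them.
CORRECTION = re.compile(r"\b(you'?re wrong|that'?s wrong|incorrect|not true)\b", re.I)

def label_turns(transcript):
    """transcript: list of {'role': 'assistant' | 'user', 'text': str} dicts."""
    labeled = []
    for prev, reply in zip(transcript, transcript[1:]):
        if prev["role"] == "assistant" and reply["role"] == "user":
            # Penalize outputs that invited a correction; mildly reward the rest.
            reward = -1.0 if CORRECTION.search(reply["text"]) else 0.1
            labeled.append((prev["text"], reward))
    return labeled

log = [
    {"role": "assistant", "text": "2 + 2 = 5."},
    {"role": "user", "text": "That's wrong."},
    {"role": "assistant", "text": "You're right, 2 + 2 = 4."},
    {"role": "user", "text": "Thanks."},
]
print(label_turns(log))
# [('2 + 2 = 5.', -1.0), ("You're right, 2 + 2 = 4.", 0.1)]
```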
