Perspectivist

joined 2 weeks ago
[–] Perspectivist@feddit.uk 1 points 8 hours ago (1 children)

Never even heard of it.

[–] Perspectivist@feddit.uk -3 points 13 hours ago

You opened with a flat dismissal, followed by a quote from Reddit that didn’t explain why horseshoe theory is wrong - it just mocked it. That’s not an argument, that’s posturing.

From there, you shifted into responding to claims I never made. I didn’t argue that AI is flawless, inevitable, or beyond criticism. I pointed out that reflexive, emotional overreactions to AI are often as irrational as the blind techno-optimism they claim to oppose. That’s the context you ignored.

You then assumed what I must believe, invited yourself to argue against that imagined position, and finished with vague accusations about me “pushing acceptance” of something people “clearly don’t want.” None of that engages with what I actually said.

[–] Perspectivist@feddit.uk 4 points 14 hours ago

I often ask ChatGPT for a second opinion, and the responses range from “not helpful” to “good point, I hadn’t thought of that.” It’s hit or miss. But just because half the time the suggestions aren’t helpful doesn’t mean it’s useless. It’s not doing the thinking for me - it’s giving me food for thought.

The problem isn’t taking into consideration what an LLM says - the problem is blindly taking it at its word.

[–] Perspectivist@feddit.uk 0 points 15 hours ago* (last edited 15 hours ago)

It doesn’t understand things the way humans do, but saying it doesn’t know anything at all isn’t quite accurate either. This thing was trained on the entire internet and your grandma’s diary. You simply don’t absorb that much data without some kind of learning taking place.

It’s not a knowledge machine, but it does have a sort of “world model” that’s emerged from its training data. It “knows” what happens when you throw a stone through a window or put your hand in boiling water. That kind of knowledge isn’t what it was explicitly designed for - it’s a byproduct of being trained on data that contains a lot of correct information.

It’s not as knowledgeable as the AI companies want you to believe - but it’s also not as dumb as the haters want you to believe either.

[–] Perspectivist@feddit.uk 2 points 15 hours ago

How is "not understanding things" preventing an LLM from bringing up a point you hadn't thought of before?

[–] Perspectivist@feddit.uk -2 points 15 hours ago (1 children)

There’s a certain irony in people reacting in an extremely predictable way - spewing hate and criticism the moment someone mentions AI - while seemingly not realizing that they’re reflexively responding to a prompt without any real thought, just like an LLM.

A tool isn’t bad just because it doesn’t do what you thought it would do. You just take that into account and adjust how you use it. A hammer isn’t a scam just because it can’t drive in screws.

[–] Perspectivist@feddit.uk 2 points 15 hours ago (8 children)

Anyone who has an immediate kneejerk reaction the moment someone mentions AI is no better than the people they’re criticizing. Horseshoe theory applies here too - the most vocal AI haters are just as out of touch as the people who treat everything an LLM says as gospel.

[–] Perspectivist@feddit.uk 3 points 1 day ago (1 children)

Yeah, in the motor settings - where it's defined as using an external sensor - there’s also a setting called “Speed meter signal,” which I changed from 1 to 2. I’m about 85% sure that refers to the number of magnet passes per wheel revolution.

[–] Perspectivist@feddit.uk 4 points 1 day ago (1 children)

It's not a motorbike though - that's the point. The article is about a 250-watt, 25 km/h (15 mph) pedelec. The misleading title just wants you to assume the opposite.

[–] Perspectivist@feddit.uk 2 points 1 day ago (1 children)

Here's the part that covers it.

[–] Perspectivist@feddit.uk 1 points 1 day ago (1 children)

I solved the issue by adding a second wheel magnet for the speed sensor. Not sure if the issues you're having are related to mine, but it's an easy and cheap thing to try. You might need to tell the motor controller about the second magnet as well though, or it'll think you're going twice as fast.

[–] Perspectivist@feddit.uk 3 points 1 day ago* (last edited 1 day ago) (3 children)

I figured it out!

I suspected the previous owner might have messed with the motor settings through the Eggrider app, so I was looking for the default values online to restore it to factory settings. By pure chance, someone had made a side note describing my exact symptoms and later followed up with a solution: add a second wheel magnet. Apparently at low speeds the time between pulses gets too long, which confuses the motor controller and cuts the power. That's why it never happened at higher speeds. Still doesn't explain how my other, almost identical bike gets by with just one, though.

 

Now how am I supposed to get this to my desk without either spilling it all over or burning my lips trying to slurp it here? I've been drinking coffee for at least 25 years and I still do this to myself at least three times a week.

146
submitted 5 days ago* (last edited 5 days ago) by Perspectivist@feddit.uk to c/til@lemmy.world
 

A kludge or kluge is a workaround or makeshift solution that is clumsy, inelegant, inefficient, difficult to extend, and hard to maintain. Its only benefit is that it rapidly solves an important problem using available resources.

 

I’m having a really odd issue with my e‑fatbike (Bafang M400 mid‑drive). When I’m on the two largest cassette cogs (lowest gears), the motor briefly cuts power ~~once per crank revolution~~ when the wheel magnet passes the speed sensor. It’s a clean on‑off “tick,” almost like the system thinks I stopped pedaling for a split second.

I first noticed this after switching from a 38T front chainring to a 30T. At that point it only happened on the largest cog, never on the others.

I figured it might be caused by the undersized chainring, so I put the original back on and swapped the 1x10 drivetrain for a 1x11, going from a 36T largest cog to a 51T. But no - the issue persists. Now it happens on the two largest cogs. Whether I’m soft‑pedaling or pedaling hard against the brakes doesn’t seem to make any difference. It still “ticks” once per revolution.

I’m out of ideas at this point. Torque sensor, maybe? I have another identical bike with a 1x12 drivetrain and an 11–50T cassette, and it doesn’t do this, so I doubt it’s a compatibility issue. Must be something sensor‑related? With the assist turned off everything runs perfectly, so it’s not mechanical.

EDIT: Upon further inspection, the moment the power cuts out seems to sync perfectly with the wheel speed magnet passing the sensor on the chainstay, so I'm about 95% sure a faulty wheel speed sensor is the issue. I've ordered a spare part, so I can't confirm yet, but unless there's a second update, that's what solved it.

EDIT2: I figured it out. It wasn't the wheel sensor itself, but it was related: I added a second spoke magnet for that sensor on the opposite side of the wheel and the problem went away. Apparently at low speeds the time between pulses got too long and the power to the motor was cut. On top of that, I used the Eggrider app to tweak the motor settings so the controller knows there are two magnets instead of one: under "Bafang basic settings," I changed "Speed meter signal" from 1 to 2.
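To see why one magnet wasn't enough, here's a rough back-of-the-envelope sketch. The tire circumference and the controller's timeout are assumptions I made up for illustration (Bafang doesn't publish the actual cutoff), but the arithmetic shows the idea: with one magnet at walking-pace speeds, the gap between pulses can easily exceed a second or two, while a second magnet halves it.

```python
# Rough illustration (not Bafang firmware): why a single wheel magnet can
# starve the controller of speed pulses at low speed.
# Both constants below are assumptions for the sake of the example.

WHEEL_CIRCUMFERENCE_M = 2.3   # assumed ~26" fat tire rollout
TIMEOUT_S = 1.5               # hypothetical "no pulse -> rider stopped" cutoff

def pulse_interval_s(speed_kmh: float, magnets: int) -> float:
    """Seconds between speed-sensor pulses at a given road speed."""
    speed_ms = speed_kmh / 3.6
    return WHEEL_CIRCUMFERENCE_M / speed_ms / magnets

for speed in (4, 6, 10):
    for magnets in (1, 2):
        gap = pulse_interval_s(speed, magnets)
        status = "power CUT" if gap > TIMEOUT_S else "ok"
        print(f"{speed} km/h, {magnets} magnet(s): {gap:.2f}s between pulses -> {status}")
```

With these assumed numbers, only the 4 km/h single-magnet case trips the timeout, which matches the symptom: cutouts only in the lowest gears, gone once the second magnet doubles the pulse rate.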

 

This would be useful information to have ahead of the next election.

I'm also surprised at how little Yle, at least, has reported on this. It almost feels like deliberate secrecy.

 

I figured I’d give this chisel knife a try, since it’s not like I use this particular knife for its intended purpose anyway but rather as a general purpose sharpish piece of steel. I’m already carrying a folding knife and a Leatherman, so I don’t need a third knife with a pointy tip.

 

I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
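A toy model makes the "statistical pattern machine" point concrete. Real LLMs learn billions of parameters over subword tokens; the bigram counter below just tallies which word follows which in a tiny made-up corpus, then greedily continues a prompt. The corpus and greedy decoding are invented for the example, but the principle is the same: it produces plausible continuations, and any "facts" in the output are just patterns inherited from the training text.

```python
# Toy "continue the prompt with plausible text" model: a bigram counter.
# The corpus is invented for illustration; a real LLM is vastly larger
# and operates on learned subword tokens, not whole words.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def continue_prompt(word: str, n: int = 4) -> list[str]:
    """Greedily append the most frequent next word, n times."""
    out = [word]
    for _ in range(n):
        if out[-1] not in counts:
            break
        out.append(counts[out[-1]].most_common(1)[0][0])
    return out

print(" ".join(continue_prompt("the")))
```

The output is fluent-looking by construction, yet the model "knows" nothing beyond co-occurrence statistics - which is exactly the distinction being made above, just at a microscopic scale.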

 

I was delivering an order for a customer and saw some guy messing with the bikes on a bike rack using a screwdriver. Then another guy showed up, so the first one stopped, slipped the screwdriver into his pocket, and started smoking a cigarette like nothing was going on. I was debating whether to report it or not - but then I noticed his jacket said "Russia" in big letters on the back, and that settled it for me.

That was only the second time in my life I’ve called the emergency number.
