backgroundcow

joined 2 years ago
[–] backgroundcow@lemmy.world 11 points 2 days ago* (last edited 2 days ago)

I don't get this. Why are so many countries willing to play Trump's game? It seems a horrible long-term strategy to allow one country to hold global trade hostage this way. Shouldn't we negotiate between ourselves, i.e., between the affected countries?

The idea should be: for us, exports of X, Y, and Z are taking a hit, and for you, A, B, and C. So, let's lower our tariffs in these respective areas to soften the blow to the affected industries. That way, we would partly make up for, say, lost exports to the US for cars, at the cost of additional competition on the domestic market for, say, soybeans; and vice versa; evening out the effects as best we can.

With such agreements in place, we can return to Trump from a stronger position and say: we are willing to negotiate, but not under threat. We will do nothing until US tariffs are back to the levels before this started. But, at that point, we will be happy to discuss the issues you appear to see with trade imbalances and tariffs, so that we can find a mutually beneficial agreement going forward.

Something like this would send a message that would do far more good towards trade stability for the future.

[–] backgroundcow@lemmy.world 1 points 4 days ago

No shade on people trying to make sustainable choices, but if the solution to the climate crisis is us trusting everyone to "get with the program" and pick the right choice, while unsustainable alternatives sit right there beside them at lower prices, then we are truly doomed.

What the companies behind these foods and products don't want to talk about is that to get anywhere we have to target them. It shouldn't be a controversial standpoint that: (i) all products need to cover their true full environmental and sustainability costs, with the money going back into investments in the environment counteracting the negative impacts; (ii) we need to regulate, regulate, and regulate how companies are allowed to interact with the environment and society, and these limits must apply worldwide. There needs to be careful follow-up to ensure these rules are followed: with consequences for the individuals who make the decisions to break them AND "death sentences" (i.e., complete disbandment) for whole companies that repeatedly overstep.

[–] backgroundcow@lemmy.world 10 points 2 weeks ago (3 children)

What we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data

Prove to me that this isn't exactly how the human mind -- i.e., "real intelligence" -- works.

The challenge with asserting how "real" the intelligence-mimicking behavior of LLMs is, is not to convince us that it "just" is the result of cold, deterministic statistical algorithms running on silicon. This we know, because we created them that way.

The real challenge is to convince ourselves that the wetware electrochemical neural unit embedded in our skulls, which evolved through a fairly straightforward process of natural selection to improve our odds at surviving, isn't relying on statistical models whose inner principles of working are, essentially, the same.

All these claims that human creativity is so outstanding that it "obviously" will never be recreated by deterministic statistical models that "only" interpolate knowledge picked up from observing human output into new contexts: I just don't see it.

What human invention, art, or idea was so truly, undeniably, completely new that it cannot have sprung out of something coming before it? Even the bloody theory of general relativity--held as one of the pinnacles of human intelligence--has clear connections to what came before. If you read Einstein's works, he is actually very good at explaining how he worked it out in increments from earlier models and ideas - "what happens with a meter stick in space", etc.: i.e., he was very good at using the tools we have to systematically bring our understanding from one domain into another.

To me, the argument in the linked article reads a bit as "LLM AI cannot be 'intelligence' because when I introspect I don't feel like a statistical machine". This seems about as sophisticated as the "I ain't no monkey!" counter-argument against evolution.

All this is NOT to say that we know that LLM AI = human intelligence. It is a genuinely fascinating scientific question. I just don't think we have anything to gain from the "I ain't no statistical machine" line of argument.

[–] backgroundcow@lemmy.world 9 points 2 weeks ago

That's perfect. You already know your lines!

[–] backgroundcow@lemmy.world 2 points 4 weeks ago

After a lot of sysvinit experience, the transition to setting up my own systemd services has been brutal. What finally clicked for me was that I had a habit of building mini-services out of shell scripts, and systemd goes out of its way to deliberately break those: it wants a single stable process to monitor, and if it sniffs out that you are doing sketchy things that fork in ways it disapproves of, it is going to shut the whole thing down.
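For what it's worth, a minimal sketch of the kind of unit that plays along with this (file and script names are made up for illustration): keep the shell script in the foreground and let systemd's default `Type=simple` track it directly, instead of letting the script daemonize itself the sysvinit way.

```ini
# /etc/systemd/system/mywatcher.service  -- hypothetical example unit
[Unit]
Description=Shell-script worker kept in the foreground
After=network.target

[Service]
# Type=simple is the default: systemd supervises the ExecStart process
# itself, so the script must NOT fork into the background on its own.
Type=simple
ExecStart=/usr/local/bin/mywatcher.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

If the script genuinely has to daemonize, `Type=forking` (usually together with `PIDFile=`) tells systemd which process to follow instead of killing the "stray" children.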

[–] backgroundcow@lemmy.world 4 points 1 month ago

It is fully possible, quite likely even, for models to be "more accurate than humans" on average while at the same time suffering occasional "accuracy collapses".

[–] backgroundcow@lemmy.world 2 points 1 month ago

Crush both apples with the blunt side of the knife. Divide applesauce equally.

[–] backgroundcow@lemmy.world 3 points 2 months ago* (last edited 2 months ago)

"Absolutely, rest up" is more than sufficient in 99 percent of cases

Internal monologue: "But wait, will it come off as impolite if my reply is this short? I better add something about how I'm sad to hear that they are sick. And maybe also something that I hope they will get better soon. Hmm... how do I say that without sounding like I expect them to be better soon-- that they can and should feel allowed to recover at their own pace? But now it sounds as if we don't need them at work-- I also want them to feel missed. Also, is there a risk they take 'rest up' wrong?, as if it is their fault they are sick because they haven't rested enough?-- I'd better soften up that formulation. Then, how do I start this email? 'Dear x,' seems too formal, maybe 'Hey,' -- no, that sounds like 'Hey, listen up!'; maybe I'll just skip the greeting to make it feel more like a casual conversation. Do I still sign the email? With 'Regards'? 'Best regards'? 'Sincerely'? 'With wishes of swift recovery'? Should I also cut the email footer to make it seem less formal? What if they need to forward this to show that they have my permission? In that case the formal footer is probably useful..." etc. etc.

[–] backgroundcow@lemmy.world 1 points 2 months ago

I very much understand wanting to have a say against our data being freely harvested for AI training. But this article's call for a general opt-out of interacting with AI seems a bit regressive. Many aspects of this and other discussions about the "AI revolution" remind me of the Mitchell and Webb skit on the start of the bronze age: https://youtu.be/nyu4u3VZYaQ

[–] backgroundcow@lemmy.world 25 points 2 months ago

John Oliver had a segment on this that may help convince people that it is real: https://youtu.be/3kEpZWGgJks
