[-] Max_P@lemmy.max-p.me 2 points 7 hours ago

Because humans don't also take inspiration from others' work and unconsciously repeat parts of songs they've heard before, possibly decades ago. Never happens. Never. Humans don't profit from books they've read and apply them to their career. Humans don't profit from watching other humans do the thing and then learning to do it themselves.

All AI does is the same thing, but at ridiculous scale and ridiculous speed. We shouldn't hold back progress because capitalism dictates that we shouldn't put people out of jobs. We need to prepare for the future where there are no jobs because AI has replaced all of them.

[-] Max_P@lemmy.max-p.me 3 points 11 hours ago

Ask your admin to turn it off, or if you're the admin, turn it off.

They really went with the worst possible way to implement this, in that it mangles the post to rewrite all images to the image proxy, so it's not giving you a choice. If you want the original link you have to reprocess it to strip the proxy. It's like when they thought it was a good idea to store the data HTML-encoded, so non-web clients had to try to undo all of it, and that process is lossy. It should be up to the clients to add the proxy as needed and if desired. Never mangle user data for storage; always reprocess it as needed and cache it if the processing is expensive.

Now you edit a post and your links are rewritten to the proxy, and if you save it again, you're proxying the proxy. Just like when they applied the HTML processing on save: if you edited a post and saved it again, it would become double-encoded.

Personally I leave it off and let Tesseract do it instead when it renders the images. That's the right way to do it. If the user wants a fresh copy because it's a dynamic image, they can request one on demand instead of being forced into it. And it actually works retroactively, unlike the Lemmy server, which only does it for new posts.
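
Roughly what I mean, as a TypeScript sketch of render-time proxying; the endpoint path and function names here are made up for illustration, not Lemmy's actual API:

// Render-time proxying: the stored markdown keeps the original URLs,
// and the client decides at display time whether to route them through
// the instance's image proxy. Endpoint and names are illustrative only.
const PROXY_ENDPOINT = "https://example.instance/api/v3/image_proxy";

function proxify(originalUrl: string): string {
  // Never double-proxy: if the URL already points at the proxy, leave it alone.
  if (originalUrl.startsWith(PROXY_ENDPOINT)) return originalUrl;
  return `${PROXY_ENDPOINT}?url=${encodeURIComponent(originalUrl)}`;
}

// Only touch the rendered output, never the stored post body.
function proxifyRenderedPost(container: HTMLElement, useProxy: boolean): void {
  if (!useProxy) return;
  for (const img of Array.from(container.querySelectorAll("img"))) {
    img.src = proxify(img.src);
  }
}

Done that way, the stored data never changes, editing and re-saving can't double-proxy anything, and turning the feature off is just a render flag.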

[-] Max_P@lemmy.max-p.me 36 points 11 hours ago

API documentation isn't a tutorial; it's there to tell you what the arguments are, what it does, what to expect as the output, and, just generally, what's available.

I actually have the opposite problem to you: it infuriates me when a project's documentation is purely a bunch of examples, and then you have to guess if you want to do anything outside the tutorial's paved path. Tell me everything that's available so I can piece together what I need; I don't want that info buried in chapter 12 of the example of building a web store. I've been coding for nearly two decades now, I'm not going to follow a shopping cart tutorial on the off chance that's where you explain how the framework defines many-to-many relationships.

I believe an ideal world has both covered: you need full API documentation that's straight to the point, so experienced people know about all the options and functions available, but also a bunch of examples and a tutorial for those who are new, need to get started, and are generally learning how to use the library.

Your case is probably a bit atypical, as PyTorch and AI stuff in general is inherently pretty complex. It likely assumes you know your calculus and linear algebra and the like, which makes the API docs extra dense.

[-] Max_P@lemmy.max-p.me 10 points 1 day ago

And also with the atomic/immutable distros, the switch is practically instant, so it's not even like it forces you to watch a spinning circle for 20 minutes when you turn off your computer. You reboot and the apps all start clean with the right library versions.

It's rare, but I've seen software trash itself because the newly spawned process talks a different protocol, which can lead to crashes or odd behavior that eventually causes a crash. Or it tries to read a file mid-update. Kernel updates can make it so that when you plug in a USB stick, nothing happens because the driver's gone. Firefox, as you mentioned. Chromium will mostly tolerate it, but it can get very weird over time.

The risk is non-zero, so when you target end users who don't want to have to troubleshoot, it's safer to just do offline updates. Especially with Flatpaks now, you get those updated online, and really it's only system components left where you don't mind delaying when the update takes effect.

If you're new to Linux, and everyone told you that you can just update without rebooting, and then you run into weird Firefox glitches, it just looks bad.

[-] Max_P@lemmy.max-p.me 3 points 1 day ago

This tool is great for seeing when remote instances will attempt to send activity to you and how far behind you are: https://phiresky.github.io/lemmy-federation-state

There's an exponential backoff, so it can sometimes take hours before you start receiving activity again; it's nice to know when to expect it.
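
For a rough idea of how fast those delays grow, here's a sketch; the base interval and cap are made-up values for illustration, not Lemmy's actual federation settings:

// Illustrative exponential backoff: the delay doubles with each consecutive
// failed delivery, up to a cap. The 60-second base and 24-hour cap are
// assumptions for the example, not what Lemmy actually uses.
function backoffDelaySeconds(failureCount: number): number {
  const baseSeconds = 60;
  const maxSeconds = 24 * 60 * 60;
  return Math.min(baseSeconds * 2 ** failureCount, maxSeconds);
}

// After 10 consecutive failures: 60 * 2^10 = 61440 s, about 17 hours,
// which is why a recovering instance can take most of a day to catch up.
console.log(backoffDelaySeconds(10));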

[-] Max_P@lemmy.max-p.me 2 points 1 day ago

The stability of a distro usually has more to do with API and ABI stability than stability in terms of reliability. And a "stable" system can be unreliable.

That's why RHEL forks are said to be bug-for-bug compatible: you don't know whether fixing a bug could have a cascading side effect on somebody's very critical system.

Arch has been nothing but reliable for me. Does that mean it never needs fixing because the config format of some daemon changed, or Python or Node.js got updated and now my project doesn't build? Absolutely not. But for me, newer versions are usually better even if they need some fixing, and I like doing it piecemeal rather than all at once every couple of years.

Stable distributions are well loved for servers because you don't want to update 2000 servers and then lose millions because your app isn't compatible with the latest Ruby version. You need to be able to reliably install and reinstall the same distro version and the same packages at the same versions over and over. I can't deal with needing a new server up urgently and then getting stuck fixing a bunch of stuff because I got a newer version of something.

I use multiple distros regularly, for different purposes. Although lately Docker has significantly reduced my need for stable distros, and I lean more on rolling distros as the host.

[-] Max_P@lemmy.max-p.me 1 points 1 day ago

Make Docker depend on the mount. You can simply use systemctl edit docker.service and then add:

[Unit]
Requires=path-to-your-smb.mount
After=path-to-your-smb.mount

Then it will guarantee it's mounted by the time Docker starts.
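
If you don't want to look up the exact mount unit name (systemctl list-units --type=mount will show it), systemd can also derive the dependency from the path itself; the path below is just a placeholder for wherever your share is mounted:

[Unit]
RequiresMountsFor=/mnt/your-smb-share

Either way, Docker won't start until the share is actually mounted.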

[-] Max_P@lemmy.max-p.me 14 points 1 day ago

Lemmy wasn't ready, and is still mostly not ready, for a mass Reddit exodus. Nobody anticipated the Reddit API fiasco, and the large influx of users exposed a ton of bugs and federation issues.

But it's not a failure, yet. I'm sure Reddit had growing pains after the Digg exodus too. Some platforms take years to become popular. Reddit was small for quite a while before it became more mainstream.

In a way, Lemmy feels to me a bit like what Reddit must have been a few years before I joined it 12 years ago.

The problem is the expectation that Lemmy could replace Reddit overnight, and would immediately be a 1:1 replacement.

Although personally I like it more here, and I get more interaction than I did on Reddit. But I am a tech nerd, so.

[-] Max_P@lemmy.max-p.me 30 points 1 day ago

Strictly speaking, if you buy it and it comes with sources under the GPL, that is perfectly okay. The principle of free software isn't that everything is free of charge, but rather that when you obtain software you should be free to access its source, customize it for your needs, and share those modifications with other people.

That does make it hard to actually make people pay for it, but it's not like people don't pirate proprietary software anyway. The presumption is that if you're honest and a good person, you will pay the author for the software you like and want to keep using.

It also doesn't violate the GPL to bundle proprietary apps alongside GPL ones. SteamOS, for example, comes with Steam and other proprietary Valve stuff.

But I would definitely expect it not to be popular, and for most of the open-source and Linux communities to want nothing to do with it (paying for a programming language? What is this, 1995, when we paid for Delphi?).

[-] Max_P@lemmy.max-p.me 3 points 1 day ago

Ah, it's a laptop, I thought it was a desktop motherboard. That is strange; on a laptop I wouldn't expect people to have to mess with the BIOS at all to make VR work. That's usually a desktop thing, to make sure Resizable BAR is enabled and such.

[-] Max_P@lemmy.max-p.me 8 points 1 day ago

They most likely sent you a new board which happens to have an older BIOS on it. I don't think they try to upgrade them at all; they pick a boxed new board from the warehouse and ship it to you. You can probably just upgrade it again, there's no way this one's newer. Also, double-check that you got the same model of board back, since that could also explain the old BIOS.

I RMA'd an MSI board for which they had released a BIOS update specifically for the bug I encountered, one that can leave the system completely unbootable even after a CMOS reset, and the replacement didn't come with the updated BIOS either. I imagine they expect it'll eventually get updated through Windows.

[-] Max_P@lemmy.max-p.me 18 points 2 days ago

So what's stopping the workers from saying no? If they have labor shortages, then the job market should be favorable to the workers, since you've got to be the most attractive employer to hire, and that would be the employers that don't abuse that law and overwork their employees. It's not like they can force people to work.

Or just go anywhere else in the EU.

179
submitted 2 weeks ago* (last edited 2 weeks ago) by Max_P@lemmy.max-p.me to c/linux@lemmy.ml

Neat little thing I just noticed, might be well known but I'd never heard of it before: apparently, a Wayland window can vsync to at least 3 monitors with different refresh rates at the same time.

I have 3 monitors, at 60 Hz, 144 Hz, and 60 Hz from left to right. I was using glxgears to test something and noticed that when I put the window between two monitors, it syncs to a weird refresh rate of about 193 fps. I stretched it to span all 3 monitors, and it locked at about 243 fps, oscillating gradually between 242.5 and 243.5. So apparently it's mixing the vsync signals together, ensuring every monitor gets a fresh frame while sharing frames when the vsyncs line up.
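
If I had to guess at the math (pure back-of-the-envelope on my part): with the 144 Hz and 60 Hz monitors phase-locked, their vsyncs coincide gcd(144, 60) = 12 times a second, so the unique refresh events add up to 144 + 60 - 12 = 192, close to the ~193 fps across two monitors. Add the second 60 Hz panel, not phase-aligned with the first, and you get roughly 144 + 60 + 60 - 12 - 12 = 240, close to the ~243 fps across all three, with the slow oscillation coming from the clocks drifting in and out of coincidence.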

I knew Wayland was big on "every frame is perfect", but I didn't expect that to work even across 3 monitors at once! We've come a long, long way in the graphics stack. I expected it to sync to the 144Hz monitor and just tear or hiccup on the other ones.

3

All the protections in software, what an amazing idea!

16

It only shows "view all comments", so you can't see the full context of the comment tree.

9
submitted 9 months ago* (last edited 9 months ago) by Max_P@lemmy.max-p.me to c/boostforlemmy@lemmy.world

The current behaviour is correct, as the remote instance is the canonical source, but being able to copy/share a link to your home instance would be nice as well.

Use case: maybe the comment is coming from an instance that is down, or one that you don't necessarily want to link to.

If the user has more than one account, being able to select which one would be nice as well, so maybe a submenu, or a per-account or global setting.

