yak

joined 1 year ago
[–] yak@lmy.brx.io 25 points 3 weeks ago

And will be cancelled in 18 months with 2 weeks' notice.

[–] yak@lmy.brx.io 2 points 1 month ago

They don't have Mozartists.

[–] yak@lmy.brx.io 2 points 1 month ago

This approach sounds good.

I think the correct approach is both, if you have the option.

Most devices accept two name servers. Redundancy is always good, especially for DNS.
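
On a typical client that pair might look like this (addresses invented for illustration):

```
# /etc/resolv.conf (or handed out as DHCP option 6)
nameserver 192.168.1.2   # primary local resolver
nameserver 192.168.1.3   # fallback if the first stops answering
```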

[–] yak@lmy.brx.io 1 points 2 months ago

I've used this list generating package for years now with great results: https://github.com/opencoff/unbound-adblock/tree/master

It is designed to generate blocking lists that can be used with unbound, the DNS resolver. There are even instructions for how to configure unbound, so if you are new to it all you can follow along.

I use the resulting lists in my two local DNS name servers, running unbound.

The way it works is that if a query for a blocked address comes in to one of the local DNS servers, it returns a "domain not found" (NXDOMAIN) result. If the address is not on the block list then it forwards the query on to an internet DNS resolver securely using DoT.

You can gain further control over your DNS results by choosing those upstream resolvers carefully. Quad9, Cloudflare, etc. all offer DoT resolving, along with some further filtering (e.g. for malware), or completely unfiltered DNS if that's what you want.
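
Roughly, the resulting unbound.conf looks something like this (paths and upstream choices are illustrative, not the exact layout from the linked repo):

```
server:
    # blocklist generated by unbound-adblock: each blocked name becomes
    # a local-zone rule, answered locally with "domain not found"
    include: /var/unbound/adblock.conf

    # CA bundle so unbound can verify the DoT upstreams
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
    name: "."
    forward-tls-upstream: yes
    # anything not blocked is forwarded over TLS (port 853)
    forward-addr: 9.9.9.9@853#dns.quad9.net        # Quad9, malware-filtered
    forward-addr: 1.1.1.1@853#cloudflare-dns.com   # Cloudflare
```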

Services like cleanbrowsing.org offer more fine-grained filtering based on categorify.org, useful if you want a family-friendly set of DNS results. You can pay for really fine-tuned results, or there is a free tier which still provides very useful basic categories.

Combining the two forms of filtering, local advert and tracking blocking plus internet content categorisation, seems to be very effective.

I get complaints about too many adverts when my kids are on WiFi away from home. I take it as a compliment.

[–] yak@lmy.brx.io 1 points 2 months ago

> Edit: Forgot to mention! Another minor gripe I have is that my current 1 router / 2 routers-as-AP solution isn’t meshed, so my devices have to be aware of all 3 networks as I walk across my property. It’s a pain that I know can be solved with buying dedicated access points (…right?), but I’d like to know others’ experiences with this, either with OpenWRT, or other network solutions!

This works very well with OpenWRT on each AP and/or router device: use the same ESSID and password combo on each of them, enable WLAN roaming, and enable 802.11r Fast Transition so that your mobile devices hand off quickly from one AP to another as signal strength levels demand. With this enabled you keep the same IP address, and even SSH sessions don't drop when you move from one AP to another; it all happens in the background. As far as the end user is concerned it is all just one big happy wifi network.
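
A minimal sketch of the relevant /etc/config/wireless section on one AP (the SSID, key, and mobility domain are placeholder values; repeat the same values on every AP, and use a wpad build with 802.11r support):

```
config wifi-iface 'default_radio0'
        option device 'radio0'
        option network 'lan'
        option mode 'ap'
        option ssid 'HomeWiFi'              # same ESSID on every AP
        option encryption 'psk2'
        option key 'shared-passphrase'      # same password on every AP
        option ieee80211r '1'               # enable 802.11r Fast Transition
        option mobility_domain '4f57'       # same 4-hex-digit value on every AP
        option ft_over_ds '0'               # roam over the air rather than the wire
        option ft_psk_generate_local '1'    # derive FT keys locally, no key pushing between APs
```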

802.11r is not mesh; that's a separate thing, but you can do it with OpenWRT too. I don't need to because I have ethernet to all my APs, so all the RF bandwidth is available for the last leg from AP to device(s), rather than being used for back-haul from AP to AP through to the router as well.

In your use case I would consider grouping devices into categories and having a different wifi network for each category, with the dhcp and firewall rules set accordingly (sketched below).

VLANs on the ethernet side might also be useful, but it sounds like most of your devices are on WiFi, so it might well be possible to get a "mature" setup without needing that extra complexity.
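
As a rough sketch of one such category, assuming an invented "iot" network on OpenWRT (the name, subnet, and rules are all illustrative):

```
# /etc/config/network - a separate subnet for one device category
config interface 'iot'
        option proto 'static'
        option ipaddr '192.168.20.1'
        option netmask '255.255.255.0'

# /etc/config/wireless - an extra SSID bound to that network
config wifi-iface
        option device 'radio0'
        option network 'iot'
        option mode 'ap'
        option ssid 'HomeIoT'
        option encryption 'psk2'
        option key 'a-different-passphrase'

# /etc/config/dhcp - hand out leases on the new subnet
config dhcp 'iot'
        option interface 'iot'
        option start '100'
        option limit '150'
        option leasetime '12h'

# /etc/config/firewall - internet access, but no forwarding into the main LAN
config zone
        option name 'iot'
        list network 'iot'
        option input 'REJECT'
        option output 'ACCEPT'
        option forward 'REJECT'

config rule
        option name 'Allow-iot-DHCP-DNS'
        option src 'iot'
        option dest_port '53 67 68'
        option target 'ACCEPT'

config forwarding
        option src 'iot'
        option dest 'wan'
```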

As others have said, backing these settings up and restoring them to a new device in the case of hardware failure is generally straightforward. Care is needed when replacing the broken device with a new one because naming conventions vary from device to device, but the network logic and things like dhcp reservations can be carried over.
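
On OpenWRT the backup itself is a one-liner; a sketch, with host and file names invented:

```
# on the device: bundle /etc/config and any other files flagged for preservation
sysupgrade -b /tmp/backup-ap1.tar.gz
scp /tmp/backup-ap1.tar.gz user@backup-host:openwrt/

# on the replacement device (simplest when it is the same model)
sysupgrade -r backup-ap1.tar.gz
```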

[–] yak@lmy.brx.io 12 points 4 months ago

If you weren't at a university it was generally a challenge to get hold of disks. Downloading at home took forever on a 28.8 or even 56k modem (i.e. 56 kilobits per second).

Slackware and Red Hat disk sets were the thing, in my experience. But generally that only gave you the compiled code, not the source (although there was another set of disks with the source packages).

If you wanted to recompile stuff you had to download the right set of packages, and be prepared to handle version conflicts on your own (with mailing list and usenet support).

Recompiling the kernel with specific patches for graphics cards, sound cards, modems and other devices (I remember scanners in particular), or for specific combinations of hardware, was relatively common. "Use the source, Luke!" was a common admonition. Oftentimes specific FAQ pages or howtos would be made available for software packages, including games.

XFree86 was very powerful on hardware it supported, but was very finicky. See the other posts about the level of detail that had to be supplied to get combinations of graphics cards and monitors working without the appearance of magic smoke.

Running Linux was mostly an enthusiast/hobbyist/geek thing: for those who wanted to see what was possible, those who wanted to tinker with something approaching Unix, and those who wanted to stretch the limits of what their hardware could do.

Many of those enthusiasts and hobbyists and geeks discovered that Linux could do far more than anyone previously had been prepared to admit or realise. They, and others like them, took it with them into progressively more significant and valuable projects, and it began to take over the world.

[–] yak@lmy.brx.io 4 points 4 months ago

SSH, along with the extra tools it comes with like scp, is the way forward.
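
For plain file copies, a quick scp sketch (8022 is the usual Termux sshd port; the user, address, and paths are invented):

```
# pull a file off the phone from a computer on the same network
scp -P 8022 u0_a123@192.168.1.50:/sdcard/DCIM/Camera/photo.jpg .

# or push one the other way
scp -P 8022 notes.txt u0_a123@192.168.1.50:/sdcard/Documents/
```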

The following two suggestions make use of secure shell.

Termux, and then pkg install mc (mc is Midnight Commander).

Alternatively, if you are feeling brave and want a GUI, there is Total Commander here.

[–] yak@lmy.brx.io 12 points 4 months ago

Consider using tar to create an archive of your home directory, and then unpacking that on the new machine. This captures all the links as well as the regular files, and preserves their permissions.
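
A minimal sketch of that round trip (username and paths illustrative):

```
# old machine: -p preserves permissions; symlinks are archived as links by default
tar -cpzf /tmp/home-alice.tar.gz -C /home alice

# copy the archive across, then on the new machine:
tar -xpzf /tmp/home-alice.tar.gz -C /home
```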

Take a minute to think what else you have changed on the old machine, and then take another minute to think how tricky it would be to replicate on a new machine. Downloading the apps again is gloriously easy. Replacing configs, or keys and certificates, is not!

I normally archive /etc as well, and then I can copy out the specific files I need.

Did you install databases? You'll want to follow specific instructions for those.

Have you set up web sites? You'll want to archive /var/www as well.

[–] yak@lmy.brx.io 5 points 5 months ago

I have never knowingly used Arch. Am I allowed to like this song?

Also, Taylor Swift, is that you?

[–] yak@lmy.brx.io 2 points 5 months ago

Seems like the PlayStore team should get wiser to how OSS communities manage software releases. They should be good at this, because, you know, their platform is based on the Linux kernel.

PlayStore should enable the Termux community to manage the app name, or at least display a prominent link to an "official" alternative on the PlayStore Termux page.

Obviously something has gone wrong between Fornwall and the rest of the team for this situation to arise. But at first sight it is not an uncommon or surprising situation. I think PlayStore could do better, and Google could better support the OSS ecosystem they benefit from.

[–] yak@lmy.brx.io 3 points 6 months ago

Roll on to this time next year with the following headlines...

FAA: discovers burning RP-1 is bad for the environment, insists SpaceX must begin deployment of methane-fueled rockets.

And

FAA: does not insist on development of methane-fueled large aircraft.

And

FAA investigators complain they cannot get close enough for long enough to investigate the SpaceX launchpad before another rocket launches.

And

Unusually high number of FAA personnel reporting hearing problems.
