butitsnotme

joined 1 year ago
[–] butitsnotme@lemmy.world 2 points 2 weeks ago

I haven’t looked into this, but in step 3 can you call the editor’s undo function?

[–] butitsnotme@lemmy.world 2 points 2 months ago

What about simply shelling out to ripgrep?
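For example, a minimal sketch of what that could look like (the pattern and path are placeholders; rg exits 0 on a match, 1 on none, 2 on error):

```bash
#!/usr/bin/env bash
# Shell out to ripgrep instead of reimplementing search.
if matches=$(rg --line-number --no-heading 'pattern' ./notes); then
    printf '%s\n' "$matches"
else
    echo "no matches found" >&2
fi
```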

[–] butitsnotme@lemmy.world 6 points 3 months ago (5 children)

These devices have been recommended in the past, and it looks like they can run OpenWRT:

https://www.amazon.com/GL-iNet-GL-SFT1200-Secure-Travel-Router/dp/B09N72FMH5

https://openwrt.org/toh/gl.inet/start

[–] butitsnotme@lemmy.world 4 points 3 months ago* (last edited 3 months ago)

Bash and a dedicated user should work with very little effort. Basically: create a user on your VM (maybe called git), set up passwordless (and keyless) SSH for this user, but force the command to be git-shell. Next, write a simple bash script which iterates over the directories in this user’s home directory and runs git fetch --all in each (see the sketch below). Set cron to run this script periodically (every hour?). To add a new repository, ssh in as your regular user and su to the git user, then clone the new repository into the home directory. To change the upstream, do the same but simply update the remote.
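A minimal sketch of that fetch script, assuming the git user’s home directory is /home/git (the paths and cron schedule are illustrative):

```bash
#!/usr/bin/env bash
# /home/git/fetch-all.sh — run from cron, e.g.: 0 * * * * /home/git/fetch-all.sh
set -euo pipefail

for repo in /home/git/*/; do
    # Skip anything that isn't actually a git repository.
    git -C "$repo" rev-parse --git-dir >/dev/null 2>&1 || continue
    git -C "$repo" fetch --all --prune || echo "fetch failed: $repo" >&2
done
```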

This could probably be packaged as a Dockerfile pretty easily, if you don’t mind either needing to specify the port or losing the machine’s port 22.

EDIT: I found this after posting; it might be the easiest way to serve the repositories, in combination with the update script. There’s a bunch more info in the Git Book too; the next section covers setting up HTTP…

[–] butitsnotme@lemmy.world 5 points 3 months ago

I would probably use ntfy.sh for this purpose. It doesn’t quite meet all your requirements, but you could use a random topic name and get some amount of security…

You can self host it, or use the hosted version. (I know it’s technically not chat, but it works on a series of messages; it just happens to call them notifications.)
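As a minimal sketch of ntfy’s HTTP API (the topic name below is a made-up example; a long random one is what provides the obscurity):

```bash
# Publish a message; the topic name is the only "secret".
curl -d "backup finished" https://ntfy.sh/my-long-random-topic-7f3k9

# Subscribe and print messages as they arrive (one JSON object per line).
curl -s https://ntfy.sh/my-long-random-topic-7f3k9/json
```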

[–] butitsnotme@lemmy.world 2 points 3 months ago (1 children)

Yes, I have. I should probably test them again though, as it’s been a while, and Immich at least has had many potentially significant changes.

LVM snapshots are virtually instant, and there is no merge operation, so deleting a snapshot is also virtually instant. It works by creating a new space where the differences from the main volume are written: each time the application writes to the main volume, the old block is copied to the snapshot first. This does mean that disk performance will be somewhat lower than without snapshots, but I haven’t really noticed any practical impact. (I believe LVM typically places my snapshots on a different physical disk from the one the main volume lives on, though.)
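For reference, a minimal sketch of that snapshot lifecycle; the volume group, volume names, and size here are placeholders:

```bash
# Create a copy-on-write snapshot; --size is how much changed data it can absorb.
lvcreate --snapshot --size 10G --name immich-snap /dev/vg0/immich

# Mount it read-only and run the backup against the frozen view.
mount -o ro /dev/vg0/immich-snap /mnt/snap
# ... back up /mnt/snap here ...

# Deleting the snapshot is also near-instant: there is no merge step.
umount /mnt/snap
lvremove -y /dev/vg0/immich-snap
```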

You can find my backup script here.

[–] butitsnotme@lemmy.world 4 points 3 months ago (3 children)

I don’t bother stopping services during backup; each service is contained in a single LVM volume, so snapshotting is exactly the same as yanking the plug. I haven’t had any issues yet, either with actual power failures or with data restores.

[–] butitsnotme@lemmy.world 2 points 3 months ago

I would create a separate node for each piece of metadata within the scene you are loading: a Node2D (or Node3D) can store a position (such as a spawn point), and there is a Timer node which can hold the time for a level.

Alternatively, under the node inspector there is an option to “Add Metadata”, which might be more along the lines of what you are looking for. It can be read with the get_meta function.

[–] butitsnotme@lemmy.world 37 points 3 months ago (5 children)

I know it’s not ideal, but if you can afford it, you could rent a VPS from a cloud provider for a week or two, do the download from Google Takeout on that, and then use rsync or similar to copy the files to your own server.
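Something like this, with a placeholder host and paths:

```bash
# -a preserves permissions/timestamps, -v is verbose, -P resumes partial transfers.
rsync -avP vps:/home/user/takeout/ /mnt/storage/takeout/
```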

[–] butitsnotme@lemmy.world 2 points 4 months ago

For no. 1, that shouldn’t be DinD; the container would be controlling the host’s Docker daemon, wouldn’t it?

If so, keep in mind that this is the same as giving root SSH access to the host machine.

As far as security goes, anything that allows GitHub to cause your server to download (pull) and run an arbitrary set of Docker images with arbitrary configuration is remote code execution. It doesn’t really matter what you do to secure access to the machine if someone compromises your GitHub account.

I would probably set up SSH with a key dedicated to GitHub, specifically for deploying. If SSH is configured to only allow key-based access, it’s not much of a security risk to open it up to the internet. I would then configure that key to only be able to run a single command: a very simple bash script which runs git fetch, then git verify-commit origin/main (or whatever branch you deploy), before checking out the latest commit on that branch.
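A rough sketch of that setup, with placeholder paths, branch, and key (the restrict option blocks forwarding and PTY allocation on top of the forced command):

```bash
# In the deploy user's ~/.ssh/authorized_keys, pin the GitHub key to one command:
#   command="/usr/local/bin/deploy.sh",restrict ssh-ed25519 AAAA... github-deploy

# /usr/local/bin/deploy.sh:
set -euo pipefail
cd /srv/app                       # working checkout of the deployed repo
git fetch origin
git verify-commit origin/main     # abort unless the tip commit is validly signed
git checkout --detach origin/main
```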

You can sign commits fairly easily using SSH keys now, which, combined with the above, lets you store your data on GitHub without effectively handing them remote code execution on your host.
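Enabling that is a few git config lines; the key path and email below are example values, and the allowed-signers file is what git verify-commit consults for SSH signatures:

```bash
# Client side: sign commits with an SSH key instead of GPG.
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true

# Server side: list the keys that verification should trust.
git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
echo "you@example.com ssh-ed25519 AAAA..." >> ~/.ssh/allowed_signers
```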

[–] butitsnotme@lemmy.world 4 points 4 months ago

The DMA doesn’t seem to have ever been about consumer choice; it’s about competitors getting access to Apple’s customers without having to play by Apple’s rules. Just look at who was pushing for sideloading on iOS: I mostly saw Meta and Epic Games at the forefront. Why should Apple compromise my device’s integrity so that Meta can spy on me? I have no good answer to that.
