[-] deepdive@lemmy.world 5 points 6 months ago

As long as they continue to maintain the github repository and keep it free without any hidden ads/spyware or restrictions, I will continue to use their service.

[-] deepdive@lemmy.world 15 points 6 months ago

FlorisBoard is also back in the game :)

[-] deepdive@lemmy.world 5 points 6 months ago

Have you tried Obtainium as an F-Droid alternative? It's a really cool project with some degree of customization!

Migration takes some time, especially if you have a dozen apps, but after that everything is automated!

68
submitted 6 months ago* (last edited 6 months ago) by deepdive@lemmy.world to c/privacy@lemmy.ml

Heyha !

This is probably going to be a long take and it's late here in Europe... So for those who bear with me and read through my broken English, thank you.

I'm personally concerned about how my data and my identity are used against my will while surfing the web or using/hosting services. As a self-hoster and networking enthusiast, I have an entry-to-medium-level security infrastructure.

Ranging from a self-hosted ad blocker, DNS, router, VLANs, containers, server, firewall, WireGuard, VPN... you name it! I was pretty happy to confirm in Wireshark that all my traffic is encrypted, and I considered it a solid homelab.

I also have most undesired DNS requests/ads blocked with AdGuard, Firefox running with a custom configuration that blocks everything, and some about:config options changed:

  • privacy.resistFingerprinting
  • privacy.trackingprotection.fingerprinting.enabled
  • ...

I thought I had pretty hardened security and a safe browsing experience, but oh my, was I wrong...

From pixel tracking, to WebRTC leaking your real IP, font fingerprinting, canvas fingerprinting, audio fingerprinting, Android's default keyboard sending samples, SSL certificates with known vulnerabilities...

And most of these aren't even new tracking techniques... I mean, even Firefox 54 was aware of most of these fingerprinting methods, and it makes me feel like Firefox is just another hidden evil-corp behind a fancy privacy facade! Uhhg...

And even if you somehow randomize those fingerprints and your user-agent and block most of those things, it makes you stand out from the mass, which makes you even easier to track or fingerprint. That's something I read recently, and it actually makes sense... the best way to be somewhat invisible is to blend into the mass. If you stand out, you are pretty sure to be noticed and identified (if that makes sense :/)

This really makes me depressed right now... It feels like a losing battle where my energy is wasted trying to get some privacy and anonymity on the web, while fighting against the new laws knocking on our doors and big tech companies always being two steps ahead...

I'm really asking myself if it matters, and if it actually makes sense to use hardened technology or browsers like Arkenfox or the Tor Browser, whose exit nodes are mostly intercepted by private and governmental institutions...

I'm probably overthinking and falling into a deep hole... but the more I dig into security and privacy, the more I get the feeling that this is an already lost battle against big tech...

Some recent sources:

https://avoidthehack.com/firefox-privacy-config

[-] deepdive@lemmy.world 12 points 6 months ago* (last edited 6 months ago)

If you want to run your own PKI with self-signed certificates in your homelab, I really encourage you to read through this tutorial. There is a lot to process and read, and it will take some time to set everything up and understand all the terminology, but after that you get:

  • Own self-signed certificate with SAN wildcards (https://*.home.lab)
  • Certificate chain of trust
  • CSR with your own configuration
  • CRL and certificate revocation
  • X509 extensions

After everything is in place, you can write your own script that revokes, writes and generates your certificates, but that is another story!

Put everything behind your reverse proxy of choice (Traefik in my case) and serve all your Docker services with your own self-signed wildcard certificates! It's complex, but if you have spare time and are willing to learn something new, it's worth the effort!

Keep in mind to never expose such certificates to the wild wild west! Keep them in a closed homelab that you access through a secure tunnel on your LAN!
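To give an idea of the moving parts, the core of it can be sketched with plain openssl commands (a minimal, hypothetical example — the names like home.lab, ca.key and the file layout are my own choices, not from the tutorial):

```shell
# 1. Root CA: private key + self-signed certificate (the root of your chain of trust)
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -sha256 -days 3650 \
    -subj "/CN=Home Lab Root CA" -out ca.crt

# 2. Server key + CSR (the CN is mostly informational; clients check the SAN)
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=home.lab" -out server.csr

# 3. Sign the CSR with the CA, attaching the SAN wildcard as an X509 extension
printf 'subjectAltName=DNS:home.lab,DNS:*.home.lab\n' > san.ext
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 825 -sha256 -extfile san.ext -out server.crt

# 4. Verify the chain of trust — prints "server.crt: OK"
openssl verify -CAfile ca.crt server.crt
```

Import ca.crt into your devices' trust stores and the one leaf certificate then validates for any *.home.lab service behind the proxy.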

edit

Always take notes to keep track of what you did and how you solved issues, and always make some visuals to better understand how things work!

[-] deepdive@lemmy.world 3 points 6 months ago

Then, I tried ownCloud for the first time. Wow, it was fast! Uploading an 8GB folder took just 3 minutes compared to the 25 minutes it took with Nextcloud. Plus, everything was lightning quick on the same machine. I really loved using it. Unfortunately, there’s currently a vulnerability affecting it, which led me to uninstall it.

I have no idea how you access your self-hosted services, but WireGuard could help you reach all of them from all your devices, with fewer security risks and only one point of failure (the WireGuard port). This also removes most of the vulnerabilities you could be exposed to, because you access all your home services through a secure tunnel without directly exposing the API ports on your router!

I personally run all my services with docker-compose + Traefik + self-signed CA certificates + AdGuard Home DNS rewrites, and access all my services through https://service.home.lab on all my devices! It took me some time to set everything up nicely, but right now I'm pretty happy with how it all works!
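As a rough sketch of that stack (hypothetical names — whoami is just a demo service, and the file-provider path for the TLS config is my own assumption):

```yaml
# docker-compose.yml — Traefik terminating TLS with the self-signed wildcard cert
services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.file.filename=/etc/traefik/tls.yml   # loads the cert config below
      - --entrypoints.websecure.address=:443
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./certs:/certs:ro
      - ./tls.yml:/etc/traefik/tls.yml:ro

  whoami:
    image: traefik/whoami
    labels:
      - traefik.http.routers.whoami.rule=Host(`whoami.home.lab`)
      - traefik.http.routers.whoami.tls=true

# tls.yml — the dynamic file that points Traefik at the *.home.lab wildcard cert:
#   tls:
#     certificates:
#       - certFile: /certs/server.crt
#         keyFile: /certs/server.key
```

An AdGuard Home DNS rewrite of *.home.lab to the server's LAN IP then makes every service name resolve locally.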

About the current ownCloud vulnerability: they already took some measures, and the new Docker image has the phpinfo fix (uhhg). Also, while I wouldn't take their word for granted:

"The importance of ownCloud’s open source in the enterprise and public-sector markets is embraced by both organizations.”

[-] deepdive@lemmy.world 7 points 7 months ago

That's why exposing your home services to the internet is a bad idea. Accessing them through a secure tunnel is the way to go.

Also, they already "fixed" the Docker image with an update, something to do with phpinfo...

[-] deepdive@lemmy.world 12 points 7 months ago

What about the missing about:config feature? :/

18
submitted 7 months ago by deepdive@lemmy.world to c/linux@lemmy.ml

Hi everyone :)

After installing the emacs package and trying to remove it afterwards:

sudo apt remove --purge --autoremove emacs

It only removed that package and not the other dependencies installed with it (emacs-gtk, emacs-common...). I had to remove them manually, one by one.

Isn't that command supposed to:

  • remove the package
  • remove its configuration files
  • automatically remove the unused packages that were installed with it?

What am I missing here?

Also, after reading the "Stupid things you've done that broke your Linux installation" post, I saw a lot of people messing up their Debian systems with the above command... So I assume that's not the correct way of doing things on Linux?
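For what it's worth, one way to investigate (hedged — the output depends entirely on your system): --autoremove only removes packages that apt has marked as *automatically installed* and that nothing else still depends on, so inspecting those marks usually explains what got left behind.

```shell
# Were emacs-gtk / emacs-common actually marked as automatically installed?
apt-mark showauto | grep emacs

# Preview what apt considers removable right now, without touching anything:
sudo apt-get autoremove --dry-run

# If a dependency ended up marked "manual" (e.g. it was once installed
# explicitly), re-mark it auto so autoremove will pick it up:
sudo apt-mark auto emacs-gtk emacs-common
sudo apt autoremove
```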

Some insight from experienced user would be great :)

22
submitted 7 months ago* (last edited 7 months ago) by deepdive@lemmy.world to c/privacy@lemmy.ml

Hi everyone !

Right now I use:

  • Firefox's full protection with everything blocked by default
  • AdGuard adblocker extension
  • Adguardhome DNS blocker
  • ProtonVPN through wireguard
  • Selfhosted searxng instance (metasearch engine aggregator).

While this gives me a reasonable level of protection/privacy, it blocks me from interacting with FOSS projects on GitHub, which kinda sucks!! I don't want to accept GitHub's long list of tracking and statistics cookies, but not being able to interact with and help FOSS projects thrive, improve and get some visibility will hurt them in the long term.

I'm aware of GitHub's cookie management preferences, but I don't trust them to manage and choose what should be accepted or not !

Firefox only allows you to block/accept everything, and all the extensions I found only delete cookies after the fact. I couldn't find any workaround for this issue.

Q: Is there any way to accept only the cookies that let me log in and interact with repos, without accepting the tracking and analytics cookies?

If you have any solution/workaround to share, I'm all ears !


Edit

I learned a few new things today:

  • The AdGuard AdBlocker extension for Firefox can block cookies before they enter your system
  • There are user-agent spoofing add-ons
  • Firefox's privacy.fingerprintingProtection is not activated by default for everything

– How to block specific cookies with the Adguard Adblocker extension

⚠️ This can and will cause the website to malfunction if you block the wrong cookies ⚠️

To find the specific cookie you want to block, you first need to know its name. In Firefox, open the application menu -> More tools -> Web Developer Tools, OR right-click -> Inspect (keyboard shortcuts depend on your system).

In the Web Developer Tools window, go to Storage -> Cookies.

(screenshot: example of GitHub's cookies in the Storage tab)

Once you have found the additional non-essential cookies you want to block, add them to the AdGuard user rules:

||github.com/$cookie=tz
||github.com/$cookie=preferred_color_mode
||github.com/$cookie=color_mode
||github.com/$cookie=saved_user_sessions
||github.com/^$third-party

To read more about creating your own ad filters, see the official documentation.

– User Agent spoofing

User agent string switcher

This extension lets you spoof your browser's "user-agent" string with a custom value, making it much harder for websites to learn specific details about your browsing setup.

– Firefox about:config privacy.fingerprintingProtection = true

Firefox's documentation is pretty straightforward, but here is what they say about it:

However, the Canvas Permission Prompt is not the only thing that Fingerprinting Protection is doing. Fingerprinting Detection changes how you are detected online:

  • Your timezone is reported to be UTC
  • Not all fonts installed on your computer are available to webpages
  • The browser window prefers to be set to a specific size
  • Your browser reports a specific, common version number and operating system
  • Your keyboard layout and language is disguised
  • Your webcam and microphone capabilities are disguised
  • The Media Statistics Web API reports misleading information
  • Any Site-Specific Zoom settings are not applied
  • The WebSpeech, Gamepad, Sensors, and Performance Web APIs are disabled

Type about:config in the address bar and press Enter. A warning page may appear; click "Accept the Risk and Continue" to go to the about:config page. Search for privacy.resistFingerprinting and set it to true. You can double-click the preference or click the Toggle button to toggle the setting.

If it is bolded and already set to true, you, or an extension you installed, may have enabled this preference. If you discover the setting has become re-enabled, it is likely a Web Extension you have installed is setting it for you.


Closing thoughts

This may seem overkill to some people, and I get it, but if you are really concerned about your privacy/security, there is no such thing as "one-click" privacy. It's hard work and an everyday battle with E-corp and other hidden institutions that gather every bit of fingerprint/trace you leave behind! I hope this long edit helps some people have a more private and safer browsing experience!

[-] deepdive@lemmy.world 3 points 7 months ago

Have you managed to self-host it? Funkwhale looks great, but the installation process with a proxy other than nginx in a container setup is far from ready or accessible to hobby self-hosters.

If you have no idea about proxies and forwarded headers... just don't waste your time and go straight to Audiobookshelf!

[-] deepdive@lemmy.world 11 points 7 months ago

They also have an I2P address. More secure than Tor, because a lot of Tor's exit nodes are controlled by governments and private institutions.

[-] deepdive@lemmy.world 10 points 7 months ago* (last edited 7 months ago)

I tried it 3 months ago. It looked nice and had some cool features, but it didn't fit into my personal self-hosted home server.

It's more aimed at helping less tech-savvy people secure their infrastructure, which is a good thing, but it can't replace a complex WireGuard, VPN, OPNsense, 2FA, self-signed CA, Docker installation.

It's a bit like Nginx Proxy Manager: good enough, does what it's supposed to do with minimal user input, and less prone to errors and security issues...

[-] deepdive@lemmy.world 3 points 7 months ago* (last edited 7 months ago)

Wait until you experience your first astral projection! But yeah, when I tried it a second time I also fell into sleep paralysis... Scary stuff!

2
submitted 11 months ago by deepdive@lemmy.world to c/linux@lemmy.ml

Yeah another post about backups, but hear me out.

I read most of the other posts here on Lemmy and went through the documentation of different backup tools (rsync, Borg, Timeshift), but all those backup tools are for "static" files.

I mean, I have a few Docker containers with databases, Syncthing to sync files between server, Android, desktop and Mac, and a few Samba shares between server, Mac and desktop.

For example, from Borg's documentation:

  • Avoid running any programs that might change the files.
  • Snapshot files, filesystems, container storage volumes, or logical volumes. LVM or ZFS might be useful here.
  • Dump databases or stop the database servers.
  • Shut down virtual machines before backing up their images.
  • Shut down containers before backing up their storage volumes.

How am I supposed to make a complete automated backup of my system if my files are constantly changing? If I have to stop my containers and shut down Syncthing and my Samba shares to make a full backup, that seems like too much friction and prone to errors...
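The snapshot option from Borg's list is the usual answer to the constantly-changing-files problem: freeze a point-in-time view of the volume, back up the frozen view, then drop it. A rough sketch, assuming an LVM setup where the data lives on /dev/vg0/root and a Borg repo already exists (all names here are hypothetical):

```shell
#!/bin/sh
set -e

# 1. Create a point-in-time LVM snapshot of the live volume
#    (1G of copy-on-write space to absorb writes during the backup)
lvcreate --size 1G --snapshot --name root-snap /dev/vg0/root

# 2. Mount it read-only and back up the frozen view, not the moving target
mount -o ro /dev/vg0/root-snap /mnt/snap
borg create /path/to/repo::'{hostname}-{now}' /mnt/snap

# 3. Clean up — snapshots cost write performance if left lying around
umount /mnt/snap
lvremove -f /dev/vg0/root-snap
```

Databases still deserve their own dump (pg_dump/mysqldump into a folder the snapshot covers), since a filesystem snapshot is only crash-consistent for them.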

Also, nowhere could I find any mention of how to restore a full backup onto a freshly installed system with an LVM partition layout (user creation, filesystem partitioning...).

Maybe I have a bad understanding of how files work on Linux, but doing a full backup this way feels unreliable and prone to corrupted files in the backup.

VMs are easier to roll back with snapshots, and I couldn't find a similar way on a bare-metal server...

I hope someone can point me in the right direction, because right now I have the feeling I can only back up my compose files and do a full reinstallation and reconfiguration, which is supposed to be the job of a backup... not having to reconfigure everything!

Thanks

[-] deepdive@lemmy.world 5 points 11 months ago

Just self-host your toilet 😎

26
submitted 11 months ago* (last edited 11 months ago) by deepdive@lemmy.world to c/linux@lemmy.ml

Hi everyone 🙂

TLDR

How do you work with su/sudo permissions on Debian, and what's the best way to handle them for better security?

  • Add a user to the sudoers file?
  • Give special permissions to a group? To a user?
  • Always log in with su - (default root)?
  • Add users to groups?

The story is unrelated to the question, but is a direct cause

This is a rookie question, even though I use Linux (Ubuntu and recently Debian) regularly and have a lot of self-hosted Docker containers on an old spare laptop.

While this is probably one of the basics you need to know right away when playing around with sudo or su, I wasn't aware of how you can f#ck everything up with a single command:

chmod -R xxx /home/$USER

chown -R ...

Why would you do that? Because I'm stupid and sometimes have no idea what I'm doing? I was actually trying to change some permissions to create a Samba share (that's another story xD).

While trying to revert everything, a lot of my Docker containers, certificates and special files became unreadable or non-executable... That broke my Nextcloud instance, Syncthing functionality, Linkding HTTP shortcuts...

With that big incident, I learned how users, root and sudo/su permissions work, and recently found out you can add users to groups, like docker, so you don't have to type 'sudo docker' every time.

My question

How do you work with su/sudo permissions on Debian, and what's the best way to handle them for better security?

  • Add a user to the sudoers file?
  • Give special permissions to a group? To a user?
  • Always log in with su - (default root)?
  • Add users to groups?

Because this is a homelab environment, the risk is minimal compared to exposed instances, but I'm interested in learning the best practices right away!
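For reference, the common Debian convention seems to be the middle ground: one unprivileged daily user who is a member of the sudo group, with direct root login reserved for recovery. A sketch (alice is a placeholder username, and note the docker caveat):

```shell
# Add your daily user to the sudo group: on Debian, /etc/sudoers already
# grants members of %sudo full, password-protected sudo rights.
usermod -aG sudo alice

# Group membership for services; careful: the docker group is effectively
# root-equivalent, since members can mount the host filesystem in a container.
usermod -aG docker alice

# Custom rules belong in a drop-in edited via visudo, which syntax-checks
# the file before installing it so a typo can't lock you out of sudo:
visudo -f /etc/sudoers.d/local-rules
```

Group changes only take effect on the next login (or with `newgrp docker`).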

Thank you 😊

-1

Hi everyone !

I just learned the hard way how important it is to know the difference between ` and ' ...

I tried for a whole day to figure out why I got a stupid error in Traefik with rule=Host(hostname).

The logs weren't clear about what was raising the error:

error while parsing rule Host('vaultwarden.home.lab'): 1:6: illegal rune literal

I tried a lot of different things, from changing the self-signed certs, wildcards and AdGuard DNS rewrites, to changing networks and adding some weird Traefik rules... until I finally compared two different YAML files and had that "ahhh..." moment...

A bit depressing, but I'm finally happy to have my own self-signed local domain names.
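For anyone hitting the same error: Traefik v2 rules are parsed like Go expressions, so the hostname must be wrapped in backticks, not single quotes. A working docker-compose label would look like this (vaultwarden is just my router name):

```yaml
labels:
  # correct — backticks inside the rule, double quotes only for YAML:
  - "traefik.http.routers.vaultwarden.rule=Host(`vaultwarden.home.lab`)"
  # wrong — single quotes trigger the "illegal rune literal" parse error:
  # - "traefik.http.routers.vaultwarden.rule=Host('vaultwarden.home.lab')"
```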

1

For those self-hosting Linkding and SearXNG (or using Google, DuckDuckGo, Brave), there is a very cool and useful extension: linkding injector!

It's documented in the linkding README, but it's worth mentioning for those who didn't know about it! It works great with self-hosted SearXNG instances and is very useful for searching through your bookmarks.

linkding — selfhosted bookmark manager

searxNG — selfhosted meta search engine

1
submitted 1 year ago* (last edited 1 year ago) by deepdive@lemmy.world to c/selfhosted@lemmy.world

Hi everybody !

While I really like Google Calendar's simple and sleek web GUI and functionality, I'm more and more concerned about my data and privacy. Even if I have nothing to hide, I no longer agree to freely and consciously hand my data to any GAFAM.

Does anyone have an alternative to Google Calendar?

  • Free and, if possible, open source? It can have some discreet sponsors/ads, as long as they aren't too intrusive.
  • Todoist integration
  • Sync between devices
  • GUI doesn't have to be PERFECT, but a bare minimum for my candy eyes!
  • Can be an API, web app... doesn't matter as long as it syncs between devices (Android, Mac, Windows, Linux)

I already searched the web but couldn't find any conclusive alternative; maybe someone knows some hidden gem :)

Thank you !


EDIT: The solution and compromise: Nextcloud. It took me some time (2 days) to set it up correctly and make it work as intended.

  • Android calendar sync with DAVx5
  • Calendar notification on android's native calendar app
  • 2way sync between Android calendar and nextcloud calendar
  • push notification on nextcloud web browser

A few things to keep in mind:

1 — if you build your nextcloud instance with docker-compose:

2 — Android permissions to sync with your calendar

  • DAVx5 mentions how to allow syncing seamlessly
    • It's different for every android phone
    • Battery power mode
    • Work in the background
    • ...

3 — It won't work with todoist

  • Todoist is proprietary and won't work with DAVx5 and Nextcloud
  • Alternative: jtx Board! (built by the same devs as DAVx5; seems to work similarly)

Conclusion: Nextcloud isn't as good as the cloud sync provided by Google/Todoist and every other GAFAM cloud service. It has its quirks and needs some attention to make it work as intended. It takes some time, reading and tinkering, but those are compromises I'm willing to make :)

1
submitted 1 year ago* (last edited 1 year ago) by deepdive@lemmy.world to c/linux@lemmy.ml

Hello there :)

As far as I know (I searched the web), editing/navigating a multiline command in bash is not possible, and opening nano, pasting and editing is too much friction that I want to get rid of.

Do you have any way to speed up the process?

Example of a multiline command:

echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Thank you :)
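Edit — two built-in ways that remove most of that friction (assuming bash with the default emacs-style readline bindings):

```shell
# Press Ctrl-x Ctrl-e at the prompt: readline's edit-and-execute-command
# opens the current (even multiline) command in $VISUAL/$EDITOR and runs
# it when you save and quit.
export EDITOR=nano   # bash checks $VISUAL first, then $EDITOR

# The fc builtin does the same for the PREVIOUS command from history:
# edit it in $EDITOR, save, and it re-runs.
fc
```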


deepdive

joined 1 year ago