Deduplication tool (lemmy.world)
submitted 1 week ago* (last edited 6 days ago) by Agility0971@lemmy.world to c/linux@lemmy.ml

I'm in the process of setting up a proper backup solution; however, over the years I've done a few quick-and-dirty copy-pastes of my home directory from different systems. Now I have to pay off that technical debt and remove the duplicates. I'm looking for a deduplication tool that will:

  • accept a destination directory
  • delete source locations after the operation
  • if two files' content is the same, delete the redundant copy
  • if two files' content differs, move the file and rename it to avoid a name collision (a rough sketch of this behavior is below)

I tried doing this in Nautilus, but it only compares file names, not file content. E.g., if two photos have the same content but different names, it will still create a redundant copy.
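For concreteness, here is an untested sketch of the behavior I'm after, with placeholder directory names (exactly the kind of logic I'd rather get from a battle-tested tool):

src=old_home    # directory to drain
dst=new_home    # canonical destination

find "$src" -type f | while IFS= read -r f; do
    rel=${f#"$src"/}
    target=$dst/$rel
    if [ -e "$target" ] && cmp -s "$f" "$target"; then
        rm "$f"                            # same content: delete the redundant copy
    else
        mkdir -p "$(dirname "$target")"
        i=1
        while [ -e "$target" ]; do         # different content: rename to avoid collision
            target=$dst/$rel.dup$i
            i=$((i + 1))
        done
        mv "$f" "$target"
    fi
done
find "$src" -depth -type d -empty -delete  # clean up the emptied source tree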

Edit: Some comments suggested duperemove, which uses btrfs's deduplication feature. It replaces identical file content with references to the same blocks on disk. This is not what I intend; I want to remove the redundant files completely.
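For reference, in case the shared-extent behavior is what someone else reading this wants, a typical invocation looks roughly like this (the path and hashfile name are placeholders):

duperemove -dr --hashfile=hashes.db /mnt/data   # -d actually dedupes, -r recurses; extents get shared, files stay in place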

Edit 2: Another quite cool solution is to use hardlinks: replace all occurrences of the same data with a hardlink, then traverse the redundant directories and delete whatever is a link. The remaining files will be unique. I'm not going for this myself, as I don't trust myself to write a bug-free implementation.
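Roughly, assuming rdfind is available (directory names are placeholders, and note this sketch would also delete any file that legitimately had multiple links before):

rdfind -makehardlinks true keep_dir redundant_dir   # replace duplicates with hardlinks, preferring keep_dir's copy
find redundant_dir -type f -links +1 -delete        # whatever is now a hardlink is redundant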

[-] utopiah@lemmy.ml 0 points 1 week ago

I don't actually know, but I bet that's relatively costly, so I would at least try to be mindful of efficiency, e.g.:

  • use find to start only with large files, e.g. > 1 GB (depends on your own threshold)
  • look for a "cheap" way to find duplicates, e.g. exact same size (far from perfect, yet I bet it's sufficient in most cases)

then, after trying it a couple of times:

  • find a "better" way to avoid duplicates, e.g SHA1 (quite expensive)
  • lower the threshold to include more files, e.g >.1Gb

and possibly heuristics, e.g.:

  • directories where all filenames are identical, maybe based on locate/updatedb, which is most likely already indexing your entire filesystem

Why do I suggest all this rather than a tool? Because I bet a lot of decisions have to be made manually. (A rough sketch of the size-then-hash passes is below.)
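Putting the first two passes together, a minimal sketch assuming GNU find, awk, and coreutils (the path and the 1 GB threshold are placeholders):

# Pass 1: list files above the threshold together with their exact sizes
find ~/old-backups -type f -size +1G -printf '%s\t%p\n' | sort -n > sizes.tsv

# Pass 2 (cheap): keep only files whose size occurs more than once
awk -F'\t' 'NR==FNR { cnt[$1]++; next } cnt[$1] > 1' sizes.tsv sizes.tsv > candidates.tsv

# Pass 3 (expensive): hash the candidates only, then print groups with identical digests
cut -f2- candidates.tsv | xargs -d '\n' sha1sum | sort | uniq -w40 --all-repeated=separate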

[-] utopiah@lemmy.ml 3 points 1 week ago

fclones (https://github.com/pkolaczk/fclones) looks great, but I haven't used it, so I can't vouch for it.

[-] paris@lemmy.blahaj.zone 1 points 1 week ago* (last edited 1 week ago)

I was using Radarr/Sonarr to download files via qBittorrent and then hardlink them to an organized directory for Jellyfin, but I set up my container volume mappings incorrectly and it was only copying the files over, not hardlinking them. When I realized this, I fixed the volume mappings and ended up using fclones to deduplicate the existing files and it was amazing. It did exactly what I needed it to and it did it fast. Highly recommend fclones.

I've used it on Windows as well, but I've had much more trouble there, since I like to write the output to a file first to double-check it before catting the information back into fclones to actually deduplicate the files it found. I think running everything as admin works, but I don't remember.
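For what it's worth, that review-then-apply loop maps onto fclones' subcommands; something like this (the path is a placeholder):

fclones group ~/media > dupes.txt   # find duplicate groups and write the report
less dupes.txt                      # sanity-check before touching anything
fclones remove < dupes.txt          # delete the redundant copies (fclones link would hardlink them instead)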

[-] utopiah@lemmy.ml 1 points 1 week ago* (last edited 1 week ago)

FWIW, I just did a quick test with rmlint, and as a user I would definitely not trust an automated tool to remove things on my filesystem. If it's a pure data filesystem, basically a database, sure, but otherwise there is plenty of legitimate duplication, e.g. ./node_modules, so the risk of breaking things is relatively high. IMHO it's better to learn why there are duplicates on a case-by-case basis, but again, I don't know your specific use case, so maybe it'd fit.

PS: I imagine it'd be good for a content library, e.g. ebooks, ROMs, movies, etc.

[-] utopiah@lemmy.ml 1 points 1 week ago

If you use rmlint as others suggested, here is how to check the paths of the dupes it found:

jq -c '.[] | select(.type == "duplicate_file").path' rmlint.json
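rmlint writes that rmlint.json report to the working directory by default; if I remember right, you can also name it explicitly (the directory is a placeholder):

rmlint -o json:rmlint.json ~/old-backups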
