Deduplication tool
I don't actually know, but I bet that's relatively costly, so I would at least try to be mindful of efficiency, e.g. use `find` to start only with large files, say > 1 GB (depends on your own threshold), then refine after trying a couple of times, and possibly add heuristics. Why do I suggest all this rather than a tool? Because I bet a lot of decisions have to be made manually.
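To make the manual approach concrete, here's a minimal sketch of that first step: restrict the scan to large files, then checksum only those candidates so identical hashes point at likely duplicates. The path `/data` and the 1G threshold are placeholders; adjust them to your setup.

```shell
# Step 1: only consider large files (> 1G here), so we don't hash everything.
# Step 2: checksum the candidates; lines sharing the same md5 (first 32 chars)
#         are printed together as duplicate groups.
find /data -type f -size +1G -print0 \
  | xargs -0 md5sum \
  | sort \
  | uniq -w32 --all-repeated=separate
```

Nothing is deleted here: you get a grouped report and decide yourself what to remove.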
FWIW I just did a quick test with `rmlint`, and as a user I would definitely not trust an automated tool to remove files on my filesystem. If it's for a proper data filesystem, basically a database, sure, but otherwise there is plenty of legitimate duplication, e.g. `node_modules`, so the risk of breaking things is relatively high. IMHO it's better to learn why there are duplicates on a case-by-case basis, but again I don't know your specific use case, so maybe it'd fit.

PS: I imagine it'd be good for a content library, e.g. ebooks, ROMs, movies, etc.
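One way to lower that risk, if you do script this yourself: prune directories where duplication is legitimate before scanning at all. A sketch, using `node_modules` as the example; extend the prune list for your own layout:

```shell
# Skip node_modules entirely (-prune stops descent into it),
# and print every other regular file as a dedup candidate.
find . -type d -name node_modules -prune -o -type f -print
```

Anything a package manager or build tool owns is usually safer to regenerate than to deduplicate.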