My answer, which I give from hard-earned experience, is: don't do this. Don't try to copy a directory hierarchy that makes heavy use of hard links, such as one created using `rsnapshot` or `rsync --link-dest` or similar. It won't work on anything but small datasets. At least, not reliably. (Your mileage may vary, of course; perhaps your backup datasets are much smaller than mine were.)
The problem with using `rsync --hard-links` to recreate the hard-linked structure of files on the destination side is that discovering the hard links on the source side is hard. `rsync` has to build an in-memory map of inodes to find the hard links, and unless your source has relatively few files, this can and will blow up. In my case, when I learned of this problem and was looking around for alternate solutions, I tried `cp -a`, which is also supposed to preserve the hard-link structure of files in the destination. It churned away for a long time and then finally died (with a segfault, or something like that).
My recommendation is to set aside an entire partition for your `rsnapshot` backup. When it fills up, bring another partition online. It is much easier to move hard-link-heavy datasets around as entire partitions than as individual files.
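Before deciding, you can gauge how hard-link-heavy a tree actually is: `find -links +1` lists every file whose link count is above one. A small self-contained sketch (the file names are made up for illustration):

```shell
#!/bin/sh
# Count the names in a tree that belong to multiply-linked files.
set -eu
tree=$(mktemp -d)
echo data > "$tree/f1"
ln "$tree/f1" "$tree/f2"           # second name, same inode
echo solo > "$tree/f3"             # ordinary file, link count 1
find "$tree" -type f -links +1 | wc -l   # prints 2: f1 and f2 both report >1 link
rm -rf "$tree"
```

If that count runs into the millions on your snapshot tree, a file-level copy is likely to hit the problems described above.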
The `rsync` command's `-H` (or `--hard-links`) option will, in theory, do what you are trying to accomplish, which is, in brief: to create a copy of your filesystem that preserves the hard-linked structure of the original. As I mentioned in my answer to another similar question, this option is doomed to fail once your source filesystem grows beyond a certain threshold of hard-link complexity.
The precise location of that threshold may depend on your RAM and the total number of hard links (and probably a number of other things), but I have found that there's no point in trying to define it precisely. What really matters is that the threshold is all too easy to cross in real-world situations, and you won't know that you have crossed it until the day comes that you try to run an `rsync -aH` or a `cp -a` that struggles and eventually fails.
What I recommend is this: copy your heavily hard-linked filesystem as one unit, not as files. That is, copy the entire filesystem partition as one big blob. There are a number of tools available to do this, but the most ubiquitous is `dd`.
With stock firmware, your QNAP NAS should have `dd` built in, as well as `fdisk`. With `fdisk`, create a partition on the destination drive that is at least as large as the source partition. Then use `dd` to create an exact copy of your source partition on the newly created destination partition.
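The shape of the `dd` invocation is the same whether the source is a real block device (e.g. `/dev/sda3` to `/dev/sdb1`, names that vary per system) or, as in this self-contained sketch, ordinary files standing in for partitions. `cmp` confirms the copy is bit-exact:

```shell
#!/bin/sh
# Copy a "partition" (a file standing in for a block device) and verify it.
set -eu
srcimg=$(mktemp); dstimg=$(mktemp)
dd if=/dev/urandom of="$srcimg" bs=1024 count=64 2>/dev/null  # fake source partition
dd if="$srcimg" of="$dstimg" bs=4096 2>/dev/null              # the actual copy step
cmp -s "$srcimg" "$dstimg" && echo "bit-exact copy"
rm -f "$srcimg" "$dstimg"
```

On real devices you would run this as root, and a larger `bs` (e.g. `bs=4M`) usually speeds things up considerably.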
While the `dd` copy is in progress, you must ensure that nothing changes in the source filesystem, lest you end up with a corrupted copy on the destination. One way to do that is to `umount` the source before starting the copy; another is to mount the source read-only.
Best Answer
You are right that the backup files are hard links, and it is safe to just delete the backup directory. Hard links are just pointers to an inode, so if a file has two hard links, the space it occupies will only be reclaimed by the OS when both links are deleted.
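A quick way to see this for yourself, assuming GNU `stat` (its `%h` format prints the link count):

```shell
#!/bin/sh
# Deleting one hard link does not delete the data;
# the inode lives until its link count drops to zero.
set -eu
d=$(mktemp -d)
echo "backup data" > "$d/orig"
ln "$d/orig" "$d/link"
stat -c %h "$d/orig"      # prints 2: two names point at the same inode
rm "$d/link"              # remove one name...
cat "$d/orig"             # ...the data is still readable via the other
stat -c %h "$d/orig"      # prints 1: space is freed only when this name goes too
rm -rf "$d"
```

This is why deleting one snapshot directory is safe: files still referenced from other snapshots keep their data, and only the blocks unique to the deleted snapshot are reclaimed.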