Would rsync be better than tar, wget and untar?

migration, rsync, tar

I am migrating to a new server and, as well as the HTML/PHP files, have a directory containing around 90,000 files totalling 24 GB that needs to be moved.

When I did a test migration, I used tar to create a tarball, downloaded it on the new host with wget and then extracted it. Whilst it worked fine, it took around 3 hours all in to complete. That would mean 3 hours of downtime during the actual migration to ensure no new files came in and no existing files were changed in the meantime.
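
For context, the test run looked roughly like this (the hostname and paths are placeholders, not the actual ones used):

    # On the old server: create a compressed archive of the data directory
    tar -czf /var/www/html/files.tar.gz -C /var/www files

    # On the new server: fetch the archive and unpack it
    wget https://old.example.com/files.tar.gz
    tar -xzf files.tar.gz -C /var/www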

I am now planning the actual migration and am trying to find quicker ways of doing it, which made me wonder about rsync. I have only ever used rsync locally and only for a small number of files, so would running rsync against 90k items be quicker than the above method?

I am not too worried about CPU, memory or network usage as long as the overall process completes faster, as this will be done out of hours when the system is quieter anyway.

Best Answer

You will have to test to find out. There are many variables such as the speed of your storage system.

Consider restoring your tar archive or backup to the new server while the old system is still up. Then, during the downtime, use rsync to copy across only the changes since then. Rsync still has to check the modification time of every file, which takes time, but there is far less I/O and network transfer.
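
A minimal sketch of that two-pass approach, run from the new server and assuming SSH access between the hosts; the hostname and paths are placeholders:

    # First pass: copy everything while the old server is still live
    rsync -az old.example.com:/var/www/files/ /var/www/files/

    # During the downtime window: re-run to transfer only files that have
    # changed since the first pass; --delete removes anything that was
    # deleted on the old server in the meantime
    rsync -az --delete old.example.com:/var/www/files/ /var/www/files/

Adding -P shows per-file progress if you want to watch the second pass run.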

This incremental copy is what rsync is good at; doing the whole copy in one go may not be any faster than the other tools.