I have a backup script that:
- compresses some files,
- generates an MD5 checksum,
- copies the compressed file to another server,
- lets the other server compare the MD5s (to detect copy errors).
Here is the core script:
nice -n 15 tar -czvf $BKP $PATH_BKP/*.* \
| xargs -I '{}' sh -c "test -f '{}' && md5sum '{}'" \
| tee $MD5
scp -l 80000 $BKP $SCP_BKP
scp $MD5 $SCP_BKP
This routine pushes the CPU to 90% during the gzip step, slowing down the production server. I tried adding nice -n 15,
but the server still hangs.
I've already read [1], but the conversation didn't help me.
What is the best approach to solving this issue?
I am open to new architectures/solutions 🙂
Best Answer
If you use nice, you change the CPU scheduling priority, but this only has a noticeable impact when the CPU is close to 100% busy.
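To confirm which resource is actually saturated, look at the iowait figure (top, vmstat, and iostat all report it). A minimal sketch, Linux-specific, reading the raw counters straight from /proc/stat:

```shell
# First line of /proc/stat: "cpu user nice system idle iowait irq softirq ..."
# Values are cumulative jiffies since boot; for a live rate, sample
# twice and take the difference (or simply run: vmstat 1).
read -r cpu user nicev system idle iowait rest < /proc/stat

# A large iowait share relative to user+system suggests the storage,
# not the CPU, is the bottleneck.
echo "user=$user system=$system idle=$idle iowait=$iowait"
```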
The server becomes slow, in your case, not because of CPU usage but because of the I/O load on the storage. Use ionice to change the I/O priority, and keep the nice for CPU priority.
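Put together, the pipeline could look like the sketch below, wrapped in a function and reusing the variable names from the question (BKP, PATH_BKP, MD5, SCP_BKP). Note two assumptions: ionice only takes effect with an I/O scheduler that honors priorities (e.g. CFQ/BFQ on Linux), and this version checksums the compressed archive itself rather than the source files, since the archive is what the receiving side needs to verify.

```shell
# Sketch: the backup pipeline with both CPU and I/O priority lowered.
run_backup() {
    # -c2 -n7: best-effort I/O class, lowest priority within it.
    # ionice -c3 (idle class) would be even gentler: the job gets
    # disk time only when nothing else wants it.
    ionice -c2 -n7 nice -n 15 tar -czf "$BKP" "$PATH_BKP"/*.* &&

    # Checksum the archive so the remote side can detect copy errors
    # (run md5sum there and compare against the transferred $MD5).
    md5sum "$BKP" | tee "$MD5" &&

    scp -l 80000 "$BKP" "$SCP_BKP" &&
    scp "$MD5" "$SCP_BKP"
}
```

Quoting the variables also guards against paths containing spaces, which the original unquoted script would mishandle.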