I think the best way is to redirect the output to a file, scp that file to the remote host, and then run cat there:
$ head -c 5 /dev/urandom > random && scp ./random user@remoteip:/path/. && ssh user@remoteip cat /path/random
Hope this satisfies your needs. Reply if it doesn't.
One of the tricks I follow is to put # at the beginning of the line when using the rm command:
root@localhost:~# #rm -rf /
This prevents accidental execution of rm on the wrong file or directory. Once you have verified the command, remove the # from the beginning. This works because, in Bash, a word beginning with # causes that word and all remaining characters on the line to be ignored, so the whole command is simply skipped.
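To illustrate (the directory name here is just a hypothetical example), the commented-out rm never runs, so the files survive:

```shell
# Set up a throwaway directory with a file in it.
mkdir -p /tmp/demo_protect && touch /tmp/demo_protect/file

# A leading "#" makes Bash treat the whole line as a comment,
# so this rm is never executed:
#rm -rf /tmp/demo_protect

# The directory and its file are still there.
ls /tmp/demo_protect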
OR
If you want to protect an important directory, there is one more trick: create a file named -i in that directory. How can such an odd file be created? Using touch -- -i or touch ./-i.
Now try rm -rf *:
sachin@sachin-ThinkPad-T420:~$ touch {1..4}
sachin@sachin-ThinkPad-T420:~$ touch -- -i
sachin@sachin-ThinkPad-T420:~$ ls
1 2 3 4 -i
sachin@sachin-ThinkPad-T420:~$ rm -rf *
rm: remove regular empty file `1'? n
rm: remove regular empty file `2'?
Here the * expands to include -i on the command line, so your command ultimately becomes rm -rf -i plus the other file names, and the -i option makes rm prompt before each removal. You can put such a file in /, /home/, /etc/, and so on.
OR
Use the --preserve-root option to rm. In the rm shipped with newer coreutils packages, this option is the default. From the man page:
       --preserve-root
              do not remove `/' (default)
OR
Use safe-rm.
Excerpt from the website:
Safe-rm is a safety tool intended to prevent the accidental deletion
of important files by replacing /bin/rm with a wrapper, which checks
the given arguments against a configurable blacklist of files and
directories that should never be removed.
Users who attempt to delete one of these protected files or
directories will not be able to do so and will be shown a warning
message instead:
$ rm -rf /usr
Skipping /usr
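For reference, the blacklist is typically a plain text file with one protected path per line. The location below (commonly /etc/safe-rm.conf) and the example entries are assumptions; check the safe-rm documentation for your distribution:

```
# /etc/safe-rm.conf -- one protected path per line
/usr
/etc
/home/user/important-data
```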
If your "local drive" is on a Linux client, you can run a remote tar command that writes the archive to stdout (the default; the explicit option would be "-f -") and pipe it into a local tar that reads from stdin (explicitly "-f -" again), like that:
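The command itself did not survive above; a sketch of the idea, with hypothetical host and paths, would be:

```shell
# Remote-to-local form (hypothetical host and paths):
#   ssh user@remotehost 'tar -czf - -C /remote/dir .' | tar -xzf - -C /local/dir
#
# The same pipe demonstrated locally: the first tar writes the archive
# to stdout ("-f -"), the second tar reads it from stdin ("-f -").
mkdir -p /tmp/tar_src /tmp/tar_dst
echo "payload" > /tmp/tar_src/file.txt
tar -czf - -C /tmp/tar_src . | tar -xzf - -C /tmp/tar_dst
```

The -C flag on each side changes into the given directory first, so the archive holds relative paths and unpacks cleanly at the destination.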
When using the "-z" option, tar compresses your data at gzip's default compression level (6). If you want a better compression ratio and have CPU cycles to spare, you can use "-j" (bzip2) instead, but on an old or virtual machine with a fast link you might end up with a lower overall transfer rate.
Oh, and as an edit: you can also pass the -C (compression) option to scp, which uses gzip as well, although the compression ratio will usually be slightly lower than with a tar/gzip combination. It is less to type, and you get nifty progress indicators as a bonus.