I don't do much manual system administration anymore. I treat my infrastructure as a programmable entity and manage it accordingly, with tools that automate configuration management, EC2 node maintenance, and so on. The tools in my toolbox:
- Ruby (my favorite scripting/tool language)
- Git (version control)
- Opscode's Chef (written in Ruby) (1)
- Capistrano (ad hoc mass-maintenance)
- Amazon's EC2 API tools for instance and image maintenance.
- RightScale's AWS gem (Ruby bindings for EC2)
(1) Disclosure: I work for Opscode. Other tools, such as Reductive Labs' Puppet, fill this space.
I do bundle up an AMI once I've got a node built the way I need for a specific function. For example, if I'm building a Rails app server, I'll get all the prerequisite packages installed first to save time on later builds.
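For reference, bundling an S3-backed AMI with Amazon's AMI/API tools of that era came down to a short command sequence. Here's a sketch that just prints the commands rather than running them; the key, cert, account ID, and bucket name are all placeholders, and exact flags can vary by tool version:

```ruby
# Rough S3-backed AMI bundling sequence using Amazon's AMI/API tools.
# Every credential, account ID, and bucket name here is a placeholder.
commands = [
  "ec2-bundle-vol -d /mnt -k pk.pem -c cert.pem -u 111122223333",
  "ec2-upload-bundle -b my-ami-bucket -m /mnt/image.manifest.xml " \
    "-a ACCESS_KEY -s SECRET_KEY",
  "ec2-register my-ami-bucket/image.manifest.xml",
]

# Printed for review; swap `puts` for `system` to actually run them.
commands.each { |c| puts c }
```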
When all else fails, I log into systems with SSH. I did manual system administration for many, many years, so this is old hat.
> Are you using some form of Windowing system and remote-desktop equivalent to access the box, or is it all command line?
I don't install any GUI on servers unless a package has a dependency and one gets auto-installed.
> Is there an equivalent to this in the Linux world? (transferring files)
I normally do two types of file transfer/file maintenance.
- Package installation
- Configuration files
For packages native to the platform, I use the standard package management tool, like APT or YUM. For source installs (something.tar.gz), I generally download the tarball with wget.
Configuration files are typically ERB templates managed by Chef.
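For anyone unfamiliar with ERB, here's a minimal standalone sketch of the kind of templating involved. The template text and variable values are invented for illustration, not taken from an actual cookbook; in Chef the template would live in a cookbook's templates/ directory and the values would come from node attributes:

```ruby
require 'erb'

# A made-up config snippet with two embedded Ruby expressions.
template = <<~TPL
  ServerName <%= server_name %>
  Listen <%= port %>
TPL

server_name = "app1.example.com"  # hypothetical values
port = 8080

# ERB evaluates the template against the current binding,
# substituting each <%= ... %> with the expression's value.
config = ERB.new(template).result(binding)
puts config
```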
I use SSH and SCP/SFTP to transfer files manually.
> Are you doing your config changes/script tweaks directly on the machine? Or do you have something set up on your local box to edit these files remotely? Or are you simply editing them remotely then transferring them at each save?
I keep everything related to managing systems in a source control repository. Here's my typical workflow for updating configuration on one or more systems, starting from my local workstation.
- Pull from master Git repository for others' changes.
- Edit file(s) locally (e.g., updating a configuration file).
- Commit the change, push to master.
- On Chef server (logged in via SSH), pull the latest change I just committed.
- Deploy the configuration to the appropriate place on the Chef server (I use Rake for this).
- Chef clients run on an interval, so they will pick up changes every 30 minutes. If I need something immediately, I run chef-client manually.
- Verify the change!
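On the interval point: chef-client's periodic mode just sleeps between runs, usually with a random splay added so a whole fleet of clients doesn't hit the server at the same moment. A toy sketch of that timing (the splay value is illustrative, not my actual setting):

```ruby
# chef-client in daemon mode wakes up roughly every `interval` seconds;
# a random `splay` is added so many clients don't hammer the server at once.
interval = 30 * 60   # the 30-minute interval mentioned above
splay    = 120       # illustrative value

def next_run_delay(interval, splay)
  interval + rand(splay)
end

delay = next_run_delay(interval, splay)
puts "next chef-client run in #{delay} seconds"
```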
> How are you moving files back and forth between EC2 and your local environment? FTP? Some sort of Mapped Drive via VPN?
There are a few locations where files I use on EC2 nodes might be stored.
- Chef server. Configuration templates mainly, some small packages too.
- GitHub. We store our code (open source projects) on GitHub. EC2 nodes can get to this easily (such as for a checkout of the latest version of something).
- Amazon S3 buckets. Some things get stored in a bucket.
I do a lot of work in EC2, primarily testing environments and changes. Because of these tools and this workflow, I spend more time on the things I actually care about and less time shuffling individual files and hand-tending specific configurations.
I think you want to look into lftp's mirror mode.
Rsync, the normal "default" admin choice, doesn't work over FTP.
I strongly recommend you migrate to shared hosting that supports SSH terminal access, which you can then run rsync over. By rolling your own solution, you are only digging yourself deeper into this crappy (your words) setup. I imagine migrating to a better shared hosting provider would take the same amount of time and effort, if not less.
Best Answer
If the files are already on an EBS volume (and if you care about them, why aren't they?):
- Create a snapshot of the EBS volume containing the files on the first instance.
- Create an EBS volume from that snapshot.
- Attach the EBS volume to the second instance.
The new EBS volume may be a bit slow at first while it fills in blocks from the snapshot, but it is usable right away.
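With Amazon's EC2 API tools, those three steps map onto three commands. A sketch that assembles them in Ruby without running anything; all IDs and the availability zone are placeholders, and in practice each command's output (the snapshot ID, then the new volume ID) feeds the next step:

```ruby
# The snapshot/restore/attach steps as EC2 API tool invocations.
# All IDs and the zone below are placeholders.
def migration_commands(src_volume:, snapshot:, new_volume:, instance:)
  [
    "ec2-create-snapshot #{src_volume}",                      # step 1
    "ec2-create-volume --snapshot #{snapshot} -z us-east-1a", # step 2
    "ec2-attach-volume #{new_volume} -i #{instance} -d /dev/sdf", # step 3
  ]
end

steps = migration_commands(src_volume: "vol-11111111",
                           snapshot:   "snap-22222222",
                           new_volume: "vol-33333333",
                           instance:   "i-44444444")
steps.each { |s| puts s }
```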
ALTERNATIVE (If the files are not already on an EBS volume):
- Attach a new EBS volume to the first instance.
- Copy the files from other disks to the new EBS volume.
- Move the EBS volume to the second instance.