We are running Windows Server 2008 R2 Enterprise on a diskless Cisco UCS B200 M3 blade server. We are currently going through an EMC SAN lifecycle upgrade. I need to perform this migration from the host, and we are unable to leverage SAN-based technologies for replication. What are my options for migrating from one SAN to another? I've attempted to ImageX the C: drive and redeploy the image; this failed. I also tried Windows Server Backup & Restore, which successfully backed up and redeployed the C: drive, but when I switched the initiator in the Cisco UCS blade profile over to the new SAN, the server would not boot. Is there follow-up work I need to complete? Is there another method I should be looking at that I am not currently considering? I have also considered the software RAID 1 mirroring native to the operating system, but I don't fully understand its implications. If I convert the (C:) disk, can I still boot from it after it's been migrated? See images. Thanks!
Windows 2008 R2 Boot-from-SAN Migration Methods
boot, migration, storage-area-network, windows-server-2008
Related Solutions
This answer has been edited after the question was clarified.
What other reasons cause clouds to prefer DAS?
Where "DAS" means Direct Attached Storage, i.e., SATA or SAS hard disk drives.
Cloud vendors all use DAS because it offers order-of-magnitude improvements in price/performance. It is a case of scaling horizontally.
In short, SATA hard disk drives and SATA controllers are cheap commodities. They are mass-market products and are priced very low. By building a large cluster of cheap PCs with cheap SATA drives, Google, Amazon, and others obtain vast capacity at a very low price point. They then add their own software layer on top. Their software handles multi-server replication for performance and reliability, monitoring, re-balancing replication after hardware failure, and other things.
You could take a look at MogileFS as a simpler representative of the kind of software that Google, Amazon, and others use for storage. It's a different implementation, of course, but it shares many of the same design goals and solutions as the large-scale systems. The GoogleFS paper is a good jumping-off point for learning more.
As stated later in the paper, clouds should use SAN or NAS because DAS is not appropriate when a VM moves to another server.
There are two reasons why SANs are not used.
1) Price. SANs are hugely expensive at large scale. While they may be the technically "best" solution, they are typically not used in very large-scale installations due to the cost.
2) The CAP theorem. Eric Brewer's CAP theorem shows that at very large scale you cannot maintain strong consistency while keeping acceptable reliability, fault tolerance, and performance. SANs are an attempt at implementing strong consistency in hardware. That may work nicely for a 5,000-server installation, but it has never been proven to work for Google's 250,000+ servers.
Result: So far the cloud computing vendors have chosen to push the complexity of maintaining server state to the application developer. Current cloud offerings do not provide consistent state for each virtual machine. Application servers (virtual machines) may crash and their local data be lost at any time.
Each vendor then has their own implementation of persistent storage, which you're supposed to use for important data. Amazon's offerings are nice examples: MySQL, SimpleDB, and Simple Storage Service (S3). These offerings themselves reflect the CAP theorem: the MySQL instance has strong consistency but limited scalability, while SimpleDB and S3 scale fantastically but are only eventually consistent.
Avg. Disk sec/Read: avg 0.013, max 0.041
Avg. Disk sec/Write: avg 0.008, max 0.153
These are the only relevant counters I see. Really. Queue lengths are very hard to judge.
For a high-end SAN, both the average and peak numbers are way too high. It looks like either an IO bottleneck or a configuration issue somewhere.
The performance of the machine seems very good to me, but being locked in a battle with Oracle to prove it is their software causing the disk issues, rather than the SAN itself, is quite frustrating.
Mostly because it is the SAN: it is slow. The numbers would be way too high even for a mid-range DAS system like mine (VelociRaptors, no SAS discs); for a real SAN they are extremely high.
But the maximum queue length has me worried; it ties in with what Oracle said about disk access being slow.
Now, this is the tricky thing. Queue length interpretation depends on so many factors that it is genuinely hard to judge. A 756k disk queue length means Oracle dumps a lot of requests on the SAN and the SAN does not keep up. That clearly indicates a bottleneck. But what do the numbers mean?
On the other hand, sec/write went from 0.008 to 0.153 seconds. 0.153 s is really slow, and 0.008 s is not really fast to start with (assuming a real SAN).
Definitely not an Oracle issue: your disk subsystem is the bottleneck.
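To put those latency counters in perspective, here is a rough back-of-envelope sketch (my addition, using the values quoted above): inverting the per-operation latency gives the throughput ceiling for a single outstanding request.

```shell
# For each observed Avg. Disk sec/Write value, print the latency in ms
# and the maximum writes/sec achievable at queue depth 1 (1 / latency).
for sec in 0.008 0.153; do
  awk -v s="$sec" 'BEGIN {
    printf "latency %.0f ms -> at queue depth 1, max %.1f writes/sec\n",
           s * 1000, 1 / s
  }'
done
# latency 8 ms -> at queue depth 1, max 125.0 writes/sec
# latency 153 ms -> at queue depth 1, max 6.5 writes/sec
```

At 153 ms per write, each in-flight request can complete only about 6.5 times per second, which is why the queue length balloons when Oracle pushes a heavy write load.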
Best Answer
To clone the drive, we ultimately booted to an Oracle Enterprise Linux (Red Hat-based) DVD, exited the installer, and used dd to copy the devices. This was a block-by-block copy between the two storage arrays.
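The answer does not give the exact commands; as a hedged sketch, a dd-based clone of a boot LUN typically looks like the following. The device names are placeholders I chose for illustration, and identifying the source and destination LUNs correctly is critical, since dd will happily overwrite the wrong device.

```shell
# Identify which block devices map to the old and new LUNs first
# (e.g. with `fdisk -l`, or by size); these names are placeholders.
SRC=/dev/sdX   # LUN on the old array (assumption: adjust for your host)
DST=/dev/sdY   # LUN on the new array (assumption: adjust for your host)

# Block-by-block copy. conv=noerror,sync keeps going past read errors
# and pads short blocks so offsets on the destination stay aligned.
dd if="$SRC" of="$DST" bs=1M conv=noerror,sync

# Rough integrity check: checksum both devices (only meaningful if the
# destination LUN is the same size and the copy completed cleanly).
md5sum "$SRC" "$DST"
```

Because the copy happens below the filesystem, the partition table, boot sector, and boot-from-SAN layout come across unchanged, which is what Windows Server Backup & Restore alone was apparently not preserving; the UCS service profile still has to point its boot target at the new array's WWPN/LUN afterwards.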