What Makes Cloud Storage (Amazon AWS, Microsoft Azure, Google Apps) Different from Traditional Data Center Storage Networking (SAN and NAS)?

cloud-computing, cloud-storage, direct-attached-storage, network-attached-storage, storage-area-network

There was some confusion about my question, so to keep it simple:

"What kind of storage do big cloud providers use and why?"

As far as I understand, all cloud providers use DAS, unlike typical data centers; however, I have not been able to find any official description of the storage networking differences between typical data centers and clouds.

Even though DAS has more disadvantages than SAN or NAS, I want to learn in detail why clouds use DAS, whether for storage or for application purposes.

Any resource or explanation that clears this up would be appreciated.

EDIT: While reading the paper "Networking Challenges and Resultant Approaches for Large Scale Cloud Construction" by David Bernstein and Erik Ludvigson (Cisco), I noticed that they mention:

Curiously we do not see Clouds from the major providers using NAS or SAN. The typical Cloud architecture uses DAS, which is not typical of Datacenter storages approaches.

But here there is a conflict: in my opinion, and as also stated later in the paper, clouds should use SAN or NAS, because DAS is not appropriate when a VM moves to another server yet still needs to access storage on the original server.

What other reasons lead clouds to prefer DAS over NAS or SAN?

Best Answer

This answer has been edited after the question was clarified.

What other reasons lead clouds to prefer DAS

Where "DAS" means Direct Attached Storage, i.e. SATA or SAS harddisk drives.

Cloud vendors all use DAS because it offers order-of-magnitude improvements in price/performance. It is a case of scaling horizontally.

In short, SATA hard disk drives and SATA controllers are cheap commodities. They are mass-market products and are priced very low. By building a large cluster of cheap PCs with cheap SATA drives, Google, Amazon, and others obtain vast capacity at a very low price point. They then add their own software layer on top. That software performs multi-server replication for performance and reliability, monitoring, re-balancing of replicas after hardware failure, and other tasks.
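
To make that software layer a bit more concrete, here is a minimal, purely illustrative Python sketch of replica placement and re-balancing after a node failure. The Cluster class, node names, and replication factor are all hypothetical; this is not any vendor's actual code, just the general idea.

    import random

    REPLICAS = 3  # hypothetical replication factor

    class Cluster:
        def __init__(self, nodes):
            # node name -> ids of objects held on that node's local (DAS) disks
            self.nodes = {n: set() for n in nodes}

        def put(self, obj_id):
            # place copies of the object on REPLICAS distinct nodes
            targets = random.sample(sorted(self.nodes), k=min(REPLICAS, len(self.nodes)))
            for n in targets:
                self.nodes[n].add(obj_id)

        def node_failed(self, dead):
            # re-balance: re-create every replica the dead node was holding
            lost = self.nodes.pop(dead)
            for obj_id in lost:
                holders = {n for n, objs in self.nodes.items() if obj_id in objs}
                candidates = sorted(set(self.nodes) - holders)
                missing = REPLICAS - len(holders)
                for n in random.sample(candidates, k=min(missing, len(candidates))):
                    self.nodes[n].add(obj_id)

    cluster = Cluster(["node-a", "node-b", "node-c", "node-d"])
    cluster.put("photo-123")
    cluster.node_failed("node-a")  # copies held by node-a are rebuilt elsewhere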

You could take a look at MogileFS as a simpler representative of the kind of software that Google, Amazon, and others use for storage. It's a different implementation, of course, but it shares many of the same design goals and solutions as the large-scale systems. If you want to dig deeper, GoogleFS (GFS) is a good jumping-off point for learning more.
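
As a very rough illustration of the GFS idea (per the published GFS paper, files are split into 64 MB chunks, each replicated on several chunkservers, with a master tracking chunk locations), here is a toy sketch. The file path, server names, and functions are made up, not the real GFS interfaces.

    CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB chunks, as described in the GFS paper

    def chunk_index(offset):
        # which chunk of a file a given byte offset falls into
        return offset // CHUNK_SIZE

    # the master's in-memory metadata: (file, chunk index) -> replica locations
    chunk_locations = {
        ("/logs/app.log", 0): ["chunkserver-17", "chunkserver-42", "chunkserver-81"],
    }

    def locate(path, offset):
        # a client asks the master which chunkservers hold the data it wants
        return chunk_locations.get((path, chunk_index(offset)), [])

    print(locate("/logs/app.log", 10 * 1024 * 1024))  # offset inside chunk 0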

stated later in the paper, clouds should use SAN or NAS because DAS is not appropriate when a VM moves to another server

There are two reasons why SANs are not used.

1) Price. SANs are hugely expensive at large scale. While they may be the technically "best" solution, they are typically not used in very large installations because of the cost.

2) The CAP theorem. Eric Brewer's CAP theorem shows that at very large scale you cannot maintain strong consistency while also keeping acceptable availability and fault tolerance (and, in practice, acceptable performance). SANs are an attempt at providing strong consistency in hardware. That may work nicely for a 5,000-server installation, but it has never been proven to work at the scale of Google's 250,000+ servers. A toy quorum sketch after this list illustrates the trade-off.
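
The quorum sketch below is my own illustration, not any provider's design: with N replicas, requiring read and write quorums such that R + W > N gives strongly consistent reads, but it also forces the system to refuse writes when too few replicas are reachable, which is exactly the trade-off the CAP theorem describes.

    N = 3   # replicas per key
    W = 2   # write quorum
    R = 2   # read quorum; R + W > N makes read and write quorums overlap

    replicas = [{} for _ in range(N)]  # each dict stands in for one node's local disk

    def write(key, value, reachable):
        # in strong-consistency mode, refuse the write unless a quorum is reachable
        if len(reachable) < W:
            return False
        for i in reachable:
            replicas[i][key] = value
        return True

    def read(key, reachable):
        # R + W > N guarantees any read quorum overlaps the last write quorum,
        # so at least one replica read here holds the latest acknowledged value
        if len(reachable) < R:
            return None
        return [replicas[i].get(key) for i in reachable[:R]]

    assert write("x", 1, reachable=[0, 1, 2])   # all replicas up: write accepted
    assert not write("x", 2, reachable=[2])     # partition: the write is refused to
                                                # preserve consistency; an eventually
                                                # consistent store would accept it on
                                                # one replica and reconcile later
    assert 1 in read("x", reachable=[0, 1])     # overlapping quorum sees the write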

Result: So far, cloud computing vendors have chosen to push the complexity of maintaining server state onto the application developer. Current cloud offerings do not provide consistent state for each virtual machine. Application servers (virtual machines) may crash, and their local data may be lost, at any time.
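
In practice this pushes a pattern onto application code: treat the VM's local, DAS-backed disk as disposable scratch space and write anything that must survive a crash to the provider's persistent store before reporting success. Here is a hedged Python sketch of that pattern; durable_store is a hypothetical client object standing in for S3, SimpleDB, or a managed database, not a real SDK.

    import json
    import tempfile

    def process_order(order, durable_store):
        # scratch work can live on this VM's local disk...
        with tempfile.NamedTemporaryFile("w", suffix=".json") as scratch:
            json.dump(order, scratch)
            scratch.flush()
            result = {"order_id": order["id"], "status": "processed"}

        # ...but the outcome goes to durable storage before we report success,
        # because this VM and its local data may disappear at any time
        durable_store.put("orders/" + str(order["id"]), json.dumps(result))
        return result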

Each vendor then has its own implementation of persistent storage, which you're supposed to use for important data. Amazon's offerings are nice examples: MySQL, SimpleDB, and Simple Storage Service (S3). These offerings themselves reflect the CAP theorem: the MySQL instance has strong consistency but limited scalability, while SimpleDB and S3 scale fantastically but are only eventually consistent.
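
To show what "eventually consistent" means for application code in such a store, here is a small, assumption-laden sketch of a read-with-retry pattern: after a write, a read may briefly return an older value, so callers either retry or tolerate staleness. The store object, its get method, and the version field are hypothetical, not the real AWS SDK.

    import time

    def read_with_retry(store, key, expected_version, attempts=5, delay=0.5):
        # poll until the replica we hit has caught up, or give up and accept staleness
        for _ in range(attempts):
            item = store.get(key)
            if item is not None and item["version"] >= expected_version:
                return item          # the replica has converged to (at least) our write
            time.sleep(delay)        # give replication a moment to catch up
        return store.get(key)        # best effort: the result may still be stale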