IBM V3700 SAN / How to add additional disks to an existing pool

Tags: raid, storage-area-network

We have an IBM V3700 SAN storage unit (36 x 300 GB SAS HDDs) connected to four servers (Windows 2008) via FC. Each server has a few disks allocated in RAID5 mode.

There are 8 unused (candidate) disks available in the slots. We want to add 2 disks per server to EXPAND the existing pools. For example, each server has a G: drive, and we want to expand the G: drive using these 2 additional disks.

What are my best options? How can I add 2 disks to each server's pool? I see it gives me a few RAID options like RAID0, RAID10, and RAID5. Is it possible to simply add 2 disks to the existing RAID5 to get maximum space, with failover covered by the existing RAID5 spare?

Example:

An 8-drive RAID5 is mounted on SERVER1 as volume G:, so 2 TB of space is available. Now I want to add 2 disks' worth of space to make it 2.6 TB. Can I add two disks' space to it? Do I have to select RAID5 for them and then EXPAND the existing G: drive to 2.6 TB? Is that possible?

Or what else should I do? Please suggest.

Best Answer

Per the documentation (p. 393), you can only expand volumes, not pools:

8.4.8 Expanding a volume

The IBM Storwize V3700 can expand volumes. This feature should be used only if the host OS supports it. This capability increases the capacity that is allocated to the particular volume by the amount specified. To expand a volume, complete the following steps:

...

However, you can migrate your volumes to a new pool that is larger, and then expand your volume:

8.4.9 Migrating a volume to another Storage Pool

The IBM Storwize V3700 supports online volume migration while applications are running. Using volume migration, volumes can be moved between Storage Pools. ...

Assuming you currently have 3-disk RAID5 pools for your hosts, you can (see the capacity sketch after these steps):

  1. Create a new pool using 5 disks in a RAID5 configuration.
  2. Migrate the existing volume(s) for one server to the new pool.
  3. Expand the volume(s) that are now on the new pool.
  4. Take the disks freed up and repeat for each server.
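
For a rough sanity check of the capacity at each step, here's some back-of-the-envelope math (plain Python; it assumes the 300 GB drives from the question and the usual RAID5 rule that one drive's worth of capacity goes to parity):

```python
# Back-of-the-envelope RAID5 capacity math (decimal GB, ignoring
# array metadata and formatting overhead).

DRIVE_GB = 300  # 300 GB SAS drives, per the question

def raid5_usable_gb(n_drives):
    """Usable capacity of an N-drive RAID5 array (one drive lost to parity)."""
    return (n_drives - 1) * DRIVE_GB

# Step 1: a new 5-disk RAID5 pool built from the candidate disks
print(raid5_usable_gb(5))   # 1200 GB available in the new pool

# The question's own example: an 8-drive RAID5 today...
print(raid5_usable_gb(8))   # 2100 GB, i.e. roughly the 2 TB G: drive
# ...and the equivalent of 10 drives after adding 2 disks' worth of space
print(raid5_usable_gb(10))  # 2700 GB, roughly the 2.6 TB target
```

The numbers the GUI reports will differ a bit (binary vs. decimal units, extent rounding), but the ratios hold.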

Also, 8 drives is getting a bit large for RAID5. You'll almost certainly get better performance striping your volume(s) across two 5-drive RAID5 arrays, especially if you match the RAID5 stripe size to the file system block size and align your disk partition(s) with the RAID stripe boundaries. You won't have quite as much available storage, since you'll have more parity drives, but in exchange your availability improves.
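
To put a number on the capacity trade-off, the same kind of sketch (300 GB drives, one parity drive per RAID5 array):

```python
DRIVE_GB = 300

def raid5_usable_gb(n_drives, n_arrays=1):
    """Total usable capacity of n_arrays identical RAID5 arrays."""
    return n_arrays * (n_drives - 1) * DRIVE_GB

ten_wide  = raid5_usable_gb(10)             # one 10-drive RAID5: 2700 GB
two_fives = raid5_usable_gb(5, n_arrays=2)  # two 5-drive RAID5s: 2400 GB
print(ten_wide - two_fives)                 # 300 GB: the one extra parity drive
```

So the whole cost of the second array is one drive's worth of capacity.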

By using a power-of-two number of data disks in a RAID5 or RAID6 array, you can much more easily match the RAID stripe size and alignment to the "natural" IO size used to read/write data. For example, if the filesystem block size happens to be 128KB, you can set up a 5-disk RAID5 array (four data disks) to have a RAID stripe size of 128KB. You can't do that with an 8-disk RAID5 array, whose seven data disks don't divide 128KB evenly.
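
Here's a quick way to see that in code; the sketch assumes (as most controllers do) that the per-disk strip size must itself be a power of two:

```python
def strip_size_for(fs_block_kb, n_data_disks):
    """Per-disk strip size (KB) so one full stripe equals one filesystem
    block, or None if no power-of-two strip size can achieve that."""
    if fs_block_kb % n_data_disks:
        return None                     # block doesn't split evenly
    strip = fs_block_kb // n_data_disks
    is_pow2 = strip & (strip - 1) == 0  # controllers offer power-of-two strips
    return strip if is_pow2 else None

# 5-disk RAID5 = 4 data disks: 128 KB / 4 = a clean 32 KB strip
print(strip_size_for(128, 4))  # 32
# 8-disk RAID5 = 7 data disks: 128 KB won't split into equal power-of-two strips
print(strip_size_for(128, 7))  # None
```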

A write to a RAID5 or RAID6 array that doesn't completely overwrite an entire stripe results in a "read-modify-write" operation, best explained here (http://www.infostor.com/index/articles/display/107505/articles/infostor/volume-5/issue-7/features/special-report/raid-revisited-a-technical-look-at-raid-5.html):

Read-modify-write

Consider a stripe composed of four strips of data and one strip of parity. Suppose the host wants to change just a small amount of data that takes up the space on only one strip within the stripe. The RAID controller cannot simply write that small portion of data and consider the request complete. It must also update the parity data. Remember that the parity data is calculated by performing XOR operations on every strip within the stripe. So, when one or more strips change, parity needs to be recalculated.

...
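
To make the XOR mechanics concrete, here's a tiny sketch (pure Python, short byte strings standing in for strips). It also shows why a controller can service a small write with just two reads (old strip plus old parity) instead of rereading the whole stripe:

```python
from functools import reduce

def xor_strips(*strips):
    """XOR equal-length byte strings together (the RAID5 parity operation)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

# Four data strips and their parity, as in the quoted example
d = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_strips(*d)

# Rewrite one strip: new parity = old parity XOR old strip XOR new strip,
# so only the old strip and the old parity need to be read back.
new_d1 = b"XXXX"
parity = xor_strips(parity, d[1], new_d1)
d[1] = new_d1

assert parity == xor_strips(*d)  # matches a full recalculation
```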

So, take a filesystem configured to use 8KB blocks. Now, what happens when that 8KB block is written to a 10-drive RAID5 array? Oh, and the array was built with a 1MB block size per disk because "bigger is better and faster". But no, it isn't. That means the stripe size across the RAID5 array is a full nine megabytes. So to write that 8KB in the middle of the stripe, the RAID controller needs to read 9 MB of data, modify it with the new 8KB of data, recompute the parity of the stripe, and then write the new data and parity - at the least. The controller may need to write the entire 9MB. There are a lot of optimizations that can be done - and good RAID controllers do them well - but logically that's what has to happen. And lower-end RAID controllers don't do them at all. So that 8KB write might very well turn into a 9MB read followed by a 9MB write.
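
The amplification is easy to compute. A worst-case sketch (sizes in KB; "stripe" here means the data strips only, as in the paragraph above, and parity traffic is left out):

```python
def worst_case_rmw_kb(write_kb, strip_kb, n_data_disks):
    """(KB read, KB written) in the worst case for one write to RAID5,
    ignoring controller optimizations and parity traffic."""
    stripe_kb = strip_kb * n_data_disks
    if write_kb % stripe_kb == 0:
        return (0, write_kb)        # aligned full-stripe write: nothing to read
    return (stripe_kb, stripe_kb)   # read the whole stripe, write it back

# 8 KB write into a 10-drive RAID5 (9 data disks) with 1 MB strips:
print(worst_case_rmw_kb(8, 1024, 9))  # (9216, 9216): a 9 MB read + a 9 MB write
```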

I have no idea how good the RAID controllers are in an IBM V3700.

Now, take a 5-drive RAID5 array and a file system with a 64KB block size. The array was built with a 16KB per-disk block size, so with 4 data disks the stripe size is 64KB. Now, if the disk partitions are properly aligned, writing a 64KB block matches the array stripe. The controller computes the parity bits for the data and then just writes it to the disks, overwriting the data that was there.
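
And the same worst-case arithmetic for this configuration, for comparison:

```python
def worst_case_rmw_kb(write_kb, strip_kb, n_data_disks):
    """Same worst-case model as the previous sketch."""
    stripe_kb = strip_kb * n_data_disks
    return (0, write_kb) if write_kb % stripe_kb == 0 else (stripe_kb, stripe_kb)

print(worst_case_rmw_kb(64, 16, 4))   # (0, 64): aligned full-stripe write, no read
print(worst_case_rmw_kb(8, 1024, 9))  # (9216, 9216): 9 MB read + 9 MB write
```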

Guess which one is faster.
