First off, I'd say you need to take a step back here and learn the difference between block and file storage; from the phrasing of your question, I'm not convinced you truly understand it.
You say that you have an iSCSI target machine and mention ZFS. Is your iSCSI target system running Solaris/OpenSolaris/NexentaOS or BSD? If not, you can't realistically use ZFS. (ZFS-on-FUSE might work, but I would not depend on it for a server.)
As for the Windows 2008 cluster systems seeing a ZFS file system on an iSCSI LUN: not going to happen. This is where I don't think you understand the difference between block and file storage. I don't know of many good sites that explain it well (maybe someone else reading this can suggest one), but here is a halfway decent article on the difference:
http://findarticles.com/p/articles/mi_m0DUJ/is_12_106/ai_n27577413/
I always visualize storage in layers; in this case you would most likely have:
1) Disks
2) RAID groups (either a ZFS zpool or a traditional RAID group made up of a couple of disks)
3) Volume (Think C: in Windows or LVM in Linux)
4) Filesystem (NTFS in Windows, ZFS on Solaris, ext3/4 on Linux)
5) Files.
Depending on the iSCSI target implementation, the LUNs could be shared up from raw devices (layer #3) or as large files on a filesystem (layer #5). A ZFS-based target would almost certainly use files residing on the ZFS filesystem.
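For example, on an OpenSolaris-style box using COMSTAR, a file-backed LUN could be set up roughly like this (pool name, file size and paths are just placeholders, and your particular target stack may use different tooling entirely):

    # Filesystem to hold the LUN backing files
    zfs create tank/luns
    # Sparse backing file sized to whatever you want the Windows hosts to see
    mkfile -n 500g /tank/luns/cluster-lun0
    # Register the file as a SCSI logical unit and expose it over iSCSI
    sbdadm create-lu /tank/luns/cluster-lun0
    stmfadm add-view <GUID printed by sbdadm>
    itadm create-target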
When you export the LUNs from the iSCSI target, the underlying ZFS filesystem becomes invisible to the Windows systems using it. To them it's just a blank disk at first. Then you simply format the LUN with NTFS and install clustering (there's a rough diskpart sketch after the list below). That adds the following layers to our diagram:
5) LUN file on target (Great big binary blob)
6) iSCSI target layer
7) Volume (i.e. D:)
8) Filesystem (NTFS)
9) User data files.
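As a very rough sketch of what that looks like from the Windows side (the disk number, label and drive letter are made up; check list disk on your own boxes first):

    diskpart
    DISKPART> list disk
    DISKPART> select disk 1
    DISKPART> online disk
    DISKPART> create partition primary
    DISKPART> format fs=ntfs quick label=ClusterData
    DISKPART> assign letter=E

After that the cluster setup treats it like any other shared disk.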
I hope I've been somewhat clear :-)
And to address your central issue of extending NTFS-formatted LUNs: that's pretty simple. Once the LUN itself has been grown on the target, use the extend command in diskpart and the volume will instantly expand to fill all the available space. I do it all the time on iSCSI LUNs shared up from a NetApp. Be aware though that extending the C: drive can be awkward, as you have to reboot into WinPE to take the volume offline for a moment to do the extend. Perhaps this is fixed in Windows 2008; I haven't tried it there yet. (It's definitely not fixed in Windows 2003, which needs diskpart from within WinPE.)
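For the record, what I do after growing the LUN on the filer and rescanning disks in Windows is roughly this (the volume number is obviously just an example; check the list volume output first):

    diskpart
    DISKPART> list volume
    DISKPART> select volume 3
    DISKPART> extend

With no arguments, extend simply grabs the contiguous unallocated space that now sits after the volume.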
Design it the right way and you'll minimize the chances of data loss with ZFS. You haven't explained what you're storing on the pool, though. In my applications, it's mostly serving VMWare VMDK's and exporting zvols over iSCSI. 150TB isn't a trivial amount, so I would lean on a professional for scaling advice.
I've never lost data with ZFS.
I have, however, experienced just about everything else.
But through all of that, there was never an appreciable loss of data. Just downtime. For the VMWare VMDK's sitting on top of this storage, a fsck or reboot was often necessary following an event, but no worse than any other server crash.
As for a ZIL device loss, that depends on design, what you're storing and your I/O and write patterns. The ZIL devices I use are relatively small (4GB-8GB) and function like a write cache. Some people mirror their ZIL devices. Using the high-end STEC SSD devices makes mirroring cost-prohibitive. I use single DDRDrive PCIe cards instead. Plan for battery/UPS protection and use SSD's or PCIe cards with a super-capacitor backup (similar to RAID controller BBWC and FBWC implementations).
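Adding or mirroring a ZIL device is a one-liner either way; something like this, with made-up device names:

    # single dedicated log device (e.g. a DDRdrive or a supercap-backed SSD)
    zpool add tank log c2t0d0
    # or, if the budget stretches to it, a mirrored ZIL
    zpool add tank log mirror c2t0d0 c2t1d0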
Most of my experience has been on the Solaris/OpenSolaris and NexentaStor side of things. I know people use ZFS on FreeBSD, but I'm not sure how far behind it is on zpool versions and other features. For pure storage deployments, I'd recommend going the NexentaStor route (and talking to an experienced partner), as it's a purpose-built OS and there are more critical deployments running on Solaris derivatives than on FreeBSD.
It's not a direct answer to your question, but a more traditional architecture for this sort of thing would be to use HAST and CARP to take care of the storage redundancy.
A basic outline (see the FreeBSD HAST and CARP documentation for the full details):
Machine A ("Master")
Machine B ("Slave")
(HAST will mirror all the data from the Master to the Slave for you)
Both Machines
All the failover magic will be handled for you.
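To give a feel for it, the HAST half of that outline boils down to something like this (hostnames, addresses and device names are placeholders; the Handbook has the full procedure, including the CARP and devd failover pieces):

    # /etc/hast.conf, identical on both machines
    resource target0 {
            on machineA {
                    local /dev/da1
                    remote 10.0.0.2
            }
            on machineB {
                    local /dev/da1
                    remote 10.0.0.1
            }
    }

    # on both nodes:
    hastctl create target0
    service hastd onestart
    # on the current master only:
    hastctl role primary target0
    # the replicated device then appears as /dev/hast/target0,
    # which is what you build your iSCSI target (or pool) on

CARP then floats a shared IP between the two boxes so the initiators always talk to whichever node is currently primary.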
The big caveat here is that HAST only works on a Master/Slave level, so you need pairs of machines for each LUN/set of LUNs you want to export.
Another thing to be aware of is that your storage architecture won't be as flexible as it would be with the design you proposed:
With HAST you're limited to the number of disks you can put in a pair of machines.
With the iSCSI mesh-like structure you proposed, you can theoretically add more machines exporting more LUNs and grow as much as you'd like (up to the limit of your network).
That loss of flexibility buys you a tested, proven, documented solution that any FreeBSD admin will understand out of the box (or be able to figure out from the Handbook) -- to me that's a worthwhile trade-off :-)