To increase storage space by replacing only a few of the disks at a time, you should use mirrored vdevs striped together (which is essentially RAID 10).
In your case, with 4 drives, that means working toward something like this:
zpool
mirror
disk1
disk2
mirror
disk3
disk4
This would provide you with 2 TB of storage (assuming all disks are 1 TB) and good redundancy: a single disk crash can never take down the array, and with two simultaneous crashes the array only dies if the second failed disk happens to be the mirror partner of the first (1 of the 3 remaining disks, i.e. a 33% chance).
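You can sanity-check that 33% figure with a quick enumeration. This is just a throwaway sketch (the disk names are arbitrary placeholders, not real devices):

```python
from itertools import combinations

# Striped mirrors: the pool survives as long as no mirror pair
# loses both of its disks.
mirrors = [("disk1", "disk2"), ("disk3", "disk4")]
disks = [d for pair in mirrors for d in pair]

def pool_fails(dead):
    return any(a in dead and b in dead for a, b in mirrors)

# Every possible pair of simultaneous disk failures:
pairs = list(combinations(disks, 2))
fatal = [p for p in pairs if pool_fails(set(p))]
print(len(fatal), "of", len(pairs))  # 2 of 6 -> ~33% chance of array failure
```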
Now, to get there, I would buy the 2 new 1 TB disks and create the pool with them:
zpool create zpool mirror disk1 disk2
Then move your data off the DLINK to the newly created pool.
Once that is done, you can scavenge the DLINK disks and add them to the pool, to increase storage:
zpool add zpool mirror disk3 disk4
If you later want to increase storage even more, you can do so by adding more vdevs (preferably also mirrors) OR by replacing just 2 of the 4 disks. Replacing goes as follows:
zpool offline zpool disk3
# remove physical disk3 and insert the new, bigger disk in its place
zpool replace zpool disk3
# wait for the resilver to finish (check with 'zpool status')
# after the resilver, do the same with disk4
# once both disks are replaced, the vdev is bigger, increasing the
# size of the pool (set 'zpool set autoexpand=on zpool' beforehand,
# or use 'zpool online -e', so the pool actually claims the extra space)
Now, let's look at the other option. If you had made 1 raidz vdev like so:
zpool
raidz
disk1
disk2
disk3
disk4
You would have 3 TB of storage, but to increase that storage by replacing disks alone (and not adding any), you would have to replace ALL 4 disks (one by one, of course) before the pool size increases! And this configuration has a 100% chance of array failure if 2 disks crash simultaneously.
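The same kind of quick sketch shows why raidz1 fares worse here (again, placeholder names, not real devices):

```python
from itertools import combinations

# raidz1 tolerates exactly one failed disk, so any combination of
# two simultaneous failures destroys the vdev (and the whole pool).
disks = ["disk1", "disk2", "disk3", "disk4"]
pairs = list(combinations(disks, 2))
fatal = list(pairs)                  # every pair of failures is fatal
print(len(fatal), "of", len(pairs))  # 6 of 6 -> 100% chance of array failure

# Usable capacity: one disk's worth of space goes to parity.
capacity_tb = (len(disks) - 1) * 1   # 3 TB with four 1 TB disks
```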
The raidz configuration would also be slower than the striped mirrors, since raidz parity is more computationally intensive, while striping over mirrors actually improves both read and write performance.
With 'normal' hard disks (non-SSD), the striped mirrors will likely saturate your gigabit connection for sequential reads and writes, because ZFS can combine the disks' bandwidth (remember, 1 Gb/s is only ~125 megaBYTES/s, and a standard hard disk gives you around 90 MB/s). I don't think the raidz configuration above would manage that on consumer hardware.
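The bandwidth claim is simple arithmetic. The ~90 MB/s per-disk figure is a rough assumption for consumer spinning disks, not a measurement:

```python
GIGABIT_MBPS = 1000 / 8  # 1 Gb/s network link ~= 125 MB/s
DISK_MBPS = 90           # assumed sequential throughput of one spinning disk

# With two mirror vdevs striped, sequential writes stripe across both
# vdevs; sequential reads can additionally be served from both sides
# of each mirror.
write_mbps = 2 * DISK_MBPS  # ~180 MB/s
read_mbps = 4 * DISK_MBPS   # up to ~360 MB/s
print(write_mbps > GIGABIT_MBPS)  # True: even writes fill the gigabit link
```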
To conclude, the score for striped mirrors / RAID 10 with your amount of disks is:
+ max redundancy
+ maintenance
- available storage space
+ speed
The score for raidz is:
- max redundancy
- maintenance
+ available storage space
- speed
I would say striped mirrors win :)
A final tip: definitely read up on the how-to and the why before starting! Maybe even simulate the whole procedure in a virtual machine first. I'm thinking particularly of the step where you add the second mirror vdev! If you do it wrong, you might end up with a different configuration than you had hoped for, and ZFS is very unforgiving in those cases, since it doesn't allow you to remove vdevs from the pool or disks from raidz vdevs!! (Removing disks from mirror vdevs is allowed, however.)
Also, be future-proof: label and partition-align your disks, so you don't get into trouble with Advanced Format (4K-sector) drives! For more information on the intricacies of ZFS and 4K drives, I suggest you read this thread on the FreeBSD forum.
ZFS isn't designed to scale, or cluster, across more than a single Solaris (or other) system. It's not a distributed filesystem, so any distributed setup involving ZFS will need another layer that binds together the mountpoints exposed by the individual systems in your cluster.
Why the ZFS insistence? It sounds like you want something more like the Hadoop filesystem.