There are many, but I'll be talking about just two.
IOMETER
IOMeter is a very full-featured benchmarking tool. You'll frequently find it used by the major benchmarking sites when discussing the performance of new storage systems. Anandtech used it for ages, and may still use it as part of their overall benchmarking suite.
IOZONE
My personal favorite, IOZone has fewer features than IOMeter but is easier to use. You can get a reasonably good benchmark with very few flags, and can tune it to get exactly what you're looking for. It also has a throughput-test mode where it spawns multiple simultaneous testing threads, so you can see the difference between one big process hammering storage and lots of processes hammering storage.
Both of these tools blow hdparm -t out of the water when it comes to relevance. hdparm doesn't move enough data to exclude the caching effects of system memory.
When benchmarking virtual spaces, be aware that storage performance is by necessity much less predictable than it is on dedicated storage. Both of these tests can saturate the I/O channel between you and storage, which in turn means that some other VM experiencing high I/O can throw off your numbers, and your benchmarking throw off theirs in return.
IOZone, as I said, is easy to use. This will give you a really solid I/O workout:
iozone -a -s 8G
That'll run the full benchmarking series on an 8GB file. You want the file to be larger than the RAM in the box, so your size may need to be larger or smaller depending on what you're working with.
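For the throughput-test mode mentioned above, a sketch along these lines works; the specific thread count, file size, and record size here are illustrative, not a recommendation:

```shell
# Throughput mode: spawn multiple worker threads, each with its own file.
# -t 4      run 4 simultaneous testing threads
# -s 2g     each thread uses a 2GB file (4 x 2GB = 8GB total, so the
#           working set still exceeds RAM on a typical test box)
# -r 128k   use a 128KB record size
# -i 0 -i 1 run only the write/rewrite and read/reread tests
iozone -t 4 -s 2g -r 128k -i 0 -i 1
```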
Not that it helps you at this juncture, but this is precisely why you'll never see me advising people to use raidz1 - and why, for mirror sets on huge disks, I often suggest triple mirrors.
It is /unlikely in the extreme/ that any act you can take is going to get tank back online. I must start with that, so as not to raise your hopes.
1: Make sure the disks are safe - even if that means unplugging all of them.
2: Update to the latest version of FreeBSD - you want the latest ZFS bits you can get your hands on.
3: Put the original gpt/ta4 (the one that is supposedly 'OK' and just experiencing read errors) back into the system - or into a new system with newer ZFS bits - along with all the others if you've removed them, boot it, and run the following, in order, until one works. Be forewarned: these are not safe, especially the last one; in their attempts to recover the pool, they're likely to roll back and thus lose recently written data:
- zpool import -f tank
- zpool import -fF tank
- zpool import -fFX tank
If all three fail, you're outside the realm of "simple" recovery. Some Googling for 'importing bad pools', 'zdb', 'zpool import -F', 'zpool import -X', 'zpool import -T' (danger!), and the like might turn up blogs and information on recovery attempts made by others, but you're already on very dangerous and potentially further-data-damaging ground at that point, and you're rapidly entering the territory of paid recovery services (and not from traditional data recovery companies - they have zero expertise with ZFS and will be of no use to you).
Note: A more precise and 'safer' method would be 'zpool import -o readonly=on -f -T [txg_id] tank'. However, for this to work, you'd need to use zdb on your own first to locate a seemingly healthy recent txg_id, and I'm not prepared to explain all of that here. Google will be your friend - take no action until you've read enough to feel somewhat comfortable with what you're doing, and trust no single source.
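For orientation only, the shape of that 'safer' approach looks something like the below - the device path assumes the gpt/ta4 label from above, and the txg id shown is a made-up placeholder, not a value to copy:

```shell
# Dump the labels (-l) and uberblocks (-u) from one pool member.
# Each uberblock entry lists a txg number and timestamp; you're
# looking for a recent txg from before the trouble started.
zdb -ul /dev/gpt/ta4

# Then attempt a read-only rollback import at that txg.
# 1234567 is a placeholder - substitute the txg you identified.
zpool import -o readonly=on -f -T 1234567 tank
```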
Note 2: the 'safest' thing to do would be to immediately contact someone capable of ZFS recovery services.
Note 3: the next 'safest' thing would be to put the drives in a known-good system and dd each entire raw drive to a new disk, giving you theoretically identical copies of your disks. That would mean buying a like number of new disks, preferably of similar or identical size/type to the old ones (though that's not strictly necessary). Only then would you attempt any of the above on one set of drives, while keeping the other set aside for safekeeping.
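A sketch of that dd-cloning step - the device names here are hypothetical, so triple-check if= and of= against your own system before running anything, since reversing them destroys a source disk:

```shell
# Clone the old disk (da1, hypothetical) to the new disk (da5,
# hypothetical) sector-for-sector.
# conv=noerror,sync keeps going past read errors, padding unreadable
# blocks with zeros; bs=1m makes the copy dramatically faster than
# the default block size.
dd if=/dev/da1 of=/dev/da5 bs=1m conv=noerror,sync
```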
Best Answer
I found the problem by myself.
I saw an article mentioning CMR (conventional magnetic recording) and SMR (shingled magnetic recording), the latter of which offers decreased write performance. I checked my drives and I realized that I accidentally bought hard drives with SMR :(
I will keep a mirror pool until I replace the drives with new CMR drives. When I have the new drives, I will use a mirror pool again.
Thank you all!