Synology Storage Manager rejects healthy disks as Critical due to reset command errors

Tags: hard drive, sata, storage, synology

I am trying to install a new HDD in my DS1515+. The hard drives are brand new Seagate Exos X.

I bought 3 of these drives. 1 of them works fine and is now part of my storage pool, but the other 2 give me an error when I install them in the DS: "Multiple reset command errors have occurred…".

From Storage Manager, I cannot continue, and the system won't let me initialize the disks to make them part of a storage pool. I've tried secure erasing one of the disks, with no luck.

I've looked at the dmesg output in the Linux terminal of the DS, and there are no errors for the disks that are causing trouble.
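If you want to repeat the check, I just filtered dmesg for ATA/reset messages, something like this (the grep pattern is only a rough filter, not an exact match for any particular message):

root@diskstation:/# dmesg | grep -iE 'ata[0-9]|reset'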

All the extended views likewise show that no errors regarding reset/re-identification/re-connection etc. have occurred for these disks. So why is Storage Manager stuck on the Critical status?

Both disks work completely fine when connected to another computer, so I believe they are physically 100% OK.

I suspect that the disks are rejected by the DS because of some historic data. Earlier, I tried connecting the two problematic disks through the 2 eSATA ports of the DS1515+, but the disks never appeared in Storage Manager, and I saw some reset/connection errors in the dmesg log (using the Linux terminal). This was probably caused by old, unstable eSATA cables. My theory is that those events, from when the disks were connected by eSATA, caused the DS to "blacklist" these disks, since they once produced "reset commands" due to the flaky cables.

How can I force the DS to accept these disks and let me use them? Is there a way to reset any historic knowledge about these disks and let the DS re-evaluate them?

[Screenshots: Storage Manager and the extended disk views, showing the Critical status but no recorded errors]

Best Answer

I managed to fix this myself. My assumption was right: the unstable eSATA connection a few days earlier had made the NAS believe the drives were faulty.

What I did to fix it: first, I grepped /var for one of the drives' serial numbers:

root@diskstation:/# grep -r ZR52AGQE /var/*
...
Binary file /var/log/synolog/.SYNODISKDB matches
Binary file /var/log/synolog/.SYNODISKHEALTHDB matches
Binary file /var/log/synolog/.SYNODISKTESTDB matches
...

I dumped these databases to SQL files using the sqlite3 command-line tool, imported the dumps into a database on my laptop, and inspected the contents.
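The dump-and-import step was roughly this (the output filenames are just examples):

root@diskstation:/var/log/synolog# sqlite3 .SYNODISKDB .dump > /tmp/synodiskdb.sql
root@diskstation:/var/log/synolog# sqlite3 .SYNODISKHEALTHDB .dump > /tmp/synodiskhealthdb.sql
root@diskstation:/var/log/synolog# sqlite3 .SYNODISKTESTDB .dump > /tmp/synodisktestdb.sql

On the laptop, sqlite3 inspect.db < synodiskdb.sql then re-creates the tables for inspection.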

In the logs table, I could then see the events of the connection resets from the eSATA connection a few days earlier.
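You can also pull the relevant rows out directly on the NAS; something like this lists them (logs and serial are the table and column used in the DELETE statements below; the remaining columns vary):

root@diskstation:/var/log/synolog# sqlite3 .SYNODISKDB "SELECT * FROM logs WHERE serial = 'ZR52AGQE';"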

[Screenshot: the eSATA reset events for the drives in the logs table]

Then, inside sqlite3, I deleted the offending log entries for both serials:

root@diskstation:/var/log/synolog# sqlite3 .SYNODISKDB

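-- remove the historic reset-error events for both drive serials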
DELETE FROM logs WHERE serial = 'ZR51TPNF';
DELETE FROM logs WHERE serial = 'ZR52AGQE';
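
-- optional sanity check (my addition, not strictly necessary): this should now return nothing
SELECT * FROM logs WHERE serial IN ('ZR51TPNF', 'ZR52AGQE');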

Then I rebooted the NAS, and now it's rebuilding my storage pool onto the new disk :)