There were two disks in the pool: /dev/sde and /dev/sdf.

The disk /dev/sde failed, so I removed it from the pool and replaced it with a new disk. The pool became degraded.

After adding the new /dev/sde disk to the pool, I got the following configuration:
zpool status
  pool: rpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
	invalid.  Sufficient replicas exist for the pool to continue
	functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 0h25m with 0 errors on Wed Sep  2 18:32:39 2020
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       DEGRADED     0     0     0
	  mirror-0  DEGRADED     0     0     0
	    old     UNAVAIL      0     0     0
	    sdf2    ONLINE       0     0     0
	    sde2    ONLINE       0     0     0
Replacing, offlining, or detaching "old" always produces an error:
zpool replace rpool old
cannot open 'old': no such device in /dev
must be a full path or shorthand device name
In the file /etc/zfs/zpool.cache I see the device /dev/sde2/old.

How do I remove the old disk without restarting the server and without destroying the pool (the pool is mounted as /)?
root@v05:/# zpool replace rpool old sde2
invalid vdev specification
use '-f' to override the following errors:
/dev/sde2 is part of active pool 'rpool'
root@v05:/# zpool replace -f rpool old sde2
invalid vdev specification
the following errors must be manually repaired:
/dev/sde2 is part of active pool 'rpool'
sde is already an rpool member.
I need to remove the old disk (/dev/sde/old) from the pool.
There was no error when adding the new disk; I did everything exactly as you wrote.
Best Answer
You probably made an error when adding the new disk: you issued

zpool add rpool <newdisk>

but you had to replace the failed disk. In other words, you had to use either:

zpool replace rpool <olddisk> <newdisk>

or:

zpool detach rpool <olddisk>; zpool attach rpool sdf <newdisk>

(sdf being the other mirror leg). Notice how I wrote attach, while you probably used add in your zpool command.

How can you fix the issue? With ZFS 0.7.x you are out of luck, as no data vdev can be removed after being added. With ZFS 0.8.x you can remove it, so if you are running ZFS 0.7.x, your first step is to update to 0.8.x. Then you can issue the command above to replace the failed disk.
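Assuming the new disk really was added with zpool add as a separate top-level vdev, a hedged sketch of the recovery on ZFS 0.8+ might look like the following. The device names (sde2, sdf2, "old") are taken from the question; verify your actual layout with zpool status before running anything, since these commands rewrite pool topology.

```shell
# Sketch only -- device names come from the question; confirm your own
# layout with `zpool status` first.

# 1. Remove the mistakenly added top-level data vdev (requires ZFS >= 0.8).
zpool remove rpool sde2

# 2. Wait for the evacuation to finish before continuing.
zpool status rpool          # check the 'remove:' line for completion

# 3. Detach the stale UNAVAIL label left behind by the failed disk.
zpool detach rpool old

# 4. Attach the new disk as a mirror of the surviving leg.
zpool attach rpool sdf2 /dev/sde2

# 5. Watch the resilver until the pool returns to ONLINE.
zpool status rpool
```

Note that step 4 uses attach (which turns the surviving device into a mirror with the new disk), not add; this is exactly the distinction the answer draws.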