I've rebooted one of the Ceph hosts. The cluster started, but the OSD on the rebooted host is down. The OSD's ID is 2, so when I try:
sudo /etc/init.d/ceph start osd.2
it shows:
Starting ceph (via systemctl): ceph.service.2
, but:
ceph osd tree
shows that it's down.
When I try:
sudo start ceph-osd id=2
, it errors out:
start: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused
How can I start that osd?
Best Answer
After about two days of trying to resolve this issue and banging my head against the wall, another person's question about a similar issue on Ceph's IRC channel led me to a solution:
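(The command itself appears to have been lost from this answer. Given the `systemctl` output in the question and the `#` placeholder below, it was presumably the systemd template unit for OSD daemons, a reconstruction from context:)

```shell
# Start the OSD daemon via its systemd template unit.
# "#" is a placeholder for the OSD's ID (reconstructed from context).
sudo systemctl start ceph-osd@#
```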
where # is the ID of the OSD on the host that was rebooted, so I've used:
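(The concrete command is also missing here; for OSD 2 it would presumably be:)

```shell
# Start OSD 2 on the rebooted host (reconstructed from context).
sudo systemctl start ceph-osd@2
```

Afterwards, `ceph osd tree` should show osd.2 as up.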
Remember to log into the node that was down and run this command there.