Ceph OSD always ‘down’ in Ubuntu 14.04.1

ceph, ubuntu-14.04

I am trying to install and deploy a Ceph cluster. As I don't have enough physical servers, I created 4 VMs on my OpenStack using the official Ubuntu 14.04 image. I want to deploy a cluster with 1 monitor node and 3 OSD nodes running ceph version 0.80.7-0ubuntu0.14.04.1. I followed the steps from the manual deployment documentation and successfully installed the monitor node. However, after installing the OSD nodes, the OSD daemons appear to be running but do not report correctly to the monitor node. The OSDs always show as down when I run ceph --cluster cephcluster1 osd tree.

Following are the commands and corresponding results that may be related to my problem.

root@monitor:/home/ubuntu# ceph --cluster cephcluster1 osd tree
# id    weight  type name       up/down reweight
-1      3       root default
-2      1               host osd1
0       1                       osd.0   down    1
-3      1               host osd2
1       1                       osd.1   down    1
-4      1               host osd3
2       1                       osd.2   down    1

root@monitor:/home/ubuntu# ceph --cluster cephcluster1 -s
    cluster fd78cbf8-8c64-4b12-9cfa-0e75bc6c8d98
     health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean; 3/3 in osds are down
     monmap e1: 1 mons at {monitor=172.26.111.4:6789/0}, election epoch 1, quorum 0 monitor
     osdmap e21: 3 osds: 0 up, 3 in
      pgmap v22: 192 pgs, 3 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                 192 creating

The configuration file /etc/ceph/cephcluster1.conf on all nodes:

[global]
fsid = fd78cbf8-8c64-4b12-9cfa-0e75bc6c8d98
mon initial members = monitor
mon host = 172.26.111.4
public network = 10.5.0.0/16
cluster network = 172.26.111.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd journal size = 1024
filestore xattr use omap = true
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1

[osd]
osd journal size = 1024

[osd.0]
osd host = osd1

[osd.1]
osd host = osd2

[osd.2]
osd host = osd3

Logs when I start one of the OSD daemons with start ceph-osd cluster=cephcluster1 id=x, where x is the OSD ID:

/var/log/ceph/cephcluster1-osd.0.log on the OSD node #1:

2015-02-11 09:59:56.626899 7f5409d74800  0 ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3), process ceph-osd, pid 11230
2015-02-11 09:59:56.646218 7f5409d74800  0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: FIEMAP ioctl is supported and appears to work
2015-02-11 09:59:56.646372 7f5409d74800  0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2015-02-11 09:59:56.658227 7f5409d74800  0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2015-02-11 09:59:56.679515 7f5409d74800  0 filestore(/var/lib/ceph/osd/cephcluster1-0) limited size xattrs
2015-02-11 09:59:56.699721 7f5409d74800  0 filestore(/var/lib/ceph/osd/cephcluster1-0) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
2015-02-11 09:59:56.700107 7f5409d74800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-02-11 09:59:56.700454 7f5409d74800  1 journal _open /var/lib/ceph/osd/cephcluster1-0/journal fd 20: 1073741824 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-02-11 09:59:56.704025 7f5409d74800  1 journal _open /var/lib/ceph/osd/cephcluster1-0/journal fd 20: 1073741824 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-02-11 09:59:56.704884 7f5409d74800  1 journal close /var/lib/ceph/osd/cephcluster1-0/journal
2015-02-11 09:59:56.725281 7f5409d74800  0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: FIEMAP ioctl is supported and appears to work
2015-02-11 09:59:56.725397 7f5409d74800  0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
2015-02-11 09:59:56.736445 7f5409d74800  0 genericfilestorebackend(/var/lib/ceph/osd/cephcluster1-0) detect_features: syncfs(2) syscall fully supported (by glibc and kernel)
2015-02-11 09:59:56.756912 7f5409d74800  0 filestore(/var/lib/ceph/osd/cephcluster1-0) limited size xattrs
2015-02-11 09:59:56.776471 7f5409d74800  0 filestore(/var/lib/ceph/osd/cephcluster1-0) mount: WRITEAHEAD journal mode explicitly enabled in conf
2015-02-11 09:59:56.776748 7f5409d74800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2015-02-11 09:59:56.776848 7f5409d74800  1 journal _open /var/lib/ceph/osd/cephcluster1-0/journal fd 21: 1073741824 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-02-11 09:59:56.777069 7f5409d74800  1 journal _open /var/lib/ceph/osd/cephcluster1-0/journal fd 21: 1073741824 bytes, block size 4096 bytes, directio = 1, aio = 0
2015-02-11 09:59:56.783019 7f5409d74800  0 <cls> cls/hello/cls_hello.cc:271: loading cls_hello
2015-02-11 09:59:56.783584 7f5409d74800  0 osd.0 11 crush map has features 1107558400, adjusting msgr requires for clients
2015-02-11 09:59:56.783645 7f5409d74800  0 osd.0 11 crush map has features 1107558400 was 8705, adjusting msgr requires for mons
2015-02-11 09:59:56.783687 7f5409d74800  0 osd.0 11 crush map has features 1107558400, adjusting msgr requires for osds
2015-02-11 09:59:56.783750 7f5409d74800  0 osd.0 11 load_pgs
2015-02-11 09:59:56.783831 7f5409d74800  0 osd.0 11 load_pgs opened 0 pgs
2015-02-11 09:59:56.792167 7f53f9b57700  0 osd.0 11 ignoring osdmap until we have initialized
2015-02-11 09:59:56.792334 7f53f9b57700  0 osd.0 11 ignoring osdmap until we have initialized
2015-02-11 09:59:56.792838 7f5409d74800  0 osd.0 11 done with init, starting boot process

/var/log/ceph/ceph-mon.monitor.log on the monitor node:

2015-02-11 09:59:56.593494 7f24cc41d700  0 mon.monitor@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "args": ["host=osd1", "root=default"], "id": 0, "weight": 0.05} v 0) v1
2015-02-11 09:59:56.593955 7f24cc41d700  0 mon.monitor@0(leader).osd e21 create-or-move crush item name 'osd.0' initial_weight 0.05 at location {host=osd1,root=default}

The OSDs do come up normally when I switch the environment from Ubuntu 14.04 to CentOS 6.6 using the same installation steps, but I would still like to solve this problem since I'm more familiar with Ubuntu than CentOS.

Any suggestion is appreciated. Many thanks!

Best Answer

I experienced the same issue in very much the same environment. I finally tracked the problem down to a mismatched OSD UUID (fsid). What gave it away was the following line in the MON log (not the OSD log!):

... mon.minion-001@0(leader).osd e75 preprocess_boot from osd.0 10.208.66.2:6800/3427 clashes with existing osd: different fsid (ours: 71b33e7f-b464-4ba9-96b3-8c814921fea2 ; theirs: 5401be6f-b4ff-42ef-8531-78ee73772d5b)

I resolved the problem by manually removing the OSD, destroying its file system, and re-creating it from scratch. How the problem came into existence in the first place is something I still have to track down.
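For reference, this is roughly what that removal and re-creation looked like on my side, assuming Firefly-era (0.80.x) tooling, the cluster name and directory layout from your question, and /dev/sdb1 as a placeholder for whatever device backs the OSD; adjust IDs and paths to your setup:

```shell
# Take the broken OSD out of the cluster and stop its daemon
ceph --cluster cephcluster1 osd out 0
stop ceph-osd cluster=cephcluster1 id=0

# Remove it from the CRUSH map, delete its auth key, deregister it
ceph --cluster cephcluster1 osd crush remove osd.0
ceph --cluster cephcluster1 auth del osd.0
ceph --cluster cephcluster1 osd rm 0

# Destroy the old file system and start over
umount /var/lib/ceph/osd/cephcluster1-0
mkfs.xfs -f /dev/sdb1    # placeholder: whatever device backs the OSD
mount /dev/sdb1 /var/lib/ceph/osd/cephcluster1-0

# Re-create the OSD with a fresh, consistent UUID
UUID=$(uuidgen)
ID=$(ceph --cluster cephcluster1 osd create "$UUID")
ceph-osd --cluster cephcluster1 -i "$ID" --mkfs --mkkey --osd-uuid "$UUID"
ceph --cluster cephcluster1 auth add "osd.$ID" \
    osd 'allow *' mon 'allow profile osd' \
    -i "/var/lib/ceph/osd/cephcluster1-$ID/keyring"
start ceph-osd cluster=cephcluster1 id="$ID"
```

The important part is that the UUID passed to --osd-uuid matches the one registered with ceph osd create, which is exactly the consistency that was broken in my case.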

Given that I used Puppet to set up the OSDs, and that whatever made it go wrong is probably particular to my environment, the issue you are experiencing is likely a different one, but it may still be worth checking your MON log. You will have to enable debugging on the MON, though, by putting something like this in ceph.conf:

[mon]
        debug mon = 9

The message in question is logged at level 7, so this gives you some more detail without making everything terribly chatty.
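If you'd rather not edit ceph.conf and restart the monitor, you can also raise the debug level at runtime and then compare the UUIDs yourself. A sketch, using the monitor name, cluster name, and log path from the question (the osd dump line for each OSD ends with its registered UUID, which should match the fsid file on the OSD's data partition):

```shell
# Raise the mon debug level at runtime, no restart needed
ceph --cluster cephcluster1 tell mon.monitor injectargs '--debug-mon 9'

# Restart an OSD, then watch the mon log for the tell-tale message
grep 'clashes with existing osd' /var/log/ceph/ceph-mon.monitor.log

# Compare the UUID the cluster has recorded for osd.0 ...
ceph --cluster cephcluster1 osd dump | grep '^osd.0'

# ... with the one stored on the OSD's data directory
cat /var/lib/ceph/osd/cephcluster1-0/fsid
```

If those two UUIDs differ, you are hitting the same problem I did.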

@LoicDachary: wouldn't it make sense to log this error/warning message at level 0? I would certainly have spotted this issue earlier had it been logged right away.
