Unable to Mount GlusterFS: Transport Endpoint Not Connected

glusterfs

UPDATE: Upgraded to the latest version 5.2 and updated the log below accordingly. However, the issue stays the same.
UPDATE 2: Also updated the client to 5.2; still the same issue.

I have a Gluster cluster set up with 3 nodes.

  • server1, 192.168.100.1
  • server2, 192.168.100.2
  • server3, 192.168.100.3

They are connected via an internal network, 192.168.100.0/24. However, I want to connect a client from outside that network using the public IP of one of the servers, which does not work:

sudo mount -t glusterfs x.x.x.x:/datavol /mnt/gluster/

This produces something like the following in the log:

[2018-12-15 17:57:29.666819] I [fuse-bridge.c:4153:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.26
[2018-12-15 18:23:47.892343] I [fuse-bridge.c:4259:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.26
[2018-12-15 18:23:47.892375] I [fuse-bridge.c:4870:fuse_graph_sync] 0-fuse: switched to graph 0
[2018-12-15 18:23:47.892475] I [MSGID: 108006] [afr-common.c:5650:afr_local_init] 0-datavol-replicate-0: no subvolumes up
[2018-12-15 18:23:47.892533] E [fuse-bridge.c:4328:fuse_first_lookup] 0-fuse: first lookup on root failed (Transport endpoint is not connected)
[2018-12-15 18:23:47.892651] W [fuse-resolve.c:127:fuse_resolve_gfid_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint is not connected)
[2018-12-15 18:23:47.892668] W [fuse-bridge.c:3250:fuse_statfs_resume] 0-glusterfs-fuse: 2: STATFS (00000000-0000-0000-0000-000000000001) resolution fail
[2018-12-15 18:23:47.892773] W [fuse-bridge.c:889:fuse_attr_cbk] 0-glusterfs-fuse: 3: LOOKUP() / => -1 (Transport endpoint is not connected)
[2018-12-15 18:23:47.894204] W [fuse-bridge.c:889:fuse_attr_cbk] 0-glusterfs-fuse: 4: LOOKUP() / => -1 (Transport endpoint is not connected)
[2018-12-15 18:23:47.894367] W [fuse-bridge.c:889:fuse_attr_cbk] 0-glusterfs-fuse: 5: LOOKUP() / => -1 (Transport endpoint is not connected)
[2018-12-15 18:23:47.916333] I [fuse-bridge.c:5134:fuse_thread_proc] 0-fuse: initating unmount of /mnt/gluster
The message "I [MSGID: 108006] [afr-common.c:5650:afr_local_init] 0-datavol-replicate-0: no subvolumes up" repeated 4 times between [2018-12-15 18:23:47.892475] and [2018-12-15 18:23:47.894347]
[2018-12-15 18:23:47.916555] W [glusterfsd.c:1481:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x7494) [0x7f90f2306494] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xfd) [0x5591a51e87ed] -->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x5591a51e8644] ) 0-: received signum (15), shutting down
[2018-12-15 18:23:47.916573] I [fuse-bridge.c:5897:fini] 0-fuse: Unmounting '/mnt/gluster'.
[2018-12-15 18:23:47.916582] I [fuse-bridge.c:5902:fini] 0-fuse: Closing fuse connection to '/mnt/gluster'.

The messages that stand out are

0-datavol-replicate-0: no subvolumes up

And

0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint is not connected)

Firewall ports (24007-24008, 49152-49156) are open on the public network interface.
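
To double-check that from the client side, you can probe the ports on the public IP; a quick test, assuming nc (netcat-openbsd) is available on the client (x.x.x.x as in the mount command):

nc -zv x.x.x.x 24007    # glusterd management port
nc -zv x.x.x.x 49152    # brick port, as listed in 'gluster volume status' below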

gluster volume heal datavol info

Brick 192.168.100.1:/data/gluster/brick1
Status: Connected
Number of entries: 0

Brick 192.168.100.2:/data/gluster/brick1
Status: Connected
Number of entries: 0

Brick 192.168.100.3:/data/gluster/brick1
Status: Connected
Number of entries: 0

The client volfile (volume graph from the client log):

volume datavol-client-0
    type protocol/client
    option ping-timeout 42
    option remote-host 192.168.100.1
    option remote-subvolume /data/gluster/brick1
    option transport-type socket
    option transport.address-family inet
    option send-gids true
end-volume

volume datavol-client-1
    type protocol/client
    option ping-timeout 42
    option remote-host 192.168.100.2
    option remote-subvolume /data/gluster/brick1
    option transport-type socket
    option transport.address-family inet
    option send-gids true
end-volume

volume datavol-client-2
    type protocol/client
    option ping-timeout 42
    option remote-host 192.168.100.3
    option remote-subvolume /data/gluster/brick1
    option transport-type socket
    option transport.address-family inet
    option send-gids true
end-volume

volume datavol-replicate-0
    type cluster/replicate
    subvolumes datavol-client-0 datavol-client-1 datavol-client-2
end-volume

volume datavol-dht
    type cluster/distribute
    option lock-migration off
    subvolumes datavol-replicate-0
end-volume

volume datavol-write-behind
    type performance/write-behind
    subvolumes datavol-dht
end-volume

volume datavol-read-ahead
    type performance/read-ahead
    subvolumes datavol-write-behind
end-volume

volume datavol-readdir-ahead
    type performance/readdir-ahead
    subvolumes datavol-read-ahead
end-volume

volume datavol-io-cache
    type performance/io-cache
    subvolumes datavol-readdir-ahead
end-volume

volume datavol-quick-read
    type performance/quick-read
    subvolumes datavol-io-cache
end-volume

volume datavol-open-behind
    type performance/open-behind
    subvolumes datavol-quick-read
end-volume

volume datavol-md-cache
    type performance/md-cache
    subvolumes datavol-open-behind
end-volume

volume datavol
    type debug/io-stats
    option log-level INFO
    option latency-measurement off
    option count-fop-hits off
    subvolumes datavol-md-cache
end-volume

volume meta-autoload
    type meta
    subvolumes datavol
end-volume

gluster peer status:

root@server1 /data # gluster peer status 
Number of Peers: 2

Hostname: 192.168.100.2
Uuid: 0cb2383e-906d-4ca6-97ed-291b04b4fd10
State: Peer in Cluster (Connected)

Hostname: 192.168.100.3
Uuid: d2d9e82f-2fb6-4f27-8fd0-08aaa8409fa9
State: Peer in Cluster (Connected)

gluster volume status

Status of volume: datavol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.100.1:/data/gluster/brick1    49152     0          Y       13519
Brick 192.168.100.2:/data/gluster/brick1    49152     0          Y       30943
Brick 192.168.100.3:/data/gluster/brick1    49152     0          Y       24616
Self-heal Daemon on localhost               N/A       N/A        Y       3282 
Self-heal Daemon on 192.168.100.2           N/A       N/A        Y       18987
Self-heal Daemon on 192.168.100.3           N/A       N/A        Y       24638

Task Status of Volume datavol

What am I missing?

Best Answer

I have the same problem.

Have you seen this bug report? https://bugzilla.redhat.com/show_bug.cgi?id=1659824

Using the "IP" seems to be not "good" in GlusterFS, because the client relies on the remote-host address in the volume information from the server. If the server cannot reach enough Gluster nodes, the volume information for other nodes cannot be used. See https://unix.stackexchange.com/questions/213705/glusterfs-how-to-failover-smartly-if-a-mounted-server-is-failed

So the problem is: the mount request reaches node1 and reads the volume information (see the client log under /var/log/glusterfs/), which describes the other nodes in its option remote-host entries. The client then tries to connect to those nodes on their private IPs, and that fails (in my case). I assume your public client cannot reach the private IPs either; that is the problem behind "Transport endpoint is not connected".
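
This is easy to confirm from the client, assuming nc is available: the management port on the public IP answers, while the brick addresses taken from the volfile do not (IPs as in the question):

nc -zv -w 3 x.x.x.x 24007          # public IP: reachable, so the mount starts
nc -zv -w 3 192.168.100.1 49152    # private brick address: times out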

Solution A: using hostnames instead of IPs inside the Gluster cluster would work, because you could then create aliases in /etc/hosts on all machines, resolving to the 192.168 addresses on the Gluster nodes and to the public IPs on your client (a sketch follows below). But that means the cluster must be rebuilt to use DNS names; I didn't try to switch an IP-based Gluster to a DNS-based one (especially not in production).
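
A minimal sketch of what I mean, with hypothetical names gluster1 to gluster3 (the bricks and peers would have to be created with these names instead of IPs):

# /etc/hosts on each Gluster node
192.168.100.1 gluster1
192.168.100.2 gluster2
192.168.100.3 gluster3

# /etc/hosts on the external client (one public IP per server)
<public-ip-server1> gluster1
<public-ip-server2> gluster2
<public-ip-server3> gluster3

# mount on the client
sudo mount -t glusterfs gluster1:/datavol /mnt/gluster/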

Solution B in the RH bugzilla is unclear to me. I don't understand what should go into glusterfs -f $local-volfile $mountpoint - in particular, which option makes the client ignore remote-host, and what exactly is meant by vol-file. There is a response in the second post on SE; I think that is the answer, but I didn't test it yet.
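
If I read it correctly, the idea is (untested, only a sketch): copy the client volfile that glusterd generates on a server (the path and file name below may differ by version), rewrite its option remote-host lines to addresses the client can reach, and pass that local file to the glusterfs binary with -f (--volfile), so the client never fetches the volume information from the server:

# on the client: fetch the generated client volfile from server1
scp root@x.x.x.x:/var/lib/glusterd/vols/datavol/datavol.tcp-fuse.vol /root/datavol.vol

# edit the three 'option remote-host' lines, then mount with the local file
sudo glusterfs -f /root/datavol.vol /mnt/gluster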

So I think this isn't a bug but a documentation gap: the information used when building a volume (the brick host names or IPs) is what clients use to connect to the nodes, rather than the address given in the mount options.
