Ubuntu – GlusterFS: How to add a replica brick to a volume

glusterfs, Ubuntu

I am trying to add two bricks to a Gluster Volume. The two new nodes are in the network, and can be verified with:

root /# gluster peer status

Also the volume:

Status of volume: mainvolume
Gluster process                     Port    Online  Pid
------------------------------------------------------------------------------
Brick Node-1:/storage                   49152   Y   1162
NFS Server on localhost                 2049    Y   4004
Self-heal Daemon on localhost               N/A Y   4011
NFS Server on 104.xxx.xxx.xxx           2049    Y   3024
Self-heal Daemon on 104.xxx.xxx.xxx         N/A Y   3031
Brick 45.xx.xx.xx:/storage-pool         N/A N   N/A
NFS Server on 45.xx.xx.xx               N/A N   N/A

There are no active volume tasks

The last brick was accidentally added and needs to be removed. I've been looking at the Gluster docs as well as someone's GitHub cheat sheet, but I can't seem to add the two nodes. I started off only wanting to add one node, but then I accidentally removed one, so now I have two nodes to add. Below is a sample of what I am trying:

gluster volume add-brick mainvolume replica 2 Node-2:/storage Node-3:/storage
--> volume add-brick: failed: 

Log File:

[2015-09-07 02:57:44.475415] I [input.c:36:cli_batch] 0-: Exiting with: -1
[2015-09-07 03:04:31.229023] I [input.c:36:cli_batch] 0-: Exiting with: -1
[2015-09-07 02:49:54.270231] E [glusterd-brick-ops.c:492:__glusterd_handle_add_brick] 0-management: 
[2015-09-07 02:52:48.909897] E [glusterd-brick-ops.c:454:__glusterd_handle_add_brick] 0-management: Incorrect number of bricks supplied 1 with count 2
[2015-09-07 02:16:46.498829] E [client-handshake.c:1742:client_query_portmap_cbk] 1-mainvolume-client-2: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
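
The `Incorrect number of bricks supplied 1 with count 2` line is the most telling one: when the replica count of a volume changes, Gluster expects (new replica minus old replica) times (number of distribute subvolumes) bricks on the command line. A rough sketch of that arithmetic for this volume (assuming one distribute subvolume and a move from replica 1 to replica 3):

```shell
# Sketch of the brick-count rule behind the "Incorrect number of bricks
# supplied" error. The values below describe this volume: one distribute
# subvolume, currently replica 1, target replica 3.
old_replica=1   # replica count before add-brick
new_replica=3   # replica count requested on the command line
subvols=1       # number of distribute subvolumes in the volume
needed=$(( (new_replica - old_replica) * subvols ))
echo "bricks to supply: $needed"
```

Supplying any other number of bricks with the new replica count makes glusterd reject the command, which matches the error above.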

I am at a loss for what to do; my next step is going to be to recreate the network if I can't figure it out.

Best Answer

You can remove the brick 45.xx.xx.xx:/storage-pool if it is not needed. When removing it, give the replica count the volume will have afterwards:

gluster volume remove-brick mainvolume replica 1 45.xx.xx.xx:/storage-pool force

Then make sure no stale Gluster extended attributes are left on Node-2 and Node-3, by doing

setfattr -x trusted.glusterfs.volume-id /brick-path
setfattr -x trusted.gfid /brick-path

rm -rf /brick-path/.glusterfs

i.e., with /storage as the brick path:

setfattr -x trusted.glusterfs.volume-id /storage
setfattr -x trusted.gfid /storage

rm -rf /storage/.glusterfs

on both new nodes.
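
The effect of the cleanup can be sanity-checked on a scratch directory first. The demo below uses a `user.*` attribute as a stand-in, since the real `trusted.*` attributes require root and the actual brick path; `setfattr` comes from the `attr` package on Ubuntu:

```shell
# Dry run of the cleanup on a throwaway directory. user.demo stands in for
# trusted.glusterfs.volume-id, and a fake .glusterfs dir stands in for the
# brick's metadata directory.
demo=$(mktemp -d)
mkdir -p "$demo/.glusterfs"
setfattr -n user.demo -v test "$demo" 2>/dev/null || true  # set a dummy attr
setfattr -x user.demo "$demo" 2>/dev/null || true          # remove it again
rm -rf "$demo/.glusterfs"                                  # drop metadata dir
ls -a "$demo"   # .glusterfs should no longer appear
```

On the real nodes the same pattern applies, just with the trusted.* names and /storage as shown above.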

Then retry the add-brick with the correct replica count, in this case 3, since there will be three bricks including the existing one:

gluster volume add-brick mainvolume replica 3 Node-2:/storage Node-3:/storage force
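
Once the add-brick succeeds, the new layout can be confirmed with `gluster volume info mainvolume`, whose `Number of Bricks` line should read `1 x 3 = 3`. A minimal check of that line (the sample output is hard-coded here for illustration; on the live cluster, substitute the real command output):

```shell
# Parse the replica count out of a "Number of Bricks" line as printed by
# `gluster volume info`. The line below is a hard-coded sample, not live
# output from this cluster.
info='Number of Bricks: 1 x 3 = 3'
replica=$(echo "$info" | sed -n 's/.*x \([0-9][0-9]*\) =.*/\1/p')
echo "replica count: $replica"
```

If the replica count reads 3, the two new bricks were accepted and self-heal will start copying data onto them.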