Discussion:
[Gluster-users] peer detach fails
Jim Kinney
2018-05-30 17:25:03 UTC
All,
I added a third peer for an arbiter brick host to a replica 2 cluster.
Then I realized I can't use it since it has no infiniband like the
other two hosts (infiniband and ethernet for clients). So I removed the
new arbiter bricks from all of the volumes. However, I can't detach the
peer as it keeps saying there are bricks it hosts. Nothing in volume
status or info shows that host to be involved.
gluster peer detach innuendo force
peer detach: failed: Brick(s) with the peer innuendo exist in cluster
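One way to hunt for the stale reference, assuming the default /var/lib/glusterd state directory, is to grep glusterd's on-disk volume definitions and cross-check the CLI:

# list volume definitions that still mention the peer
grep -rl innuendo /var/lib/glusterd/vols/
# cross-check what the CLI reports
gluster volume info | grep innuendo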

The Self-heal daemon is still running on innuendo for each brick.
Should I re-add the arbiter brick and wait for the arbiter heal process to complete? How do I take the arbiter brick out without breaking things? It was added using:
for fac in <list of volumes>; do gluster volume add-brick ${fac}2 replica 3 arbiter 1 innuendo:/data/glusterfs/${fac}2/brick; done
And then removed using:
for fac in <list of volumes>; do gluster volume remove-brick ${fac}2 replica 2 innuendo:/data/glusterfs/${fac}2/brick force; done
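After the removals, a loop like this (a sketch, reusing the same <list of volumes> placeholder) can confirm whether any volume still lists a brick on innuendo:

# any output here means a volume still references the peer
for fac in <list of volumes>; do gluster volume info ${fac}2 | grep innuendo; done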

Adding a new 3rd full brick host soon to avoid split-brain, and trying to get this cleaned up before the new hardware arrives and I start the sync.
--
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you
gain at one end you lose at the other. It's like feeding a dog on his
own tail. It won't fatten the dog.
- Speech 11/23/1900 Mark Twain

http://heretothereideas.blogspot.com/
Atin Mukherjee
2018-05-31 09:11:03 UTC
Post by Jim Kinney
All,
I added a third peer for an arbiter brick host to a replica 2 cluster. Then I
realized I can't use it since it has no infiniband like the other two hosts
(infiniband and ethernet for clients). So I removed the new arbiter bricks
from all of the volumes. However, I can't detach the peer as it keeps
saying there are bricks it hosts. Nothing in volume status or info shows
that host to be involved.
gluster peer detach innuendo force
peer detach: failed: Brick(s) with the peer innuendo exist in cluster
How did you remove the arbiter bricks from the volumes? If all the brick
removals were successful, then the 3rd host shouldn't be hosting any
bricks. Could you provide the output of gluster volume info from all the
nodes?
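A minimal way to collect that in one pass might look like this (the node names below are placeholders, not the actual hosts):

# run gluster volume info on each node and label the output
for h in node1 node2 innuendo; do echo "== $h =="; ssh $h gluster volume info; done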
Post by Jim Kinney
The Self-heal daemon is still running on innuendo for each brick.
for fac in <list of volumes>; do gluster volume add-brick ${fac}2 replica 3 arbiter 1 innuendo:/data/glusterfs/${fac}2/brick; done
for fac in <list of volumes>; do gluster volume remove-brick ${fac}2 replica 2 innuendo:/data/glusterfs/${fac}2/brick force; done
Adding a new 3rd full brick host soon to avoid split-brain and trying to get this cleaned up before the new hardware arrives and I start the sync.
_______________________________________________
Gluster-users mailing list
http://lists.gluster.org/mailman/listinfo/gluster-users
Jim Kinney
2018-06-04 15:36:44 UTC
AH HA! Found the errant 3rd-node reference. While testing corosync for NFS, a
lock volume had been created, and that volume was still using the peer.
Dropped that volume and the peer detached as expected.
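For anyone hitting the same error, the cleanup boils down to this (the lock volume name is illustrative):

gluster volume stop <lock-volume>    # prompts for confirmation
gluster volume delete <lock-volume>  # removes the volume definition
gluster peer detach innuendo         # succeeds once no volume uses the peer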
Post by Atin Mukherjee
Post by Jim Kinney
All,
I added a third peer for an arbiter brick host to a replica 2 cluster.
Then I realized I can't use it since it has no infiniband like the
other two hosts (infiniband and ethernet for clients). So I removed
the new arbiter bricks from all of the volumes. However, I can't
detach the peer as it keeps saying there are bricks it hosts.
Nothing in volume status or info shows that host to be involved.
gluster peer detach innuendo force
peer detach: failed: Brick(s) with the peer innuendo exist in cluster
How did you remove the arbiter bricks from the volumes? If all the
brick removals were successful, then the 3rd host shouldn't be
hosting any bricks. Could you provide the output of gluster volume
info from all the nodes?
Post by Jim Kinney
The Self-heal daemon is still running on innuendo for each brick.
Should I re-add the arbiter brick and wait for the arbiter heal
process to complete? How do I take the arbiter brick out without
breaking things? It was added using:
for fac in <list of volumes>; do gluster volume add-brick ${fac}2 replica 3 arbiter 1 innuendo:/data/glusterfs/${fac}2/brick; done
And then removed using:
for fac in <list of volumes>; do gluster volume remove-brick ${fac}2 replica 2 innuendo:/data/glusterfs/${fac}2/brick force; done
Adding a new 3rd full brick host soon to avoid split-brain, and
trying to get this cleaned up before the new hardware arrives and I
start the sync.
--
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you
gain at one end you lose at the other. It's like feeding a dog on his
own tail. It won't fatten the dog.
- Speech 11/23/1900 Mark Twain

http://heretothereideas.blogspot.com/