Discussion:
Remove and re-add bricks/peers
Tom Cannaerts - INTRACTO
2017-07-17 09:55:03 UTC
We had some issues with a volume. The volume is a 3-replica volume with 3
gluster 3.5.7 peers. We are now in a situation where only 1 of the 3 nodes
is operational. If we restart gluster on one of the other nodes, the
entire volume becomes unresponsive.

After a lot of trial and error, we have come to the conclusion that we do
not want to try to rejoin the other 2 nodes in their current form. We would
like to completely remove them from the config of the running node,
entirely reset the config on the nodes themselves, and then re-add them as
if they were new nodes, having them completely sync the volume from the
working node.

What would be the correct procedure for this? I assume I can use "gluster
volume remove-brick" to force-remove the failed bricks from the volume and
decrease the replica count, and then use "gluster peer detach" to
force-remove the peers from the config, all on the currently still working
node. But what do I need to do to completely clear the config and data of
the failed peers? The gluster processes are currently not running on these
nodes, but config + data are still present. So basically, I need to be able
to clean them out before restarting them, so that they start in a clean
state and do not try to connect to or interfere with the currently still
working node.
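
For clarity, something along these lines is what I assume would run on the
still working node (the volume name, hostnames and brick paths below are
only placeholders for our actual setup):

  # drop the two failed bricks and shrink the volume to replica 1
  gluster volume remove-brick myvol replica 1 \
      node2:/data/brick/myvol node3:/data/brick/myvol force

  # then remove the failed peers from the trusted pool
  gluster peer detach node2 force
  gluster peer detach node3 force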

Thanks,

Tom
--
Kind regards,
Tom Cannaerts

Service and Maintenance
Intracto - digital agency

Zavelheide 15 - 2200 Herentals
Tel: +32 14 28 29 29
www.intracto.com


Atin Mukherjee
2017-07-17 14:39:49 UTC
That's the way. However, I'd like to highlight that you're running a very
old gluster release. We are currently at the 3.11 release, which is STM, and
long-term support is on 3.10. You should consider upgrading to at least
3.10.

Tom Cannaerts - INTRACTO
2017-07-18 07:18:36 UTC
We'll definitely look into upgrading this, but it's an older, legacy system,
so we need to see what we can do without breaking it.

Returning to the re-adding question, what steps do I need to take to clear
the config of the failed peers? Do I just wipe the data directory of the
volume, or do I need to clear some other config files/folders as well?

Tom
Atin Mukherjee
2017-07-18 09:01:39 UTC
Wipe off /var/lib/glusterd/*
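
As a rough sketch of the full clean-out and re-add, assuming glusterd is
stopped on the failed nodes and using the same placeholder names as above
(the brick wipe, peer probe, add-brick and heal steps are the usual re-add
procedure, not something specific to this setup, so adjust as needed):

  # on each failed node: clear the glusterd configuration
  rm -rf /var/lib/glusterd/*

  # if the stale brick data should go too, recreate the brick directory
  rm -rf /data/brick/myvol
  mkdir -p /data/brick/myvol

  # start glusterd again (or: systemctl start glusterd)
  service glusterd start

  # then, from the working node: re-probe the peers and grow back to replica 3
  gluster peer probe node2
  gluster peer probe node3
  gluster volume add-brick myvol replica 3 \
      node2:/data/brick/myvol node3:/data/brick/myvol
  gluster volume heal myvol full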

--
- Atin (atinm)