Discussion:
How to replace a dead brick? (3.6.5)
Lindsay Mathieson
2015-10-07 07:06:24 UTC
Permalink
First up - one of the things that concerns me re Gluster is the incoherent
state of the documentation. The only docs linked from the main webpage are for
3.2, and there is almost nothing on how to handle failure modes such as dead
disks/bricks etc., which is one of Gluster's primary functions.

My problem - I have a replica 2 volume, 2 nodes, 2 bricks (zfs datasets).

As a test, I destroyed one brick (zfs destroy the dataset).


Can't start the datastore1:

volume start: datastore1: failed: Failed to find brick directory
/glusterdata/datastore1 for volume datastore1. Reason : No such file or
directory

A bit disturbing, I was hoping it would work off the remaining brick.

Can't replace the brick:

gluster volume replace-brick datastore1
vnb.proxmox.softlog:/glusterdata/datastore1
vnb.proxmox.softlog:/glusterdata/datastore1-2 commit force

because the store is not running.

After a lot of googling I found list messages referencing the remove-brick
command:
gluster volume remove-brick datastore1 replica 2
vnb.proxmox.softlog:/glusterdata/datastore1c commit force

Fails with the unhelpful error:

wrong brick type: commit, use <HOSTNAME>:<export-dir-abs-path>
Usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ...
<start|stop|status|commit|force>

In the end I destroyed and recreated the volume so I could resume testing,
but I have no idea how I would handle a real failed brick in the future.
--
Lindsay
sreejith kb
2015-10-07 11:28:55 UTC
Permalink
Hi,

When removing a failed brick from an existing cluster volume, provide the
correct replica count: 'n-1' when removing one brick from a volume that has
'n' bricks in total.

Here you are trying to remove one brick from a volume that contains two
bricks, so do it like this:

gluster volume remove-brick datastore1 replica 1 vnb.proxmox.softlog:/glusterdata/datastore1c force

Follow the same strategy when adding a brick to an existing cluster
volume: provide the replica count as 'n+1'.
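For example (the second brick path here is hypothetical), growing a
one-brick volume back to two replicas would look something like:

gluster volume add-brick datastore1 replica 2 vnb.proxmox.softlog:/glusterdata/datastore1-2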

If you are using a cloned VM that already has the Gluster packages
installed and carries stale volume/peer/brick information, reset those
values (including the extended attributes on the brick directory) before
adding that new node/brick to your existing cluster.
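A rough sketch of that cleanup (the brick path is the one from the earlier
example; adjust to your layout, and note the last step wipes the node's
local Gluster configuration):

service glusterd stop    # the service may be named glusterfs-server on Debian-based systems
# drop the volume association and gfid markers left on the old brick directory
setfattr -x trusted.glusterfs.volume-id /glusterdata/datastore1
setfattr -x trusted.gfid /glusterdata/datastore1
rm -rf /glusterdata/datastore1/.glusterfs
# clear the cloned peer/volume state so the node comes up with a fresh identity
rm -rf /var/lib/glusterd/*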

If you are replacing a failed node with a new one that has the same IP,
then after probing the peer you have to set the volume attributes on it and
restart the glusterd service; everything should then be fine. If you have
any more doubts, feel free to contact me.
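An outline of that flow, using the hostname and volume from the earlier
example ("set the volume attributes" presumably refers to the
trusted.glusterfs.volume-id extended attribute discussed later in the thread):

gluster peer probe vnb.proxmox.softlog   # run from a healthy node
# ... stamp the new brick directory with the volume id (see the workaround later in the thread) ...
service glusterd restart                 # or glusterfs-server, depending on the distribution
gluster volume heal datastore1 full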

regards,
sreejith K B,
***@gmail.com
mob:09895315396
Lindsay Mathieson
2015-10-07 20:51:49 UTC
Permalink
Post by sreejith kb
gluster volume remove-brick datastore1 replica 1 vnb.proxmox.softlog:/glusterdata/datastore1c force
Sorry, but I did try it with replica 1 as well and got the same error.

I'll try and reproduce it later and report the exact results.
--
Lindsay
Lindsay Mathieson
2015-10-08 05:54:05 UTC
Permalink
Post by sreejith kb
gluster volume remove-brick datastore1 replica 1 vnb.proxmox.softlog:/glusterdata/datastore1c force
I think my problem was that I was using "commit force" instead of just
"force"; I have it working now. Brain fart on my part, sorry for the
distraction.
--
Lindsay
Joe Julian
2015-10-07 21:19:12 UTC
Permalink
Post by Lindsay Mathieson
First up - one of the things that concerns me re Gluster is the
incoherent state of the documentation. The only docs linked from the main
webpage are for 3.2, and there is almost nothing on how to handle
failure modes such as dead disks/bricks etc., which is one of Gluster's
primary functions.
Every link under Documentation at http://gluster.org points to the
gluster.readthedocs.org pages that are all current. Where is this "main
webpage" in which you found links to the old wiki pages?
Post by Lindsay Mathieson
My problem - I have a replica 2 volume, 2 nodes, 2 bricks (zfs datasets).
As a test, I destroyed one brick (zfs destroy the dataset).
volume start: datastore1: failed: Failed to find brick directory
/glusterdata/datastore1 for volume datastore1. Reason : No such file
or directory
A bit disturbing, I was hoping it would work off the remaining brick.
It *is* still working off the remaining brick. It won't start the
missing brick because the missing brick is missing. This is by design.
If, for whatever reason, your brick did not mount, you don't want
gluster to start filling your root device with replication from the
other brick.
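A quick way to confirm that from the surviving node (assuming the volume is
started there) is:

gluster volume status datastore1

The brick on the healthy node should still show Online "Y" while the
destroyed one does not come up.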

I documented this on my blog at
https://joejulian.name/blog/replacing-a-brick-on-glusterfs-340/ which is
still accurate for the latest version.
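For readers who do not want to leave the thread, the workaround described
there amounts to roughly the following, shown against Lindsay's volume and
paths (a sketch; double-check the blog post before relying on it):

# on the healthy node, read the volume id from the surviving brick
getfattr -n trusted.glusterfs.volume-id -e hex /glusterdata/datastore1
# on the failed node, recreate the brick directory and stamp it with that id
mkdir -p /glusterdata/datastore1
setfattr -n trusted.glusterfs.volume-id -v 0x<hex-id-from-the-good-brick> /glusterdata/datastore1
# bring the brick process back up and queue a full self-heal
gluster volume start datastore1 force
gluster volume heal datastore1 full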

The bug report I filed for this was closed without resolution. I assume
there's no plans for ever making this easy for administrators.
https://bugzilla.redhat.com/show_bug.cgi?id=991084
Lindsay Mathieson
2015-10-07 23:24:12 UTC
Permalink
Post by Joe Julian
Post by Lindsay Mathieson
First up - one of the things that concerns me re Gluster is the incoherent
state of the documentation. The only docs linked from the main webpage are for
3.2, and there is almost nothing on how to handle failure modes such as dead
disks/bricks etc., which is one of Gluster's primary functions.
Every link under Documentation at http://gluster.org points to the
gluster.readthedocs.org pages that are all current. Where is this "main
webpage" in which you found links to the old wiki pages?
The Community Page:

http://www.gluster.org/community/documentation/index.php

That is what came up at the top when I searched for gluster documentation.
It might be an idea to redirect to the main docs from that page.
Post by Joe Julian
Post by Lindsay Mathieson
My problem - I have a replica 2 volume, 2 nodes, 2 bricks (zfs datasets).
As a test, I destroyed one brick (zfs destroy the dataset).
volume start: datastore1: failed: Failed to find brick directory
/glusterdata/datastore1 for volume datastore1. Reason : No such file or
directory
A bit disturbing, I was hoping it would work off the remaining brick.
It *is* still working off the remaining brick. It won't start the missing
brick because the missing brick is missing. This is by design. If, for
whatever reason, your brick did not mount, you don't want gluster to start
filling your root device with replication from the other brick.
It wouldn't start the *Datastore*, so all bricks were unavailable. I did
stop the datastore myself in the first place, but I would have expected to
be able to restart it.



thanks,
--
Lindsay
Lindsay Mathieson
2015-10-08 05:56:47 UTC
Permalink
Post by Joe Julian
I documented this on my blog at
https://joejulian.name/blog/replacing-a-brick-on-glusterfs-340/ which is
still accurate for the latest version.
The bug report I filed for this was closed without resolution. I assume
there's no plans for ever making this easy for administrators.
https://bugzilla.redhat.com/show_bug.cgi?id=991084
Yes, it's the sort of workaround one can never remember in an emergency;
you'd have to google it ...

In the case I was working with, it was probably easier and quicker to do a
remove-brick/add-brick.

thanks,
--
Lindsay
Pranith Kumar Karampuri
2015-10-08 18:46:28 UTC
Permalink
On 3.7.4, all you need to do is execute "gluster volume replace-brick
<volname> commit force" and the rest will be taken care of by AFR. We are in
the process of coming up with new commands like "gluster volume
reset-brick <volname> start/commit" for wiping/re-formatting the
disk. So wait just a little longer :-).
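Spelled out against Lindsay's volume, using the new brick path from the
earlier post (the general syntax is "volume replace-brick <VOLNAME>
<SOURCE-BRICK> <NEW-BRICK> commit force"), that would be something like:

gluster volume replace-brick datastore1 vnb.proxmox.softlog:/glusterdata/datastore1 vnb.proxmox.softlog:/glusterdata/datastore1-2 commit force

Note that, as Joe demonstrates further down, the new brick has to be a path
that is not already part of an existing brick.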

Pranith
Gene Liverman
2015-10-08 19:20:25 UTC
Permalink
So... this kinda applies to me too and I want to get some clarification: I
have the following setup

# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: fc50d049-cebe-4a3f-82a6-748847226099
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: eapps-gluster01:/export/sdb1/gv0
Brick2: eapps-gluster02:/export/sdb1/gv0
Brick3: eapps-gluster03:/export/sdb1/gv0
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
nfs.drc: off

eapps-gluster03 had a hard drive failure, so I replaced it, formatted the
drive, and now need Gluster to be happy again. Gluster put a .glusterfs
folder in /export/sdb1/gv0 but nothing else has shown up and the brick is
offline. I read the docs on replacing a brick but seem to be missing
something and would appreciate some help. Thanks!





--
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
***@westga.edu

ITS: Making Technology Work for You!
Lindsay Mathieson
2015-10-08 21:32:15 UTC
Permalink
Post by Gene Liverman
eapps-gluster03 had a hard drive failure so I replaced it, formatted the
drive and now need gluster to be happy again. Gluster put a .glusterfs
folder in /export/sdb1/gv0 but nothing else has shown up and the brick is
offline. I read the docs on replacing a brick but seem to be missing
something and would appreciate some help. Thanks!
In my testing here, remove-brick/add-brick did the trick.

volume remove-brick gv0 replica 2 eapps-gluster03:/export/sdb1/gv0 force
volume add-brick gv0 replica 3 eapps-gluster03:/export/sdb1/gv0 [force]
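If the new brick doesn't start filling up on its own after the add-brick,
kicking off and then watching a full self-heal is a reasonable next step
(standard commands, shown here for Gene's volume):

gluster volume heal gv0 full
gluster volume heal gv0 info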
--
Lindsay
Pranith Kumar Karampuri
2015-10-09 05:13:36 UTC
Permalink
Follow the steps at:
http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-brick

Read the steps in the section
"Replacing brick in Replicate/Distributed Replicate volumes".

We are working on making all the extra steps vanish so that a single command
will take care of everything going forward. I will update gluster-users
once that happens.

Pranith
Joe Julian
2015-10-08 20:02:07 UTC
Permalink
Post by Pranith Kumar Karampuri
On 3.7.4, all you need to do is execute "gluster volume replace-brick
<volname> commit force" and rest will be taken care by afr. We are in
the process of coming up with new commands like "gluster volume
reset-brick <volname> start/commit" for wiping/re-formatting of the
disk. So wait just a little longer :-).
Pranith
Nope.

Volume Name: test
Type: Replicate
Volume ID: 426a1719-7cc2-4dac-97b4-67491679e00e
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: questor:/tmp/foo1.1
Brick2: questor:/tmp/foo1.2


Status of volume: test
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick questor:/tmp/foo1.1                   49162     0          Y       20825
Brick questor:/tmp/foo1.2                   49163     0          Y       20859
NFS Server on localhost                     N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       20887



[***@questor]# kill 20825
[***@questor]# rm -rf /tmp/foo1.1
[***@questor]# mkdir /tmp/foo1.1
[***@questor]# gluster volume replace-brick test commit force
Usage: volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}
[***@questor]# gluster volume replace-brick test questor:/tmp/foo1.1 questor:/tmp/foo1.1 commit force
volume replace-brick: failed: Brick: questor:/tmp/foo1.1 not available. Brick may be containing or be contained by an existing brick
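(For what it's worth, pointing the replacement at a fresh directory instead
of reusing the dead brick's own path should get past that particular check,
e.g. with a hypothetical new path:

gluster volume replace-brick test questor:/tmp/foo1.1 questor:/tmp/foo1.3 commit force

The reset-brick command Pranith mentions further down is what is meant to
cover the reuse-the-same-path case.)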
Lindsay Mathieson
2015-10-08 21:25:55 UTC
Permalink
Did you try erasing the brick first?

Pranith Kumar Karampuri
2015-10-09 05:15:22 UTC
Permalink
Post by Joe Julian
[***@questor]# gluster volume replace-brick test questor:/tmp/foo1.1 questor:/tmp/foo1.1 commit force
volume replace-brick: failed: Brick: questor:/tmp/foo1.1 not available.
Brick may be containing or be contained by an existing brick
This is exactly the case that will be covered with "gluster volume
reset-brick <volname> start/commit"
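(For anyone reading this later: the reset-brick command that eventually
landed, in 3.9 and newer if memory serves, takes roughly the form below,
reusing the same path once the disk has been re-formatted:

gluster volume reset-brick <volname> <hostname>:<brickpath> start
gluster volume reset-brick <volname> <hostname>:<brickpath> <hostname>:<brickpath> commit force)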

Pranith
Lindsay Mathieson
2015-10-08 21:24:13 UTC
Permalink
Very nice!

Any chance of a wheezy repo? ... 😊

--
Lindsay
Humble Devassy Chirammal
2015-10-08 07:10:09 UTC
Permalink
The steps for replacing a brick are documented at
http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Managing%20Volumes/.
Hope it helps.