Discussion:
[Gluster-users] broken gluster config
Diego Remolina
2018-05-10 09:31:27 UTC
https://docs.gluster.org/en/v3/Troubleshooting/resolving-splitbrain/

Hopefully the link above will help you fix it.

Diego
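For reference, the linked guide resolves split-brain from the CLI along these lines (a sketch only; gv0 and the gfid are taken from the heal output quoted below, and which policy or source brick to use is a judgement call for whoever knows the data):

    # pick the copy with the newest mtime as the heal source
    gluster volume heal gv0 split-brain latest-mtime gfid:eafb8799-4e7a-4264-9213-26997c5a4693

    # or pick the larger copy
    gluster volume heal gv0 split-brain bigger-file gfid:eafb8799-4e7a-4264-9213-26997c5a4693

    # or name one brick's copy as the source explicitly (glusterp2 here is only an example)
    gluster volume heal gv0 split-brain source-brick glusterp2:/bricks/brick1/gv0 gfid:eafb8799-4e7a-4264-9213-26997c5a4693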
Trying to read the output below, I can't understand what is wrong:
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp3:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/gv0
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xcfceb3535f0e4cf18b533ccfb1f091d3
Volume vol does not exist
Volume Name: gv0
Type: Replicate
Volume ID: cfceb353-5f0e-4cf1-8b53-3ccfb1f091d3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Brick1: glusterp1:/bricks/brick1/gv0
Brick2: glusterp2:/bricks/brick1/gv0
Brick3: glusterp3:/bricks/brick1/gv0
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
================
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/gv0
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xcfceb3535f0e4cf18b533ccfb1f091d3
================
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick1/gv0
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xcfceb3535f0e4cf18b533ccfb1f091d3
Whatever repair happened has now finished, but I still have this, and I can't find anything so far telling me how to fix it. Looking at
http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/heal-info-and-split-brain-resolution/
I can't determine which file (or is it the directory gv0?) is actually the issue (see the gfid lookup sketch after the output below).
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of entries in split-brain: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of entries in split-brain: 1
Brick glusterp3:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of entries in split-brain: 1
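For what it's worth, one common way to map a gfid like the one above back to a real path is to look it up under the brick's .glusterfs directory on any node that still has the entry (a sketch; it assumes the entry is a regular file, since a directory's .glusterfs entry is a symlink and readlink -f on it points at the parent instead):

    BRICK=/bricks/brick1/gv0
    GFID=eafb8799-4e7a-4264-9213-26997c5a4693
    # regular files are hard-linked under .glusterfs/<first two>/<next two>/<gfid>,
    # so -samefile finds the human-readable name on that brick
    find "$BRICK" -samefile "$BRICK/.glusterfs/ea/fb/$GFID" -not -path '*/.glusterfs/*'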
I also have this "split-brain" report:
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
/glusterp1/images/kubernetes-template.qcow2
/glusterp1/images/kworker01.qcow2
/glusterp1/images/kworker02.qcow2
Status: Connected
Number of entries: 5
Brick glusterp3:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
/glusterp1/images/kubernetes-template.qcow2
/glusterp1/images/kworker01.qcow2
/glusterp1/images/kworker02.qcow2
Status: Connected
Number of entries: 5
gluster v status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterp1:/bricks/brick1/gv0          49152     0          Y       5229
Brick glusterp2:/bricks/brick1/gv0          49152     0          Y       2054
Brick glusterp3:/bricks/brick1/gv0          49152     0          Y       2110
Self-heal Daemon on localhost               N/A       N/A        Y       5219
Self-heal Daemon on glusterp2               N/A       N/A        Y       1943
Self-heal Daemon on glusterp3               N/A       N/A        Y       2067
Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
total 2877064
-rw-------. 2 root root 107390828544 May 10 12:18 centos-server-001.qcow2
-rw-r--r--. 2 root root            0 May  8 14:32 file1
-rw-r--r--. 2 root root            0 May  9 14:41 file1-1
-rw-------. 2 root root  85912715264 May 10 12:18 kubernetes-template.qcow2
-rw-------. 2 root root            0 May 10 12:08 kworker01.qcow2
-rw-------. 2 root root            0 May 10 12:08 kworker02.qcow2
while, on another node, I have:
total 11209084
-rw-------. 2 root root 107390828544 May  9 14:45 centos-server-001.qcow2
-rw-r--r--. 2 root root            0 May  8 14:32 file1
-rw-r--r--. 2 root root            0 May  9 14:41 file1-1
-rw-------. 2 root root  85912715264 May  9 15:59 kubernetes-template.qcow2
-rw-------. 2 root root   3792371712 May  9 16:15 kworker01.qcow2
-rw-------. 2 root root   3792371712 May 10 11:20 kworker02.qcow2
So some files have re-synced, but not the kworker machines, and network
activity has stopped.
Show us output from: gluster v status
It should be easy to fix. Stop the gluster daemon on that node, mount the
brick, then start the gluster daemon again.
Check: gluster v status
Does it show the brick up?
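On CentOS 7 that sequence would look roughly like this (a sketch; it assumes the brick filesystem has an fstab entry mounted at /bricks/brick1 and that glusterd is managed by systemd):

    systemctl stop glusterd        # stop the gluster daemon on the affected node
    mount /bricks/brick1           # mount the brick filesystem per its fstab entry
    systemctl start glusterd       # start the daemon again so the brick process comes back
    gluster volume status gv0      # the brick should now show Online = Y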
HTH,
Diego
Hi,
I have 3 CentOS 7.4 machines set up as a 3-way replica (like RAID 1).
Due to an oopsie on my part, /bricks/brick1/gv0 on glusterp1 didn't mount
on boot, and as a result it is empty.
Meanwhile I have data on glusterp2:/bricks/brick1/gv0 and
glusterp3:/bricks/brick1/gv0, as expected.
Is there a way to get glusterp1's gv0 to sync off the other two? There
must be, but I have looked at the gluster docs and I can't find anything
about repairing or resyncing.
Where am I meant to look for such info?
thanks
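One way to kick off that resync, sketched on the assumption that the brick is mounted again and the self-heal daemon is running on all three nodes:

    gluster volume heal gv0 full    # crawl the volume and rebuild everything missing on the empty brick
    gluster volume heal gv0 info    # watch the count of pending entries drop as healing progresses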
_______________________________________________
Gluster-users mailing list
http://lists.gluster.org/mailman/listinfo/gluster-users