Discussion:
[Gluster-users] Upgrade to 4.1.2 geo-replication does not work
Krishna Verma
2018-08-28 07:44:18 UTC
Hi

I am getting the below error in gsyncd.log:

OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
[2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] RepceClient: call failed call=26469:139794524604224:1535440781.44 method=init error=OSError
[2018-08-28 07:19:41.447041] E [syncdutils(worker /data/gluster/gv0):330:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
    func(args)
  File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 72, in subcmd_worker
    local.service_loop(remote)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1236, in service_loop
    changelog_agent.init()
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__
    return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__
    raise res
OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
[2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
[2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor] Monitor: worker died in startup phase brick=/data/gluster/gv0

Below are my library file locations:

/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1

What can I do to fix it?

/Krish
Sunny Kumar
2018-08-28 09:24:46 UTC
Hi Krish,

You can run -
#ldconfig /usr/lib

If that still does not solve your problem, you can create the symlink manually:
ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
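For context, the OSError in the log is raised when gsyncd's changelog agent loads the library by its unversioned name (gsyncd is Python, so this goes through ctypes and ends in a dlopen). A rough way to reproduce the failure outside gsyncd, assuming python3 is available (on a node where the unversioned symlink is missing, this one-liner fails exactly like the log):

```shell
# gsyncd loads the changelog library via its unversioned name; without a
# libgfchangelog.so symlink the underlying dlopen fails with this OSError.
python3 -c 'import ctypes; ctypes.CDLL("libgfchangelog.so")' \
    || echo "dlopen failed: cannot open shared object file"
```

If the one-liner succeeds after the fix, gsyncd's changelog agent should be able to load the library as well.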

Thanks,
Sunny Kumar
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
Krishna Verma
2018-08-28 09:31:58 UTC
Hi Sunny,

Thanks for your response. I tried both, but I am still getting the same error.


[***@noi-poc-gluster ~]# ldconfig /usr/lib
[***@noi-poc-gluster ~]#

[***@noi-poc-gluster ~]# ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
[***@noi-poc-gluster ~]# ls -l /usr/lib64/libgfchangelog.so
lrwxrwxrwx. 1 root root 30 Aug 28 14:59 /usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1

/Krishna

-----Original Message-----
From: Sunny Kumar <***@redhat.com>
Sent: Tuesday, August 28, 2018 2:55 PM
To: Krishna Verma <***@cadence.com>
Cc: gluster-***@gluster.org
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


Sunny Kumar
2018-08-28 09:46:47 UTC
With the same log message?

Can you please verify that the patch
https://review.gluster.org/#/c/glusterfs/+/20207/ is present; if not,
please apply it. Then try symlinking:

ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so

Please share the log also.

Regards,
Sunny
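For reference, the layout the symlink fix should end up with is the usual versioned-library chain. Below is a sketch of that chain, simulated in a scratch directory so it runs without root; on the real node the directory is /usr/lib64 and, per Krishna's listing, the installed file is libgfchangelog.so.0.0.1:

```shell
libdir=$(mktemp -d)                        # stand-in for /usr/lib64
touch "$libdir/libgfchangelog.so.0.0.1"    # the actual library file
# soname link, normally maintained by ldconfig:
ln -s libgfchangelog.so.0.0.1 "$libdir/libgfchangelog.so.0"
# unversioned dev name that gsyncd dlopens -- the link that was missing:
ln -s libgfchangelog.so.0 "$libdir/libgfchangelog.so"
ls -l "$libdir"/libgfchangelog.so*
rm -rf "$libdir"
```

The point of the simulation is only to show which name points at which: the unversioned .so must resolve, directly or via the soname link, to the installed versioned file.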
Krishna Verma
2018-08-28 10:15:31 UTC
Hi Sunny,

I applied the changes from the patch and restarted the geo-replication session, but I am getting the same errors in the logs again.

I am attaching the config files and logs here.


[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep delete
Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep create push-pem force
Creating geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
[***@gluster-poc-noida ~]# vim /usr/libexec/glusterfs/python/syncdaemon/repce.py
[***@gluster-poc-noida ~]# systemctl restart glusterd
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
[***@gluster-poc-noida ~]#


/Krishna.

-----Original Message-----
From: Sunny Kumar <***@redhat.com>
Sent: Tuesday, August 28, 2018 3:17 PM
To: Krishna Verma <***@cadence.com>
Cc: gluster-***@gluster.org
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Sunny Kumar
2018-08-28 10:22:40 UTC
Can you do ldconfig /usr/local/lib and share the output of ldconfig -p /usr/local/lib | grep libgf?
Sunny Kumar
2018-08-28 10:26:26 UTC
Post by Krishna Verma
Hi Sunny,
libgfxdr.so.0 (libc6,x86-64) => /lib64/libgfxdr.so.0
libgfrpc.so.0 (libc6,x86-64) => /lib64/libgfrpc.so.0
libgfortran.so.3 (libc6,x86-64) => /lib64/libgfortran.so.3
libgfortran.so.1 (libc6,x86-64) => /lib64/libgfortran.so.1
libgfdb.so.0 (libc6,x86-64) => /lib64/libgfdb.so.0
libgfchangelog.so.0 (libc6,x86-64) => /lib64/libgfchangelog.so.0
This is linked properly, so check geo-rep; it should be working.
Post by Krishna Verma
libgfapi.so.0 (libc6,x86-64) => /lib64/libgfapi.so.0
Krishna Verma
2018-08-28 10:30:10 UTC
No, it goes faulty again after rebuilding the session, and I am still getting the library errors in the logs.

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
[***@gluster-poc-noida ~]#

/krishna

-----Original Message-----
From: Sunny Kumar <***@redhat.com>
Sent: Tuesday, August 28, 2018 3:56 PM
To: Krishna Verma <***@cadence.com>
Cc: gluster-***@gluster.org
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Kotresh Hiremath Ravishankar
2018-08-28 10:30:08 UTC
Hi Krishna,

Since your libraries are in /usr/lib64, you should be doing

#ldconfig /usr/lib64

Confirm that the below command lists the library:

#ldconfig -p | grep libgfchangelog
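One detail worth separating here (an observation, not something stated in the thread): ldconfig -p listing libgfchangelog.so.0 only shows that the versioned soname is in the loader cache, while gsyncd asks for the unversioned libgfchangelog.so. Both conditions can be checked read-only, for example:

```shell
# Read-only checks; safe to run on any node without changing anything.
ldconfig -p | grep libgfchangelog || echo "nothing in the loader cache"
ls -l /usr/lib64/libgfchangelog.so 2>/dev/null \
    || echo "unversioned libgfchangelog.so symlink missing"
```

On a healthy node both commands print a result; geo-rep only starts working once the unversioned name resolves.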
Post by Sunny Kumar
can you do ldconfig /usr/local/lib and share the output of ldconfig -p /usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I did the mentioned changes given in patch and restart the session for
geo-replication. But again same errors in the logs.
Post by Krishna Verma
I have attaching the config files and logs here.
gluster-poc-sj::glusterep stop
Post by Krishna Verma
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep delete
Post by Krishna Verma
Deleting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep create push-pem force
Post by Krishna Verma
Creating geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
Post by Krishna Verma
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
gluster-poc-sj::glusterep start
Post by Krishna Verma
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
syncdaemon/repce.py
gluster-poc-sj::glusterep start
Post by Krishna Verma
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
Post by Krishna Verma
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER
SLAVE SLAVE NODE STATUS CRAWL STATUS
LAST_SYNCED
Post by Krishna Verma
------------------------------------------------------------
------------------------------------------------------------
-----------------------------
Post by Krishna Verma
gluster-poc-noida glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
Post by Krishna Verma
noi-poc-gluster glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
Post by Krishna Verma
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Post by Krishna Verma
EXTERNAL MAIL
With same log message ?
Can you please verify that
https://urldefense.proofpoint.com/v2/url?u=https-3A__review.
gluster.org_-23_c_glusterfs_-2B_20207_&d=DwIBaQ&c=
aUq983L2pue2FqKFoP6PGHMJQyoJ7kl3s3GZ-_haXqY&r=0E5nRoxLsT2ZXgCpJM_
6ZItAWQ2jH8rVLG6tiXhoLFE&m=F0ExtFUfa_YCktOGvy82x3IAxvi2GrbPR72jZ8beuYk&s=
fGtkmezHJj5YoLN3dUeVUCcYFnREHyOSk36mRjbTTEQ&e= patch is present if not
can you please apply that.
Post by Krishna Verma
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.
Post by Krishna Verma
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response, I tried both, but I am still getting the same error.

lrwxrwxrwx. 1 root root 30 Aug 28 14:59 /usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1

/Krishna

-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL

Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solve your problem you can do a manual symlink, like -
ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
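The OSError in the traceback comes from gsyncd dlopen()ing the *unversioned* name `libgfchangelog.so`, which the package did not ship. A quick way to reproduce that exact lookup outside gsyncd is a small sketch (not part of any Gluster tooling; it assumes only that the worker loads the library by its unversioned name via the dynamic linker):

```shell
#!/bin/sh
# Sketch only: reproduce gsyncd's library lookup outside gsyncd.
# A system that ships only libgfchangelog.so.0* fails exactly like
# the gsyncd.log above, because the unversioned name cannot resolve.
python3 - <<'EOF'
import ctypes
try:
    ctypes.CDLL("libgfchangelog.so")
    print("OK: libgfchangelog.so resolves")
except OSError as err:
    print("FAIL:", err)  # mirrors the OSError seen in gsyncd.log
EOF
```

On an affected node this prints the same "cannot open shared object file" message; once the symlink exists and ldconfig has been re-run, it prints OK.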
Krishna Verma
2018-08-28 10:34:21 UTC
Permalink
Hi Kotresh,

Thanks for the response, I did that also but nothing changed.

[***@gluster-poc-noida ~]# ldconfig /usr/lib64
[***@gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
        libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
[***@gluster-poc-noida ~]#

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
[***@gluster-poc-noida ~]#

/Krishna
From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Tuesday, August 28, 2018 4:00 PM
To: Sunny Kumar <***@redhat.com>
Cc: Krishna Verma <***@cadence.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL

Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that the below command lists the library:
#ldconfig -p | grep libgfchangelog
On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar <***@redhat.com<mailto:***@redhat.com>> wrote:
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I made the changes given in the patch and restarted the geo-replication session, but the same errors appear again in the logs.
I am attaching the config files and logs here.
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Creating geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
/Krishna.
--
Thanks and Regards,
Kotresh H R
Kotresh Hiremath Ravishankar
2018-08-28 10:52:04 UTC
Permalink
Hi Krishna,

As per the output shared, I don't see the file "libgfchangelog.so", which is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is present in "/usr/lib64/".
If not, create a symlink similar to "libgfchangelog.so.0".

It should be something like below.

#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root    1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root      23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root      23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root  336888 Aug 28 05:56 libgfchangelog.so.0.0.1
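The layout above can be rehearsed safely in a throwaway directory before touching /usr/lib64. This is only a sketch: an empty file stands in for the packaged `libgfchangelog.so.0.0.1`, and the two `ln -s` commands mirror the links Kotresh describes.

```shell
#!/bin/sh
# Sketch: rehearse the expected symlink layout in a scratch directory.
# The empty file stands in for the real packaged libgfchangelog.so.0.0.1.
set -eu
scratch=$(mktemp -d)
cd "$scratch"
touch libgfchangelog.so.0.0.1
ln -s libgfchangelog.so.0.0.1 libgfchangelog.so.0   # versioned (soname) link used at runtime
ln -s libgfchangelog.so.0.0.1 libgfchangelog.so     # unversioned link that gsyncd dlopen()s
ls -l | grep libgfch
```

Both links must resolve to the real file; on the actual node the same two `ln -s` commands are run inside /usr/lib64 instead of the scratch directory.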
--
Thanks and Regards,
Kotresh H R
Krishna Verma
2018-08-28 10:58:12 UTC
Permalink
Hi Kotresh,

I created the links before. Below is the detail.

[***@gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
lrwxrwxrwx 1 root root     30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
lrwxrwxrwx 1 root root     23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root  63384 Jul 24 19:11 libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]# locate libgfchangelog.so
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]#

Does this look like what we need, or do I need to create more links? And how do I get the “libgfchangelog.so” file if it is missing?

/Krishna
From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Tuesday, August 28, 2018 4:22 PM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL

Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so", which is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is present in "/usr/lib64/".
If not, create a symlink similar to "libgfchangelog.so.0".

It should be something like below.

#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root    1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root      23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root      23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root  336888 Aug 28 05:56 libgfchangelog.so.0.0.1
--
Thanks and Regards,
Kotresh H R
Kotresh Hiremath Ravishankar
2018-08-29 05:17:22 UTC
Permalink
Answer inline.

Post by Krishna Verma
Hi Kotresh,
I created the links before. Below is the detail.
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1

The link created points to the wrong library. Please fix it:

#cd /usr/lib64
#rm libgfchangelog.so
#ln -s libgfchangelog.so.0.0.1 libgfchangelog.so

Post by Krishna Verma
lrwxrwxrwx 1 root root     23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root  63384 Jul 24 19:11 libgfchangelog.so.0.0.1
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
Does this look like what we need, or do I need to create more links? And how do I get the “libgfchangelog.so” file if it is missing?
/Krishna
*Sent:* Tuesday, August 28, 2018 4:22 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so" which
is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is
present in "/usr/lib64/".
If not create a symlink similar to "libgfchangelog.so.0"
It should be something like below.
#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
<https://urldefense.proofpoint.com/v2/url?u=http-3A__libgfchangelog.la&d=DwMFaQ&c=aUq983L2pue2FqKFoP6PGHMJQyoJ7kl3s3GZ-_haXqY&r=0E5nRoxLsT2ZXgCpJM_6ZItAWQ2jH8rVLG6tiXhoLFE&m=77GIqpHy9HY8RQd6lKzSJ-Z1PCuIhZJ3I3IvIuDX-xo&s=kIFnrBaSFV_DdqZezd6PXcDnD8Iy_gVN69ETZYtykEE&e=>
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
Hi Kotresh,
Thanks for the response, I did that also but nothing changed.
libgfchangelog.so.0 (libc6,x86-64) =>
/usr/lib64/libgfchangelog.so.0
gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER
SLAVE SLAVE NODE STATUS CRAWL STATUS
LAST_SYNCED
------------------------------------------------------------
------------------------------------------------------------
-----------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty
N/A N/A
/Krishna
*Sent:* Tuesday, August 28, 2018 4:00 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I did the mentioned changes given in patch and restart the session for
geo-replication. But again same errors in the logs.
Post by Krishna Verma
I have attaching the config files and logs here.
gluster-poc-sj::glusterep stop
Post by Krishna Verma
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep delete
Post by Krishna Verma
Deleting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep create push-pem force
Post by Krishna Verma
Creating geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
Post by Krishna Verma
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
gluster-poc-sj::glusterep start
Post by Krishna Verma
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
syncdaemon/repce.py
gluster-poc-sj::glusterep start
Post by Krishna Verma
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
Post by Krishna Verma
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER
SLAVE SLAVE NODE STATUS CRAWL STATUS
LAST_SYNCED
Post by Krishna Verma
------------------------------------------------------------
------------------------------------------------------------
-----------------------------
Post by Krishna Verma
gluster-poc-noida glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
Post by Krishna Verma
noi-poc-gluster glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
Post by Krishna Verma
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Post by Krishna Verma
EXTERNAL MAIL
With same log message ?
Can you please verify that
https://urldefense.proofpoint.com/v2/url?u=https-3A__review.
gluster.org_-23_c_glusterfs_-2B_20207_&d=DwIBaQ&c=
aUq983L2pue2FqKFoP6PGHMJQyoJ7kl3s3GZ-_haXqY&r=0E5nRoxLsT2ZXgCpJM_
6ZItAWQ2jH8rVLG6tiXhoLFE&m=F0ExtFUfa_YCktOGvy82x3IAxvi2GrbPR72jZ8beuYk&s=
fGtkmezHJj5YoLN3dUeVUCcYFnREHyOSk36mRjbTTEQ&e= patch is present if not
can you please apply that.
Post by Krishna Verma
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.
Post by Krishna Verma
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response, I tried both, but still I am getting the
same error.
Post by Krishna Verma
Post by Krishna Verma
/usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
/usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solves your problem you can do manual symlink
like - ln -s /usr/lib64/libgfchangelog.so.1
/usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.446785] E [repce(worker
/data/gluster/gv0):197:__call__] RepceClient: call failed
call=26469:139794524604224:1535440781.44 method=init
error=OSError
Post by Krishna Verma
Post by Krishna Verma
Post by Krishna Verma
[2018-08-28 07:19:41.447041] E [syncdutils(worker
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line
216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line
198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.457555] I [repce(agent
/data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching
EOF.
Post by Krishna Verma
Post by Krishna Verma
Post by Krishna Verma
[2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor]
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What can I do to fix it?
/Krish
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
Krishna Verma
2018-08-29 18:47:00 UTC
Permalink
Hi Kotresh,

Thank you so much for your input. Geo-replication is now showing “Active” for at least one master node, but it is still in a Faulty state on the 2nd master server.

Below is the detail.

[***@gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-29 23:56:06
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A


[***@gluster-poc-noida glusterfs]# gluster volume status
Status of volume: glusterep
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0 49152 0 Y 22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0 Y 19471
Self-heal Daemon on localhost N/A N/A Y 32087
Self-heal Daemon on noi-poc-gluster N/A N/A Y 6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks



[***@gluster-poc-noida glusterfs]# gluster volume info

Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[***@gluster-poc-noida glusterfs]#

Could you please help me in that also please?

It would be really a great help from your side.

/Krishna
From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Wednesday, August 29, 2018 10:47 AM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Answer inline

On Tue, Aug 28, 2018 at 4:28 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

I created the links before. Below is the detail.

[***@gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1

The link created is pointing to wrong library. Please fix this

#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
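For illustration, the same repair can be rehearsed in a scratch directory first (a hedged sketch; the temporary directory stands in for /usr/lib64, where you would run the real commands):

```shell
# Sketch: rehearse the symlink repair in a throwaway directory.
# On the real node you operate on /usr/lib64 instead of "$tmp".
set -e
tmp=$(mktemp -d)
touch "$tmp/libgfchangelog.so.0.0.1"                 # stand-in for the real library file
ln -s libgfchangelog.so.0.0.1 "$tmp/libgfchangelog.so.0"
# Replace any stale dev symlink (e.g. one pointing at a non-existent .so.1)
# with one targeting the versioned file that actually exists:
ln -sfn libgfchangelog.so.0.0.1 "$tmp/libgfchangelog.so"
readlink "$tmp/libgfchangelog.so"                    # prints libgfchangelog.so.0.0.1
rm -rf "$tmp"
```

After doing the same on the real /usr/lib64, re-run `ldconfig /usr/lib64` and restart the geo-rep session.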

lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]# locate libgfchangelog.so
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]#

Does this look like what we need, or do I need to create any more links? How do I get the “libgfchangelog.so” file if it is missing?

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Tuesday, August 28, 2018 4:22 PM
To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so" which is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is present in "/usr/lib64/".
If not create a symlink similar to "libgfchangelog.so.0"

It should be something like below.

#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1

On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Thanks for the response, I did that also but nothing changed.

[***@gluster-poc-noida ~]# ldconfig /usr/lib64
[***@gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
[***@gluster-poc-noida ~]#

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
[***@gluster-poc-noida ~]#

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Tuesday, August 28, 2018 4:00 PM
To: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>
Cc: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog


On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar <***@redhat.com<mailto:***@redhat.com>> wrote:
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I made the changes given in the patch and restarted the geo-replication session, but I am getting the same errors in the logs again.
I am attaching the config files and logs here.
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Creating geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
With the same log message?
Can you please verify that
https://review.gluster.org/#/c/glusterfs/+/20207/ patch is present; if not, can you please apply it.
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so.
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response, I tried both, but still I am getting the same error.
/usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
/usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solve your problem, you can create a manual symlink
like - ln -s /usr/lib64/libgfchangelog.so.1
/usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] RepceClient: call failed call=26469:139794524604224:1535440781.44 method=init error=OSError
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What can I do to fix it?
/Krish
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
Gluster-***@gluster.org<mailto:Gluster-***@gluster.org>
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
Kotresh Hiremath Ravishankar
2018-08-30 05:21:02 UTC
Permalink
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's
a different issue.

-Kotresh HR
Post by Krishna Verma
Hi Kotresh,
Thank you so much for your input. Geo-replication is now showing “Active”
for at least one master node, but it is still in a Faulty state on the 2nd
master server.
Below is the detail.
glusterep gluster-poc-sj::glusterep status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER
SLAVE SLAVE NODE STATUS CRAWL STATUS
LAST_SYNCED
------------------------------------------------------------
------------------------------------------------------------
--------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl
2018-08-29 23:56:06
noi-poc-gluster glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A
N/A
Status of volume: glusterep
Gluster process TCP Port RDMA Port Online
Pid
------------------------------------------------------------
------------------
Brick gluster-poc-noida:/data/gluster/gv0 49152 0 Y
22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0 Y
19471
Self-heal Daemon on localhost N/A N/A Y
32087
Self-heal Daemon on noi-poc-gluster N/A N/A Y
6272
Task Status of Volume glusterep
------------------------------------------------------------
------------------
There are no active volume tasks
Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
Could you please help me in that also please?
It would be really a great help from your side.
/Krishna
*Sent:* Wednesday, August 29, 2018 10:47 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Answer inline
Hi Kotresh,
I created the links before. Below is the detail.
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so ->
/usr/lib64/libgfchangelog.so.1
The link created is pointing to wrong library. Please fix this
#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 ->
libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
Does this look like what we need, or do I need to create any more links?
How do I get the “libgfchangelog.so” file if it is missing?
/Krishna
*Sent:* Tuesday, August 28, 2018 4:22 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so" which
is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is
present in "/usr/lib64/".
If not create a symlink similar to "libgfchangelog.so.0"
It should be something like below.
#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
Hi Kotresh,
Thanks for the response, I did that also but nothing changed.
libgfchangelog.so.0 (libc6,x86-64) =>
/usr/lib64/libgfchangelog.so.0
gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER
SLAVE SLAVE NODE STATUS CRAWL STATUS
LAST_SYNCED
------------------------------------------------------------
------------------------------------------------------------
-----------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty
N/A N/A
/Krishna
*Sent:* Tuesday, August 28, 2018 4:00 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I made the changes given in the patch and restarted the session for
geo-replication, but I am getting the same errors in the logs again.
Post by Krishna Verma
I am attaching the config files and logs here.
gluster-poc-sj::glusterep stop
Post by Krishna Verma
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep delete
Post by Krishna Verma
Deleting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep create push-pem force
Post by Krishna Verma
Creating geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
Post by Krishna Verma
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
gluster-poc-sj::glusterep start
Post by Krishna Verma
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
syncdaemon/repce.py
gluster-poc-sj::glusterep start
Post by Krishna Verma
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
Post by Krishna Verma
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER
SLAVE SLAVE NODE STATUS CRAWL STATUS
LAST_SYNCED
Post by Krishna Verma
------------------------------------------------------------
------------------------------------------------------------
-----------------------------
Post by Krishna Verma
gluster-poc-noida glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
Post by Krishna Verma
noi-poc-gluster glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
Post by Krishna Verma
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Post by Krishna Verma
EXTERNAL MAIL
With the same log message?
Can you please verify that
https://review.gluster.org/#/c/glusterfs/+/20207/ patch is present; if not,
can you please apply it.
Post by Krishna Verma
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.
Post by Krishna Verma
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response, I tried both, but still I am getting the
same error.
Post by Krishna Verma
Post by Krishna Verma
/usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
/usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solve your problem, you can create a manual symlink
like - ln -s /usr/lib64/libgfchangelog.so.1
/usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.446785] E [repce(worker
/data/gluster/gv0):197:__call__] RepceClient: call failed
call=26469:139794524604224:1535440781.44 method=init
error=OSError
Post by Krishna Verma
Post by Krishna Verma
Post by Krishna Verma
[2018-08-28 07:19:41.447041] E [syncdutils(worker
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line
216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line
198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.457555] I [repce(agent
/data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching
EOF.
Post by Krishna Verma
Post by Krishna Verma
Post by Krishna Verma
[2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor]
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What can I do to fix it?
/Krish
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
Kotresh Hiremath Ravishankar
2018-08-30 09:49:50 UTC
Permalink
Post by Krishna Verma
Hi Kotresh,
After fixing the library link on node "noi-poc-gluster", the status of one
master node is “Active” and the other is “Passive”. Can I set up both
masters as “Active”?
No. Since it's a replica, it is redundant to sync the same files from two nodes.
Both replicas can't be Active.
Post by Krishna Verma
Also, when I copy a 1GB file from a gluster client to the master gluster
volume, which is replicated to the slave volume, it took 35 minutes and
49 seconds. Is there any way to reduce the time taken to rsync the data?
How did you measure this time? Does this include the time taken for you to
write the 1GB file to master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active,
step 2 above equals the time for step 1 plus the network transfer time.

You can measure the time in two scenarios:
1. Start geo-rep while the data is still being written to master; that is one way.
2. Or stop geo-rep until the 1GB file is written to master, and then start
geo-rep to get the actual geo-rep transfer time.
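The second scenario can be scripted roughly as below. `measure_secs` is a small helper written for this sketch (not a gluster command), and the gluster invocations are left as comments to adapt to your volume and mount names:

```shell
# Sketch: time the master write separately from the geo-rep sync.
measure_secs() {                       # helper: print how long "$@" takes, in seconds
    local start end
    start=$(date +%s)
    "$@"
    end=$(date +%s)
    echo $(( end - start ))
}

# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
write_secs=$(measure_secs sleep 2)     # stand-in for: cp gentoo_root.img /mnt/<master-mount>/
echo "master write took ${write_secs}s"
# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
# ...then poll the status output until LAST_SYNCED passes the write timestamp;
# that interval is the actual geo-rep transfer time.
```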

To improve replication speed:
1. You can experiment with rsync options depending on the kind of I/O,
and configure the same for geo-rep, as it also uses rsync internally.
2. It's better if the gluster volume has a higher distribute count, like 3*3 or 4*3.
That helps in two ways:
1. Files get distributed across multiple bricks on the master.
2. Geo-rep then syncs the files on multiple bricks in parallel
(multiple Actives).
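For point 1, geo-replication exposes the rsync command line through its config interface. A config fragment as a sketch (the option value is only an example; verify the supported keys against your gluster version's `config` output before applying):

```shell
# Config fragment (not runnable here): tune the rsync used by geo-rep.
# List the current config keys first:
#   gluster volume geo-replication glusterep gluster-poc-sj::glusterep config
# Then, for example, enable compression for a WAN link:
#   gluster volume geo-replication glusterep gluster-poc-sj::glusterep \
#       config rsync-options "--compress"
```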
Post by Krishna Verma
NOTE: The Gluster master server and one client are in the Noida, India location.
The Gluster slave server and one client are in the USA.
Our goal is for data from the Noida gluster client to reach
the USA gluster client in minimum time. Please suggest the best approach
to achieve this.
/glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)
Is this the I/O time to write to the master volume?
Post by Krishna Verma
sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER
SLAVE SLAVE NODE STATUS CRAWL
STATUS LAST_SYNCED
------------------------------------------------------------
------------------------------------------------------------
---------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root
ssh://gluster-poc-sj::glusterep gluster-poc-sj Active Changelog
Crawl 2018-08-30 13:42:18
noi-poc-gluster glusterep /data/gluster/gv0 root
ssh://gluster-poc-sj::glusterep gluster-poc-sj Passive
N/A N/A
Thanks in advance for your all time support.
/Krishna
*Sent:* Thursday, August 30, 2018 10:51 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's
a different issue.
-Kotresh HR
Hi Kotresh,
Thank you so much for you input. Geo-replication is now showing “Active”
atleast for 1 master node. But its still at faulty state for the 2nd
master server.
Below is the detail.
glusterep gluster-poc-sj::glusterep status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER
SLAVE SLAVE NODE STATUS CRAWL STATUS
LAST_SYNCED
------------------------------------------------------------
------------------------------------------------------------
--------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl
2018-08-29 23:56:06
noi-poc-gluster glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A
N/A
Status of volume: glusterep
Gluster process TCP Port RDMA Port Online
Pid
------------------------------------------------------------
------------------
Brick gluster-poc-noida:/data/gluster/gv0 49152 0 Y
22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0 Y
19471
Self-heal Daemon on localhost N/A N/A Y
32087
Self-heal Daemon on noi-poc-gluster N/A N/A Y
6272
Task Status of Volume glusterep
------------------------------------------------------------
------------------
There are no active volume tasks
Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
Could you please help me in that also please?
It would be really a great help from your side.
/Krishna
*Sent:* Wednesday, August 29, 2018 10:47 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Answer inline
Hi Kotresh,
I created the links before. Below is the detail.
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so ->
/usr/lib64/libgfchangelog.so.1
The link created is pointing to wrong library. Please fix this
#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 ->
libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
Does this look like what we need, or do I need to create any more links?
How do I get the “libgfchangelog.so” file if it is missing?
/Krishna
*Sent:* Tuesday, August 28, 2018 4:22 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so" which
is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is
present in "/usr/lib64/".
If not create a symlink similar to "libgfchangelog.so.0"
It should be something like below.
#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
Hi Kotresh,
Thanks for the response, I did that also but nothing changed.
libgfchangelog.so.0 (libc6,x86-64) =>
/usr/lib64/libgfchangelog.so.0
gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER
SLAVE SLAVE NODE STATUS CRAWL STATUS
LAST_SYNCED
------------------------------------------------------------
------------------------------------------------------------
-----------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty
N/A N/A
/Krishna
*Sent:* Tuesday, August 28, 2018 4:00 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I made the changes given in the patch and restarted the session for
geo-replication, but I am getting the same errors in the logs again.
Post by Krishna Verma
I have attaching the config files and logs here.
gluster-poc-sj::glusterep stop
Post by Krishna Verma
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep delete
Post by Krishna Verma
Deleting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep create push-pem force
Post by Krishna Verma
Creating geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
Post by Krishna Verma
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
gluster-poc-sj::glusterep start
Post by Krishna Verma
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
syncdaemon/repce.py
gluster-poc-sj::glusterep start
Post by Krishna Verma
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
Post by Krishna Verma
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER
SLAVE SLAVE NODE STATUS CRAWL STATUS
LAST_SYNCED
Post by Krishna Verma
------------------------------------------------------------
------------------------------------------------------------
-----------------------------
Post by Krishna Verma
gluster-poc-noida glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
Post by Krishna Verma
noi-poc-gluster glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
Post by Krishna Verma
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Post by Krishna Verma
EXTERNAL MAIL
With same log message ?
Can you please verify that
https://urldefense.proofpoint.com/v2/url?u=https-3A__review.
gluster.org_-23_c_glusterfs_-2B_20207_&d=DwIBaQ&c=
aUq983L2pue2FqKFoP6PGHMJQyoJ7kl3s3GZ-_haXqY&r=0E5nRoxLsT2ZXgCpJM_
6ZItAWQ2jH8rVLG6tiXhoLFE&m=F0ExtFUfa_YCktOGvy82x3IAxvi2GrbPR72jZ8beuYk&s=
fGtkmezHJj5YoLN3dUeVUCcYFnREHyOSk36mRjbTTEQ&e= patch is present if not
can you please apply that.
Post by Krishna Verma
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.
Post by Krishna Verma
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response, I tried both, but still I am getting the
same error.
Post by Krishna Verma
Post by Krishna Verma
/usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
/usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solves your problem you can do manual symlink
like - ln -s /usr/lib64/libgfchangelog.so.1
/usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.446785] E [repce(worker
/data/gluster/gv0):197:__call__] RepceClient: call failed
call=26469:139794524604224:1535440781.44 method=init
error=OSError
Post by Krishna Verma
Post by Krishna Verma
Post by Krishna Verma
[2018-08-28 07:19:41.447041] E [syncdutils(worker
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line
216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line
198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.457555] I [repce(agent
/data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching
EOF.
Post by Krishna Verma
Post by Krishna Verma
Post by Krishna Verma
[2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor]
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What I can do to fix it ?
/Krish
_______________________________________________
Gluster-users mailing list
https://urldefense.proofpoint.com/v2/url?u=https-3A__lists.gluster.o
rg
_mailman_listinfo_gluster-2Dusers&d=DwIBaQ&c=aUq983L2pue2FqKFoP6PGHM
JQ
yoJ7kl3s3GZ-_haXqY&r=0E5nRoxLsT2ZXgCpJM_6ZItAWQ2jH8rVLG6tiXhoLFE&m=_
u6
vGRjlVsype7Z8hXDgCONilqVe4sIWkXNqqz2n3IQ&s=i0EUwtUHurhJHyw9UPpepCdLB
70
1mkxoNZWYvU7XXug&e=
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
<https://urldefense.proofpoint.com/v2/url?u=https-3A__lists.gluster.org_mailman_listinfo_gluster-2Dusers&d=DwMFaQ&c=aUq983L2pue2FqKFoP6PGHMJQyoJ7kl3s3GZ-_haXqY&r=0E5nRoxLsT2ZXgCpJM_6ZItAWQ2jH8rVLG6tiXhoLFE&m=BuUVeXUwqxdQDoP-IWDI_JPFCQGrRFdzV9g0enLn8kM&s=X0eVYgZ1emUQnEwRTg0AGbAWyoIwIyrE_gzmuolgiPE&e=>
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
Krishna Verma
2018-08-30 10:21:32 UTC
Permalink
Hi Kotresh,

Yes, this includes the time taken to write the 1GB file to master; geo-rep was not stopped while the data was being copied to master.

But now I am in trouble. My PuTTY session timed out while data was being copied to the master and geo-replication was active. After I restarted the PuTTY session, my master data is not syncing with the slave; its LAST_SYNCED time is 1 hour behind the current time.

I restarted geo-rep, and also deleted and re-created the session, but its “LAST_SYNCED” time stays the same.

Please help in this.


Regarding “It's better if the gluster volume has a higher distribute count like 3*3 or 4*3”: are you referring to creating a distributed volume with 3 master nodes and 3 slave nodes?


/krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Thursday, August 30, 2018 3:20 PM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

After fixing the library link on node "noi-poc-gluster", the status of one master node is “Active” and the other is “Passive”. Can I set up both masters as “Active”?

Nope. Since it's a replica, it's redundant to sync the same files from two nodes; both replicas can't be Active.


Also, when I copy a 1GB file from the gluster client to the master gluster volume, which is replicated to the slave volume, it took 35 minutes and 49 seconds. Is there any way to reduce the time taken to rsync the data?

How did you measure this time? Does this include the time taken for you to write the 1GB file to master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active, step 2 above equals the time for step 1 plus the network transfer time.

You can measure the time in two scenarios:
1. Start geo-rep while the data is still being written to master; that's one way.
2. Or stop geo-rep until the 1GB file is fully written to master, then start geo-rep to get the actual geo-rep transfer time.
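A minimal sketch of the second scenario, assuming the volume and session names used elsewhere in this thread (glusterep, gluster-poc-sj) and the 1GB test file from the rsync example below; treat it as an illustration, not a prescribed procedure:

```
# Scenario 2: separate the master write time from the geo-rep transfer time.
gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
date; cp /tmp/gentoo_root.img /glusterfs/; date   # this interval = step 1 (master write)
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
# Re-run status until LAST_SYNCED passes the write completion time;
# that interval is step 2 (geo-rep transfer).
gluster volume geo-replication glusterep gluster-poc-sj::glusterep status
```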

To improve replication speed:
1. You can play around with rsync options depending on the kind of I/O,
and configure the same for geo-rep, as it also uses rsync internally.
2. It's better if the gluster volume has a higher distribute count, like 3*3 or 4*3.
It will help in two ways:
1. The files get distributed across multiple bricks on master.
2. That in turn helps geo-rep, as files on multiple bricks are synced in parallel (multiple Actives).
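Point 1 above can be applied through geo-rep's own config interface. A hedged sketch with the session names from this thread; the option values are placeholders to tune for your workload, and the use_tarssh spelling varies across releases:

```
# Pass extra flags to the rsync that geo-rep invokes internally.
gluster volume geo-replication glusterep gluster-poc-sj::glusterep \
    config rsync-options "--compress-level=9"
# For many-small-files workloads, tar over ssh can outperform rsync.
gluster volume geo-replication glusterep gluster-poc-sj::glusterep \
    config use_tarssh true
```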

NOTE: Gluster master server and one client is in Noida, India Location.
Gluster Slave server and one client is in USA.

Our goal is for data written from the Noida gluster client to reach the USA gluster client in minimum time. Please suggest the best approach to achieve it.

[***@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img /glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)

Is this the I/O time to write to the master volume?

sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
[***@noi-dcops ~]#



[***@gluster-poc-noida gluster]# gluster volume geo-replication status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-30 13:42:18
noi-poc-gluster glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Passive N/A N/A
[***@gluster-poc-noida gluster]#

Thanks in advance for your all time support.

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Thursday, August 30, 2018 10:51 AM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's
a different issue.
-Kotresh HR

On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Thank you so much for your input. Geo-replication is now showing “Active” for at least one master node, but it is still in the Faulty state for the second master server.

Below is the detail.

[***@gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-29 23:56:06
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A


[***@gluster-poc-noida glusterfs]# gluster volume status
Status of volume: glusterep
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0 49152 0 Y 22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0 Y 19471
Self-heal Daemon on localhost N/A N/A Y 32087
Self-heal Daemon on noi-poc-gluster N/A N/A Y 6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks



[***@gluster-poc-noida glusterfs]# gluster volume info

Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[***@gluster-poc-noida glusterfs]#

Could you please help me with that as well?

It would really be a great help.

/Krishna
From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Wednesday, August 29, 2018 10:47 AM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Answer inline

On Tue, Aug 28, 2018 at 4:28 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

I created the links before. Below is the detail.

[***@gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1

The link created is pointing to the wrong library. Please fix it:

#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so

lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]# locate libgfchangelog.so
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]#

Does this look like what we need, or do I need to create more links? And how do I get the “libgfchangelog.so” file if it is missing?

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Tuesday, August 28, 2018 4:22 PM
To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so", which is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is present in "/usr/lib64/".
If not, create a symlink similar to "libgfchangelog.so.0".

It should be something like below.

#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la<https://urldefense.proofpoint.com/v2/url?u=http-3A__libgfchangelog.la&d=DwMFaQ&c=aUq983L2pue2FqKFoP6PGHMJQyoJ7kl3s3GZ-_haXqY&r=0E5nRoxLsT2ZXgCpJM_6ZItAWQ2jH8rVLG6tiXhoLFE&m=77GIqpHy9HY8RQd6lKzSJ-Z1PCuIhZJ3I3IvIuDX-xo&s=kIFnrBaSFV_DdqZezd6PXcDnD8Iy_gVN69ETZYtykEE&e=>
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
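The layout above can be illustrated with a self-contained sketch: gsyncd loads the library by its unversioned name, libgfchangelog.so, so that name must resolve as a symlink to the real versioned file. A dummy file in a temporary directory stands in for the library here:

```shell
dir=$(mktemp -d)
touch "$dir/libgfchangelog.so.0.0.1"                     # stand-in for the real library
ln -s libgfchangelog.so.0.0.1 "$dir/libgfchangelog.so.0"
ln -s libgfchangelog.so.0.0.1 "$dir/libgfchangelog.so"   # the unversioned link gsyncd needs
readlink "$dir/libgfchangelog.so"                        # prints: libgfchangelog.so.0.0.1
rm -r "$dir"
```

On the real node the same ln -s is done in /usr/lib64, followed by ldconfig /usr/lib64 so the loader cache picks up the new link.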

On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Thanks for the response, I did that also but nothing changed.

[***@gluster-poc-noida ~]# ldconfig /usr/lib64
[***@gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
[***@gluster-poc-noida ~]#

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
[***@gluster-poc-noida ~]#

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Tuesday, August 28, 2018 4:00 PM
To: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>
Cc: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog


On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar <***@redhat.com<mailto:***@redhat.com>> wrote:
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I made the changes given in the patch and restarted the geo-replication session, but the same errors appear in the logs.
I have attaching the config files and logs here.
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Creating geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
With the same log message?
Can you please verify that the patch at
https://urldefense.proofpoint.com/v2/url?u=https-3A__review.gluster.org_-23_c_glusterfs_-2B_20207_&d=DwIBaQ&c=aUq983L2pue2FqKFoP6PGHMJQyoJ7kl3s3GZ-_haXqY&r=0E5nRoxLsT2ZXgCpJM_6ZItAWQ2jH8rVLG6tiXhoLFE&m=F0ExtFUfa_YCktOGvy82x3IAxvi2GrbPR72jZ8beuYk&s=fGtkmezHJj5YoLN3dUeVUCcYFnREHyOSk36mRjbTTEQ&e= is present; if not, can you please apply it,
and try with the symlink: ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so.
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response. I tried both, but I am still getting the same error.
/usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
/usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solve your problem, you can create the symlink manually:
ln -s /usr/lib64/libgfchangelog.so.1
/usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] RepceClient: call failed call=26469:139794524604224:1535440781.44 method=init error=OSError
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What I can do to fix it ?
/Krish
--
Thanks and Regards,
Kotresh H R
Krishna Verma
2018-08-30 10:37:03 UTC
Permalink
Hi Kotresh,

Just an update on the sync issue.

I erased the index by running gluster volume set <vol-name> indexing off
after stopping the geo-rep session, then started the session again. Now it is syncing.
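A sketch of that sequence, assuming the volume and session names from this thread and the geo-replication.indexing option shown in the volume info earlier:

```
gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
gluster volume set glusterep geo-replication.indexing off   # drop the index
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
```

Dropping the index forces geo-rep to re-crawl the volume, so expect a temporary slowdown while it catches up.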

For your query:

Yes, this includes the time taken to write the 1GB file to master; geo-rep was not stopped while the data was being copied to master.


Regarding “It's better if the gluster volume has a higher distribute count like 3*3 or 4*3”: are you referring to creating a distributed volume with 3 master nodes and 3 slave nodes?


/Krishna


From: Krishna Verma
Sent: Thursday, August 30, 2018 3:52 PM
To: 'Kotresh Hiremath Ravishankar' <***@redhat.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Kotresh,

Yes, this include the time take to write 1GB file to master. geo-rep was not stopped while the data was copying to master.

But now I am trouble, My putty session was timed out while copying data to master and geo replication was active. After I restart putty session My Master data is not syncing with slave. Its Last_synced time is 1hrs behind the current time.

I restart the geo rep and also delete and again create the session but its “LAST_SYNCED” time is same.

Please help in this.


. It's better if gluster volume has more distribute count like 3*3 or 4*3 :- Are you refereeing to create a distributed volume with 3 master node and 3 slave node?


/krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Thursday, August 30, 2018 3:20 PM
To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

After fix the library link on node "noi-poc-gluster ", the status of one mater node is “Active” and another is “Passive”. Can I setup both the master as “Active” ?

Nope, since it's replica, it's redundant to sync same files from two nodes. Both replicas can't be Active.


Also, when I copy a 1GB size of file from gluster client to master gluster volume which is replicated with the slave volume, it tooks 35 minutes and 49 seconds. Is there any way to reduce its time taken to rsync data.

How did you measure this time? Does this include the time take for you to write 1GB file to master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active, Step2 above equals to time for step1 + network transfer time.

You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master. It's one way.
2. Or stop geo-rep until the 1GB file is written to master and then start geo-rep to get actual geo-rep time.

To improve replicating speed,
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep as it also uses rsync internally.
2. It's better if gluster volume has more distribute count like 3*3 or 4*3
It will help in two ways.
1. The files gets distributed on master to multiple bricks
2. So above will help geo-rep as files on multiple bricks are synced in parallel (multiple Actives)

NOTE: Gluster master server and one client is in Noida, India Location.
Gluster Slave server and one client is in USA.

Our approach is to transfer data from Noida gluster client will reach to the USA gluster client in a minimum time. Please suggest the best approach to achieve it.

[***@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img /glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)

Is this I/O time to write to master volume?

sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
[***@noi-dcops ~]#



[***@gluster-poc-noida gluster]# gluster volume geo-replication status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-30 13:42:18
noi-poc-gluster glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Passive N/A N/A
[***@gluster-poc-noida gluster]#

Thanks in advance for your all time support.

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Thursday, August 30, 2018 10:51 AM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster " as well?
If not please fix it. Please share the geo-rep log this node if it's
as different issue.
-Kotresh HR

On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Thank you so much for you input. Geo-replication is now showing “Active” atleast for 1 master node. But its still at faulty state for the 2nd master server.

Below is the detail.

[***@gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-29 23:56:06
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A


[***@gluster-poc-noida glusterfs]# gluster volume status
Status of volume: glusterep
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0 49152 0 Y 22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0 Y 19471
Self-heal Daemon on localhost N/A N/A Y 32087
Self-heal Daemon on noi-poc-gluster N/A N/A Y 6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks



[***@gluster-poc-noida glusterfs]# gluster volume info

Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[***@gluster-poc-noida glusterfs]#

Could you please help me in that also please?

It would be really a great help from your side.

/Krishna
From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Wednesday, August 29, 2018 10:47 AM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Answer inline

On Tue, Aug 28, 2018 at 4:28 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

I created the links before. Below is the detail.

[***@gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1

The link created is pointing to wrong library. Please fix this

#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so

lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]# locate libgfchangelog.so
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]#

Is it looks good what we exactly need or di I need to create any more link or How to get “libgfchangelog.so” file if missing.

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Tuesday, August 28, 2018 4:22 PM
To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so" which is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is present in "/usr/lib64/".
If not create a symlink similar to "libgfchangelog.so.0"

It should be something like below.

#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la<https://urldefense.proofpoint.com/v2/url?u=http-3A__libgfchangelog.la&d=DwMFaQ&c=aUq983L2pue2FqKFoP6PGHMJQyoJ7kl3s3GZ-_haXqY&r=0E5nRoxLsT2ZXgCpJM_6ZItAWQ2jH8rVLG6tiXhoLFE&m=77GIqpHy9HY8RQd6lKzSJ-Z1PCuIhZJ3I3IvIuDX-xo&s=kIFnrBaSFV_DdqZezd6PXcDnD8Iy_gVN69ETZYtykEE&e=>
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1

On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Thanks for the response, I did that also but nothing changed.

[***@gluster-poc-noida ~]# ldconfig /usr/lib64
[***@gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
[***@gluster-poc-noida ~]#

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
[***@gluster-poc-noida ~]#

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Tuesday, August 28, 2018 4:00 PM
To: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>
Cc: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog
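A quick way to check both conditions at once (a sketch; it assumes a Linux box with the glusterfs libraries expected in /usr/lib64, and is read-only apart from the echo fallbacks):

```shell
# Check 1: does the unversioned name exist on disk?
ls -l /usr/lib64/libgfchangelog.so* 2>/dev/null \
  || echo "no libgfchangelog in /usr/lib64"
# Check 2: does the dynamic linker cache list the library?
ldconfig -p 2>/dev/null | grep libgfchangelog \
  || echo "libgfchangelog not in linker cache"
```

If check 1 shows only the versioned files, create the libgfchangelog.so symlink as discussed in this thread; if check 2 still fails afterwards, rerun `ldconfig /usr/lib64` as root.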


On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar <***@redhat.com<mailto:***@redhat.com>> wrote:
can you do ldconfig /usr/local/lib and share the output of ldconfig -p /usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I made the changes given in the patch and restarted the geo-replication session, but I see the same errors in the logs again.
I have attaching the config files and logs here.
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Creating geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
With the same log message?
Can you please verify that
https://review.gluster.org/#/c/glusterfs/+/20207/ patch is present; if not, can you please apply it.
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so.
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response, I tried both, but still I am getting the same error.
/usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
/usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solve your problem, you can create a manual symlink
like: ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] RepceClient: call failed call=26469:139794524604224:1535440781.44 method=init error=OSError
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What can I do to fix it?
/Krish
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
Gluster-***@gluster.org<mailto:Gluster-***@gluster.org>
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
Kotresh Hiremath Ravishankar
2018-08-31 04:40:06 UTC
Permalink
Post by Krishna Verma
Hi Kotresh,
Yes, this include the time take to write 1GB file to master. geo-rep was
not stopped while the data was copying to master.
This way, you can't really measure how much time geo-rep took.
Post by Krishna Verma
But now I am in trouble. My PuTTY session timed out while copying data to
master while geo-replication was active. After I restarted the PuTTY session,
my master data is not syncing with the slave. Its LAST_SYNCED time is 1 hr
behind the current time.
I restarted geo-rep and also deleted and re-created the session, but its
“LAST_SYNCED” time is the same.
Unless, geo-rep is Faulty, it would be processing/syncing. You should check
logs for any errors.
Post by Krishna Verma
Please help in this.

. It's better if the gluster volume has a higher distribute count like 3*3 or
4*3 :- Are you referring to creating a distributed volume with 3 master
nodes and 3 slave nodes?
Yes, that's correct. Please do the test with this. I recommend you to run
the actual workload for which you are planning to use gluster instead of
copying 1GB file and testing.
Post by Krishna Verma
/krishna
*Sent:* Thursday, August 30, 2018 3:20 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Kotresh,
After fixing the library link on node "noi-poc-gluster", the status of one
master node is “Active” and the other is “Passive”. Can I set up both
masters as “Active”?
Nope, since it's replica, it's redundant to sync same files from two
nodes. Both replicas can't be Active.
Also, when I copy a 1GB file from the gluster client to the master gluster
volume, which is replicated to the slave volume, it took 35 minutes and
49 seconds. Is there any way to reduce the time taken to rsync the data?
How did you measure this time? Does this include the time taken for you to
write the 1GB file to master?
There are two aspects to consider while measuring this.
1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.
In your case, since the setup is 1*2 and only one geo-rep worker is
Active, Step2 above equals to time for step1 + network transfer time.
You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master. It's one way.
2. Or stop geo-rep until the 1GB file is written to master and then start
geo-rep to get actual geo-rep time.
To improve replicating speed,
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep as it also uses rsync internally.
2. It's better if gluster volume has more distribute count like 3*3 or 4*3
It will help in two ways.
1. The files gets distributed on master to multiple bricks
2. So above will help geo-rep as files on multiple bricks are
synced in parallel (multiple Actives)
NOTE: Gluster master server and one client is in Noida, India Location.
Gluster Slave server and one client is in USA.
Our approach is to transfer data from Noida gluster client will reach to
the USA gluster client in a minimum time. Please suggest the best approach
to achieve it.
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)
Is this I/O time to write to master volume?
sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                              SLAVE NODE        STATUS     CRAWL STATUS       LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Active     Changelog Crawl    2018-08-30 13:42:18
noi-poc-gluster      glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Passive    N/A                N/A
Thanks in advance for your all time support.
/Krishna
*Sent:* Thursday, August 30, 2018 10:51 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's
a different issue.
-Kotresh HR
Hi Kotresh,
Thank you so much for your input. Geo-replication is now showing “Active”
at least for 1 master node. But it's still in a faulty state for the 2nd
master server.
Below is the detail.
glusterep gluster-poc-sj::glusterep status
MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE        STATUS    CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    gluster-poc-sj    Active    Changelog Crawl    2018-08-29 23:56:06
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A               Faulty    N/A                N/A
Status of volume: glusterep
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0   49152     0          Y       22463
Brick noi-poc-gluster:/data/gluster/gv0     49152     0          Y       19471
Self-heal Daemon on localhost               N/A       N/A        Y       32087
Self-heal Daemon on noi-poc-gluster         N/A       N/A        Y       6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks
Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
Could you please help me in that also please?
It would be really a great help from your side.
/Krishna
*Sent:* Wednesday, August 29, 2018 10:47 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Answer inline
Hi Kotresh,
I created the links before. Below is the detail.
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so ->
/usr/lib64/libgfchangelog.so.1
The link created is pointing to the wrong library. Please fix it:
#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 ->
libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
Does it look like what we need, or do I need to create any more links?
And how do I get the “libgfchangelog.so” file if it is missing?
/Krishna
*Sent:* Tuesday, August 28, 2018 4:22 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so" which
is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is
present in "/usr/lib64/".
If not create a symlink similar to "libgfchangelog.so.0"
It should be something like below.
#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
Hi Kotresh,
Thanks for the response, I did that also but nothing changed.
libgfchangelog.so.0 (libc6,x86-64) =>
/usr/lib64/libgfchangelog.so.0
gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
-------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
/Krishna
*Sent:* Tuesday, August 28, 2018 4:00 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I did the mentioned changes given in patch and restart the session for
geo-replication. But again same errors in the logs.
Post by Krishna Verma
I have attaching the config files and logs here.
gluster-poc-sj::glusterep stop
Post by Krishna Verma
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep delete
Post by Krishna Verma
Deleting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep create push-pem force
Post by Krishna Verma
Creating geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
Post by Krishna Verma
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
gluster-poc-sj::glusterep start
Post by Krishna Verma
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
syncdaemon/repce.py
gluster-poc-sj::glusterep start
Post by Krishna Verma
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
Post by Krishna Verma
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER
SLAVE SLAVE NODE STATUS CRAWL STATUS
LAST_SYNCED
Post by Krishna Verma
------------------------------------------------------------
------------------------------------------------------------
-----------------------------
Post by Krishna Verma
gluster-poc-noida glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
Post by Krishna Verma
noi-poc-gluster glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
Post by Krishna Verma
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Post by Krishna Verma
EXTERNAL MAIL
With same log message ?
Can you please verify that
https://review.gluster.org/#/c/glusterfs/+/20207/ patch is present if not
can you please apply that.
Post by Krishna Verma
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.
Post by Krishna Verma
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response, I tried both, but still I am getting the
same error.
Post by Krishna Verma
Post by Krishna Verma
/usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
/usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solve your problem, you can create a manual symlink
like: ln -s /usr/lib64/libgfchangelog.so.1
/usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.446785] E [repce(worker
/data/gluster/gv0):197:__call__] RepceClient: call failed
call=26469:139794524604224:1535440781.44 method=init
error=OSError
Post by Krishna Verma
Post by Krishna Verma
Post by Krishna Verma
[2018-08-28 07:19:41.447041] E [syncdutils(worker
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line
216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line
198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.457555] I [repce(agent
/data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching
EOF.
Post by Krishna Verma
Post by Krishna Verma
Post by Krishna Verma
[2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor]
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What can I do to fix it?
/Krish
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
Krishna Verma
2018-08-31 07:12:26 UTC
Permalink
Hi Kotresh,

I have tested geo-replication over distributed volumes with a 2*2 gluster setup.

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj Active Changelog Crawl 2018-08-31 10:28:19
noi-poc-gluster glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj2 Active History Crawl N/A
[***@gluster-poc-noida ~]#

Now at the client I copied an 848MB file from local disk to the master mounted volume, and it took only 1 minute and 15 seconds. That's great.

But even after waiting for 2 hrs I was unable to see that file at the slave site. Then I again erased the indexing by doing “gluster volume set glusterdist indexing off” and restarted the session. Magically, I received the file at the slave instantly after doing this.

Why do I need to do “indexing off” every time for data to reflect at the slave site? Is there any fix/workaround for it?
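The workaround sequence described above, collected in one place as a dry-run sketch (the gluster commands are only echoed so the sequence can be reviewed; dropping the echo would run them against the live volume, which should be done with care since indexing is normally managed by geo-replication itself):

```shell
# Dry-run of the stop / indexing-off / start cycle from this thread.
vol=glusterdist
slave=gluster-poc-sj::glusterdist
for cmd in \
  "gluster volume geo-replication $vol $slave stop" \
  "gluster volume set $vol indexing off" \
  "gluster volume geo-replication $vol $slave start"
do
  echo "would run: $cmd"
done
```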

/Krishna


From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Friday, August 31, 2018 10:10 AM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 3:51 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Yes, this include the time take to write 1GB file to master. geo-rep was not stopped while the data was copying to master.

This way, you can't really measure how much time geo-rep took.


But now I am in trouble. My PuTTY session timed out while copying data to master while geo-replication was active. After I restarted the PuTTY session, my master data is not syncing with the slave. Its LAST_SYNCED time is 1 hr behind the current time.

I restarted geo-rep and also deleted and re-created the session, but its “LAST_SYNCED” time is the same.

Unless geo-rep is Faulty, it would be processing/syncing. You should check the logs for any errors.


Please help in this.


. It's better if the gluster volume has a higher distribute count like 3*3 or 4*3 :- Are you referring to creating a distributed volume with 3 master nodes and 3 slave nodes?

Yes, that's correct. Please do the test with this. I recommend you run the actual workload for which you are planning to use gluster instead of copying a 1GB file and testing.



/krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Thursday, August 30, 2018 3:20 PM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

After fixing the library link on node "noi-poc-gluster", the status of one master node is “Active” and the other is “Passive”. Can I set up both masters as “Active”?

Nope; since it's a replica, it's redundant to sync the same files from two nodes. Both replicas can't be Active.


Also, when I copy a 1GB file from the gluster client to the master gluster volume, which is replicated to the slave volume, it took 35 minutes and 49 seconds. Is there any way to reduce the time taken to rsync the data?

How did you measure this time? Does this include the time taken for you to write the 1GB file to master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active, Step2 above equals to time for step1 + network transfer time.

You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master. It's one way.
2. Or stop geo-rep until the 1GB file is written to master and then start geo-rep to get actual geo-rep time.

To improve replicating speed,
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep as it also uses rsync internally.
2. It's better if gluster volume has more distribute count like 3*3 or 4*3
It will help in two ways.
    1. The files get distributed on master to multiple bricks
    2. This helps geo-rep, as files on multiple bricks are synced in parallel (multiple Actives)
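The two-scenario measurement described above can be sketched as a small script; the gluster and rsync lines are commented out so the timing skeleton itself runs anywhere (volume, slave, and paths are the ones from this thread, not a general recipe):

```shell
# Scenario 2: stop geo-rep, time only the write to master, then start
# geo-rep; the gap between the write finishing and LAST_SYNCED catching
# up is the geo-rep transfer time.
# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
start=$(date +%s)
# rsync -avh /tmp/gentoo_root.img /glusterfs/    # the actual write
sleep 1                                           # stand-in for the copy
end=$(date +%s)
echo "write to master took $((end - start)) seconds"
# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
```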

NOTE: Gluster master server and one client is in Noida, India Location.
Gluster Slave server and one client is in USA.

Our goal is for data transferred from the Noida gluster client to reach the USA gluster client in minimum time. Please suggest the best approach to achieve this.

[***@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img /glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)

Is this I/O time to write to master volume?

sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
[***@noi-dcops ~]#



[***@gluster-poc-noida gluster]# gluster volume geo-replication status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-30 13:42:18
noi-poc-gluster glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Passive N/A N/A
[***@gluster-poc-noida gluster]#
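For scripting a catch-up check against output like the status table above, the LAST_SYNCED column can be pulled out with awk; a sketch, with the sample line copied from the thread (the field positions are an assumption and should be verified against your gluster version's output):

```shell
# Extract the LAST_SYNCED date and time (the last two fields) from a
# geo-replication status row.
line="gluster-poc-noida glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-30 13:42:18"
last_synced=$(echo "$line" | awk '{print $(NF-1), $NF}')
echo "$last_synced"    # prints 2018-08-30 13:42:18
```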

Thanks in advance for your all time support.

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Thursday, August 30, 2018 10:51 AM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's a different issue.
-Kotresh HR

On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Thank you so much for your input. Geo-replication is now showing “Active” at least for 1 master node. But it's still in a faulty state for the 2nd master server.

Below is the detail.

[***@gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-29 23:56:06
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A


[***@gluster-poc-noida glusterfs]# gluster volume status
Status of volume: glusterep
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0 49152 0 Y 22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0 Y 19471
Self-heal Daemon on localhost N/A N/A Y 32087
Self-heal Daemon on noi-poc-gluster N/A N/A Y 6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks



[***@gluster-poc-noida glusterfs]# gluster volume info

Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[***@gluster-poc-noida glusterfs]#

Could you please help me in that also please?

It would be really a great help from your side.

/Krishna
From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Wednesday, August 29, 2018 10:47 AM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Answer inline

On Tue, Aug 28, 2018 at 4:28 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

I created the links before. Below is the detail.

[***@gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1

The link created is pointing to the wrong library. Please fix it:

#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so

lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]# locate libgfchangelog.so
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]#

Does it look like what we need, or do I need to create any more links? And how do I get the “libgfchangelog.so” file if it is missing?

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Tuesday, August 28, 2018 4:22 PM
To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so" which is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is present in "/usr/lib64/".
If not, create a symlink similar to "libgfchangelog.so.0".

It should be something like below.

#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
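For reference, the expected link layout can be reproduced and checked in a scratch directory (a sketch with stand-in paths; on the real system the directory is /usr/lib64 and the versioned file already exists):

```shell
# Scratch dir standing in for /usr/lib64.
libdir=$(mktemp -d)
# The versioned library file (the only real file; already present on the system).
touch "$libdir/libgfchangelog.so.0.0.1"
# Runtime soname link, normally created by ldconfig.
ln -s libgfchangelog.so.0.0.1 "$libdir/libgfchangelog.so.0"
# Unversioned dev-name link, which gsyncd needs in order to open "libgfchangelog.so".
ln -s libgfchangelog.so.0.0.1 "$libdir/libgfchangelog.so"
ls -l "$libdir" | grep libgfchangelog
```

Both symlinks should point at the versioned file itself, not at each other or at a non-existent libgfchangelog.so.1.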

On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Thanks for the response, I did that also but nothing changed.

[***@gluster-poc-noida ~]# ldconfig /usr/lib64
[***@gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
[***@gluster-poc-noida ~]#

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
[***@gluster-poc-noida ~]#

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Tuesday, August 28, 2018 4:00 PM
To: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>
Cc: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog


On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar <***@redhat.com<mailto:***@redhat.com>> wrote:
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I made the changes given in the patch and restarted the geo-replication session, but I get the same errors in the logs again.
I am attaching the config files and logs here.
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Creating geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
With the same log message?
Can you please verify that
https://review.gluster.org/#/c/glusterfs/+/20207/ patch is present; if not, can you please apply it.
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so.
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response, I tried both, but still I am getting the same error.
/usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
/usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solves your problem you can do manual symlink
like - ln -s /usr/lib64/libgfchangelog.so.1
/usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] RepceClient: call failed call=26469:139794524604224:1535440781.44 method=init error=OSError
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What I can do to fix it ?
/Krish
_______________________________________________
Gluster-users mailing list
Gluster-***@gluster.org<mailto:Gluster-***@gluster.org>
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
Krishna Verma
2018-09-03 04:03:20 UTC
Permalink
Hi Kotresh/Support,

Requesting your help to get this fixed. My slave is not syncing with the master. Only when I restart the session after turning indexing off does the file show up at the slave, and even then it is blank with zero size.

At master: file size is 5.8 GB.

[***@gluster-poc-noida distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@gluster-poc-noida distvol]#

But at the slave, after turning “indexing off”, restarting the session, and then waiting for 2 days, it shows only 4.9 GB copied.

[***@gluster-poc-sj distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
4.9G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@gluster-poc-sj distvol]#

Similarly, I tested with a small file of only 1.2 GB; it is still showing “0” size at the slave after days of waiting.

At Master:

[***@gluster-poc-noida distvol]# du -sh rflowTestInt18.08-b001.t.Z
1.2G rflowTestInt18.08-b001.t.Z
[***@gluster-poc-noida distvol]#

At Slave:

[***@gluster-poc-sj distvol]# du -sh rflowTestInt18.08-b001.t.Z
0 rflowTestInt18.08-b001.t.Z
[***@gluster-poc-sj distvol]#

Below is my distributed volume info:

[***@gluster-poc-noida distvol]# gluster volume info glusterdist

Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
[***@gluster-poc-noida distvol]#

Please help to fix this; I believe this is not normal behavior for gluster rsync.

/Krishna
From: Krishna Verma
Sent: Friday, August 31, 2018 12:42 PM
To: 'Kotresh Hiremath Ravishankar' <***@redhat.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Kotresh,

I have tested geo-replication over distributed volumes with a 2*2 gluster setup.

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj Active Changelog Crawl 2018-08-31 10:28:19
noi-poc-gluster glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj2 Active History Crawl N/A
[***@gluster-poc-noida ~]#

Now at the client I copied an 848 MB file from local disk to the mounted master volume, and it took only 1 minute and 15 seconds. That's great.

But even after waiting for 2 hours I was unable to see that file at the slave site. Then I again erased the indexing by doing “gluster volume set glusterdist indexing off” and restarted the session. Magically, I received the file at the slave instantly after doing this.

Why do I need to set “indexing off” every time for data to show up at the slave site? Is there any fix/workaround for it?

/Krishna


From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Friday, August 31, 2018 10:10 AM
To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 3:51 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Yes, this includes the time taken to write the 1GB file to the master. Geo-rep was not stopped while the data was copying to the master.

This way, you can't really measure how much time geo-rep took.


But now I am in trouble. My PuTTY session timed out while data was copying to the master and geo-replication was active. After I restarted the PuTTY session, my master data is not syncing with the slave. Its LAST_SYNCED time is 1 hour behind the current time.

I restarted geo-rep and also deleted and recreated the session, but its “LAST_SYNCED” time is the same.

Unless geo-rep is Faulty, it would be processing/syncing. You should check the logs for any errors.


Please help in this.


. It's better if gluster volume has more distribute count like 3*3 or 4*3 :- Are you referring to creating a distributed volume with 3 master nodes and 3 slave nodes?

Yes, that's correct. Please do the test with this. I recommend running the actual workload for which you are planning to use gluster instead of copying a 1GB file and testing.



/krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Thursday, August 30, 2018 3:20 PM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

After fixing the library link on node "noi-poc-gluster", the status of one master node is “Active” and the other is “Passive”. Can I set up both masters as “Active”?

Nope; since it's a replica, it's redundant to sync the same files from two nodes. Both replicas can't be Active.


Also, when I copy a 1GB file from the gluster client to the master gluster volume, which is replicated to the slave volume, it took 35 minutes and 49 seconds. Is there any way to reduce the time taken to rsync the data?

How did you measure this time? Does this include the time taken for you to write the 1GB file to the master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active, step 2 above equals the time for step 1 plus the network transfer time.

You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master. It's one way.
2. Or stop geo-rep until the 1GB file is written to master and then start geo-rep to get actual geo-rep time.
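Scenario 2 above can be sketched as follows (volume names taken from this thread; the dd write is a small stand-in for the 1GB copy, and a temp dir stands in for the master mount):

```shell
# Stand-in for the master mount point (/glusterfs in this thread).
mnt=$(mktemp -d)
# 1. With geo-rep assumed stopped, time the write to the master on its own:
#    gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
start=$(date +%s)
dd if=/dev/zero of="$mnt/testfile" bs=1M count=64 status=none
end=$(date +%s)
echo "master write took $((end - start)) seconds"
# 2. Only now start geo-rep and watch LAST_SYNCED advance, so the transfer
#    time is measured separately from the write:
#    gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
#    gluster volume geo-replication glusterep gluster-poc-sj::glusterep status
```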

To improve replicating speed,
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep as it also uses rsync internally.
2. It's better if gluster volume has more distribute count like 3*3 or 4*3
It will help in two ways.
1. The files get distributed on the master to multiple bricks
2. This will help geo-rep, as files on multiple bricks are synced in parallel (multiple Actives)

NOTE: Gluster master server and one client is in Noida, India Location.
Gluster Slave server and one client is in USA.

Our goal is for data transferred from the Noida gluster client to reach the USA gluster client in minimum time. Please suggest the best approach to achieve this.

[***@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img /glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)

Is this the I/O time to write to the master volume?

sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
[***@noi-dcops ~]#



[***@gluster-poc-noida gluster]# gluster volume geo-replication status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-30 13:42:18
noi-poc-gluster glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Passive N/A N/A
[***@gluster-poc-noida gluster]#

Thanks in advance for your all time support.

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Thursday, August 30, 2018 10:51 AM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster " as well?
If not please fix it. Please share the geo-rep log this node if it's
as different issue.
-Kotresh HR

On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Thank you so much for your input. Geo-replication is now showing “Active” for at least one master node, but it is still in a Faulty state for the 2nd master server.

Below is the detail.

[***@gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-29 23:56:06
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A


[***@gluster-poc-noida glusterfs]# gluster volume status
Status of volume: glusterep
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0 49152 0 Y 22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0 Y 19471
Self-heal Daemon on localhost N/A N/A Y 32087
Self-heal Daemon on noi-poc-gluster N/A N/A Y 6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks



[***@gluster-poc-noida glusterfs]# gluster volume info

Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[***@gluster-poc-noida glusterfs]#

Kotresh Hiremath Ravishankar
2018-09-03 04:48:40 UTC
Permalink
Hi Krishna,

Indexing is the feature used by the hybrid crawl, which only makes the crawl faster.
It has nothing to do with the missing data sync.
Could you please share the complete log file of the session where the issue is
encountered?

Thanks,
Kotresh HR
Post by Krishna Verma
Hi Kotresh/Support,
Request your help to get it fix. My slave is not getting sync with master.
When I restart the session after doing the indexing off then only it shows
the file at slave but that is also blank with zero size.
At master: file size is 5.8 GB.
17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
But at slave, after doing the “indexing off” and restart the session and
then wait for 2 days. It shows only 4.9 GB copied.
17020_GPLV3.tar.gz
4.9G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
Similarly, I tested for small file of size 1.2 GB only that is still
showing “0” size at slave after days waiting time.
1.2G rflowTestInt18.08-b001.t.Z
0 rflowTestInt18.08-b001.t.Z
Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
Please help to fix, I believe its not a normal behavior of gluster rsync.
/Krishna
*From:* Krishna Verma
*Sent:* Friday, August 31, 2018 12:42 PM
*Subject:* RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Hi Kotresh,
I have tested the geo replication over distributed volumes with 2*2 gluster setup.
gluster-poc-sj::glusterdist status
MASTER NODE MASTER VOL MASTER BRICK SLAVE
USER SLAVE SLAVE NODE STATUS CRAWL
STATUS LAST_SYNCED
------------------------------------------------------------
------------------------------------------------------------
---------------------------------------------------------
gluster-poc-noida glusterdist /data/gluster-dist/distvol
root gluster-poc-sj::glusterdist gluster-poc-sj Active
Changelog Crawl 2018-08-31 10:28:19
noi-poc-gluster glusterdist /data/gluster-dist/distvol
root gluster-poc-sj::glusterdist gluster-poc-sj2 Active
History Crawl N/A
Not at client I copied a 848MB file from local disk to master mounted
volume and it took only 1 minute and 15 seconds. Its great
.
But even after waited for 2 hrs I was unable to see that file at slave
site. Then I again erased the indexing by doing “gluster volume set
glusterdist indexing off” and restart the session. Magically I received
the file instantly at slave after doing this.
Why I need to do “indexing off” every time to reflect data at slave site?
Is there any fix/workaround of it?
/Krishna
*Sent:* Friday, August 31, 2018 10:10 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Kotresh,
Yes, this include the time take to write 1GB file to master. geo-rep was
not stopped while the data was copying to master.
This way, you can't really measure how much time geo-rep took.
But now I am trouble, My putty session was timed out while copying data to
master and geo replication was active. After I restart putty session My
Master data is not syncing with slave. Its Last_synced time is 1hrs behind
the current time.
I restart the geo rep and also delete and again create the session but its
“LAST_SYNCED” time is same.
Unless geo-rep is Faulty, it would be processing/syncing. You should
check the logs for any errors.
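When LAST_SYNCED appears stuck, the lag can be quantified by comparing it with the current time. A minimal sketch, assuming GNU `date`; both timestamps here are hypothetical (in practice the second one would be the live clock):

```shell
# Convert both timestamps to epoch seconds and subtract to get the lag.
last_synced="2018-08-30 13:42:18"    # copied from the status output
now="2018-08-30 14:42:18"            # hypothetical; normally $(date '+%F %T')
lag=$(( $(date -d "$now" +%s) - $(date -d "$last_synced" +%s) ))
echo "slave is ${lag}s behind"       # prints: slave is 3600s behind
```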
Please help in this.

. It's better if gluster volume has more distribute count like 3*3 or
4*3 :- Are you referring to creating a distributed volume with 3 master
nodes and 3 slave nodes?
Yes, that's correct. Please do the test with this. I recommend you run
the actual workload for which you are planning to use gluster instead of
copying a 1GB file and testing.
/krishna
*Sent:* Thursday, August 30, 2018 3:20 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Kotresh,
After fixing the library link on node "noi-poc-gluster", the status of one
master node is “Active” and the other is “Passive”. Can I set up both
masters as “Active”?
Nope, since it's replica, it's redundant to sync same files from two
nodes. Both replicas can't be Active.
Also, when I copy a 1GB file from a gluster client to the master gluster
volume, which is replicated to the slave volume, it took 35 minutes and
49 seconds. Is there any way to reduce the time taken to rsync the data?
How did you measure this time? Does it include the time taken for you to
write the 1GB file to master?
There are two aspects to consider while measuring this.
1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.
In your case, since the setup is 1*2 and only one geo-rep worker is
Active, Step2 above equals to time for step1 + network transfer time.
You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master. It's one way.
2. Or stop geo-rep until the 1GB file is written to master and then start
geo-rep to get actual geo-rep time.
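Scenario 2 above could look like the following session sketch; the commands reuse this thread's volume and session names, and the polling step is only illustrative:

```shell
# 1. Stop geo-rep so only the write to master is measured
gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
time cp /tmp/gentoo_root.img /glusterfs/    # time to write to master
date '+%F %T'                               # note this timestamp
# 2. Start geo-rep; the time until LAST_SYNCED passes the noted
#    timestamp is the pure geo-rep transfer time
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
watch -n 30 'gluster volume geo-replication glusterep gluster-poc-sj::glusterep status'
```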
To improve replicating speed,
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep as it also uses rsync internally.
2. It's better if the gluster volume has a higher distribute count, like 3*3 or 4*3.
It will help in two ways:
1. The files get distributed on master to multiple bricks
2. Files on multiple bricks are then synced in parallel (multiple Actives)
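For point 1 above, geo-rep exposes the rsync flags through its session config; a sketch using this thread's session names (the flag value is illustrative, not a recommendation):

```shell
# Extra flags are passed to the rsync that geo-rep spawns internally.
gluster volume geo-replication glusterep gluster-poc-sj::glusterep config rsync-options "--compress"
# Verify the setting:
gluster volume geo-replication glusterep gluster-poc-sj::glusterep config | grep rsync
```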
NOTE: Gluster master server and one client is in Noida, India Location.
Gluster Slave server and one client is in USA.
Our goal is for data from the Noida gluster client to reach the USA gluster
client in minimum time. Please suggest the best approach to achieve this.
date ; rsync -avh --progress /tmp/gentoo_root.img /glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)
Is this I/O time to write to master volume?
sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
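The figures above are consistent with each other; a quick sanity check of the reported rate (awk arithmetic, wall-clock times taken from the two date stamps):

```shell
# 12:26:26 -> 13:02:15 elapsed, and 1.07 GB moved in that window.
awk 'BEGIN {
    secs = (13*3600 + 2*60 + 15) - (12*3600 + 26*60 + 26)  # elapsed seconds
    printf "%d s elapsed, %.2f KB/s\n", secs, 1.07e9 / secs / 1000
}'
# prints: 2149 s elapsed, 497.91 KB/s  (close to rsync's reported 499.65K bytes/sec)
```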
MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                              SLAVE NODE        STATUS     CRAWL STATUS       LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Active     Changelog Crawl    2018-08-30 13:42:18
noi-poc-gluster      glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Passive    N/A                N/A
Thanks in advance for your all time support.
/Krishna
*Sent:* Thursday, August 30, 2018 10:51 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's
a different issue.
-Kotresh HR
Hi Kotresh,
Thank you so much for your input. Geo-replication is now showing “Active”
at least for 1 master node, but it is still in a Faulty state for the 2nd
master server.
Below is the detail.
gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE        STATUS    CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    gluster-poc-sj    Active    Changelog Crawl    2018-08-29 23:56:06
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A               Faulty    N/A                N/A
gluster volume status
Status of volume: glusterep
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0    49152     0          Y       22463
Brick noi-poc-gluster:/data/gluster/gv0      49152     0          Y       19471
Self-heal Daemon on localhost                N/A       N/A        Y       32087
Self-heal Daemon on noi-poc-gluster          N/A       N/A        Y       6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks
gluster volume info

Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
Could you please help me with that as well?
It would be really a great help from your side.
/Krishna
*Sent:* Wednesday, August 29, 2018 10:47 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Answer inline
Hi Kotresh,
I created the links before. Below is the detail.
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so ->
/usr/lib64/libgfchangelog.so.1
The link created is pointing to the wrong library. Please fix it:
#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
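The same fix can be rehearsed safely in a scratch directory before touching /usr/lib64; a minimal sketch (the touched file merely stands in for the real library):

```shell
# Rehearse the symlink repair in a temp dir instead of /usr/lib64.
libdir=$(mktemp -d)
touch "$libdir/libgfchangelog.so.0.0.1"                       # stand-in for the real library
ln -s libgfchangelog.so.0.0.1 "$libdir/libgfchangelog.so.0"
ln -sfn libgfchangelog.so.0.0.1 "$libdir/libgfchangelog.so"   # -f replaces a wrong existing link
readlink "$libdir/libgfchangelog.so"                          # prints: libgfchangelog.so.0.0.1
```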
lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
Does it look like what we need, or do I need to create more links?
How do I get the “libgfchangelog.so” file if it is missing?
/Krishna
*Sent:* Tuesday, August 28, 2018 4:22 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so" which
is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is
present in "/usr/lib64/".
If not create a symlink similar to "libgfchangelog.so.0"
It should be something like below.
#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
Hi Kotresh,
Thanks for the response, I did that also but nothing changed.
ldconfig /usr/lib64
ldconfig -p | grep libgfchangelog
        libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
/Krishna
*Sent:* Tuesday, August 28, 2018 4:00 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I made the changes given in the patch and restarted the geo-replication
session, but the same errors appear in the logs again.
I am attaching the config files and logs here.
gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
gluster volume geo-replication glusterep gluster-poc-sj::glusterep delete
Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
gluster volume geo-replication glusterep gluster-poc-sj::glusterep create push-pem force
Creating geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
syncdaemon/repce.py
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
Post by Krishna Verma
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Post by Krishna Verma
EXTERNAL MAIL
With the same log message?
Can you please verify that the patch at
https://review.gluster.org/#/c/glusterfs/+/20207/
is present, and if not, can you please apply it?
Post by Krishna Verma
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.
Post by Krishna Verma
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response, I tried both, but still I am getting the
same error.
Post by Krishna Verma
Post by Krishna Verma
lrwxrwxrwx. 1 root root 30 Aug 28 14:59 /usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solves your problem you can do manual symlink
like - ln -s /usr/lib64/libgfchangelog.so.1
/usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
[2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] RepceClient: call failed call=26469:139794524604224:1535440781.44 method=init error=OSError
Post by Krishna Verma
[2018-08-28 07:19:41.447041] E [syncdutils(worker /data/gluster/gv0):330:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
    func(args)
  File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 72, in subcmd_worker
    local.service_loop(remote)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1236, in service_loop
    changelog_agent.init()
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__
    return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__
    raise res
OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
[2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
[2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor] Monitor: worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What can I do to fix it?
/Krish
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
Krishna Verma
2018-09-03 07:12:01 UTC
Permalink
Hi Kotresh,

Please find the log files attached.

Request you to please have a look.

/Krishna



From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Monday, September 3, 2018 10:19 AM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
Indexing is a feature used by the hybrid crawl, which only makes the crawl faster. It has nothing to do with the missing data sync.
Could you please share the complete log file of the session where the issue is encountered ?
Thanks,
Kotresh HR

On Mon, Sep 3, 2018 at 9:33 AM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh/Support,

Request your help to get this fixed. My slave is not syncing with the master. Only when I restart the session after turning indexing off does the file appear at the slave, and even then it is blank, with zero size.

At master: file size is 5.8 GB.

[***@gluster-poc-noida distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@gluster-poc-noida distvol]#

But at the slave, after turning indexing off, restarting the session, and then waiting for 2 days, it shows only 4.9 GB copied.

[***@gluster-poc-sj distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
4.9G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@gluster-poc-sj distvol]#

Similarly, I tested with a smaller file of only 1.2 GB, which still shows “0” size at the slave after days of waiting.

At Master:

[***@gluster-poc-noida distvol]# du -sh rflowTestInt18.08-b001.t.Z
1.2G rflowTestInt18.08-b001.t.Z
[***@gluster-poc-noida distvol]#

At Slave:

[***@gluster-poc-sj distvol]# du -sh rflowTestInt18.08-b001.t.Z
0 rflowTestInt18.08-b001.t.Z
[***@gluster-poc-sj distvol]#

Below is my distributed volume info :

[***@gluster-poc-noida distvol]# gluster volume info glusterdist

Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
[***@gluster-poc-noida distvol]#

Please help to fix this; I believe this is not normal behavior for gluster's rsync-based sync.

/Krishna
From: Krishna Verma
Sent: Friday, August 31, 2018 12:42 PM
To: 'Kotresh Hiremath Ravishankar' <***@redhat.com<mailto:***@redhat.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Kotresh,

I have tested the geo replication over distributed volumes with 2*2 gluster setup.

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj Active Changelog Crawl 2018-08-31 10:28:19
noi-poc-gluster glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj2 Active History Crawl N/A
[***@gluster-poc-noida ~]#

Not at client I copied a 848MB file from local disk to master mounted volume and it took only 1 minute and 15 seconds. Its great
.

But even after waited for 2 hrs I was unable to see that file at slave site. Then I again erased the indexing by doing “gluster volume set glusterdist indexing off” and restart the session. Magically I received the file instantly at slave after doing this.

Why I need to do “indexing off” every time to reflect data at slave site? Is there any fix/workaround of it?

/Krishna


From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Friday, August 31, 2018 10:10 AM
To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 3:51 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Yes, this include the time take to write 1GB file to master. geo-rep was not stopped while the data was copying to master.

This way, you can't really measure how much time geo-rep took.


But now I am trouble, My putty session was timed out while copying data to master and geo replication was active. After I restart putty session My Master data is not syncing with slave. Its Last_synced time is 1hrs behind the current time.

I restart the geo rep and also delete and again create the session but its “LAST_SYNCED” time is same.

Unless, geo-rep is Faulty, it would be processing/syncing. You should check logs for any errors.


Please help in this.


. It's better if gluster volume has more distribute count like 3*3 or 4*3 :- Are you refereeing to create a distributed volume with 3 master node and 3 slave node?

Yes, that's correct. Please do the test with this. I recommend you to run the actual workload for which you are planning to use gluster instead of copying 1GB file and testing.



/krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Thursday, August 30, 2018 3:20 PM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

After fix the library link on node "noi-poc-gluster ", the status of one mater node is “Active” and another is “Passive”. Can I setup both the master as “Active” ?

Nope, since it's replica, it's redundant to sync same files from two nodes. Both replicas can't be Active.


Also, when I copy a 1GB size of file from gluster client to master gluster volume which is replicated with the slave volume, it tooks 35 minutes and 49 seconds. Is there any way to reduce its time taken to rsync data.

How did you measure this time? Does this include the time take for you to write 1GB file to master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active, Step2 above equals to time for step1 + network transfer time.

You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master. It's one way.
2. Or stop geo-rep until the 1GB file is written to master and then start geo-rep to get actual geo-rep time.

To improve replicating speed,
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep as it also uses rsync internally.
2. It's better if gluster volume has more distribute count like 3*3 or 4*3
It will help in two ways.
1. The files gets distributed on master to multiple bricks
2. So above will help geo-rep as files on multiple bricks are synced in parallel (multiple Actives)

NOTE: Gluster master server and one client is in Noida, India Location.
Gluster Slave server and one client is in USA.

Our approach is to transfer data from Noida gluster client will reach to the USA gluster client in a minimum time. Please suggest the best approach to achieve it.

[***@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img /glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)

Is this I/O time to write to master volume?

sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
[***@noi-dcops ~]#



[***@gluster-poc-noida gluster]# gluster volume geo-replication status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-30 13:42:18
noi-poc-gluster glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Passive N/A N/A
[***@gluster-poc-noida gluster]#

Thanks in advance for your all time support.

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Thursday, August 30, 2018 10:51 AM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster " as well?
If not please fix it. Please share the geo-rep log this node if it's
as different issue.
-Kotresh HR

On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Thank you so much for you input. Geo-replication is now showing “Active” atleast for 1 master node. But its still at faulty state for the 2nd master server.

Below is the detail.

[***@gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-29 23:56:06
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A


[***@gluster-poc-noida glusterfs]# gluster volume status
Status of volume: glusterep
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0 49152 0 Y 22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0 Y 19471
Self-heal Daemon on localhost N/A N/A Y 32087
Self-heal Daemon on noi-poc-gluster N/A N/A Y 6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks



[***@gluster-poc-noida glusterfs]# gluster volume info

Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[***@gluster-poc-noida glusterfs]#

Could you please help me in that also please?

It would be really a great help from your side.

/Krishna
From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Wednesday, August 29, 2018 10:47 AM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Answer inline

On Tue, Aug 28, 2018 at 4:28 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

I created the links before. Below is the detail.

[***@gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1

The link created is pointing to wrong library. Please fix this

#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so

lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]# locate libgfchangelog.so
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]#

Is it looks good what we exactly need or di I need to create any more link or How to get “libgfchangelog.so” file if missing.

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Tuesday, August 28, 2018 4:22 PM
To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so" which is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is present in "/usr/lib64/".
If not create a symlink similar to "libgfchangelog.so.0"

It should be something like below.

#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la<https://urldefense.proofpoint.com/v2/url?u=http-3A__libgfchangelog.la&d=DwMFaQ&c=aUq983L2pue2FqKFoP6PGHMJQyoJ7kl3s3GZ-_haXqY&r=0E5nRoxLsT2ZXgCpJM_6ZItAWQ2jH8rVLG6tiXhoLFE&m=77GIqpHy9HY8RQd6lKzSJ-Z1PCuIhZJ3I3IvIuDX-xo&s=kIFnrBaSFV_DdqZezd6PXcDnD8Iy_gVN69ETZYtykEE&e=>
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1

On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Thanks for the response, I did that also but nothing changed.

[***@gluster-poc-noida ~]# ldconfig /usr/lib64
[***@gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
[***@gluster-poc-noida ~]#

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
[***@gluster-poc-noida ~]#

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Tuesday, August 28, 2018 4:00 PM
To: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>
Cc: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog


On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar <***@redhat.com<mailto:***@redhat.com>> wrote:
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I did the mentioned changes given in patch and restart the session for geo-replication. But again same errors in the logs.
I have attaching the config files and logs here.
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Creating geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
With the same log message?
Can you please verify that the patch https://review.gluster.org/#/c/glusterfs/+/20207/ is present; if not, please apply it,
and try symlinking: ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so.
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response, I tried both, but still I am getting the same error.
/usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
/usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solve your problem, you can create the symlink manually:
ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] RepceClient: call failed call=26469:139794524604224:1535440781.44 method=init error=OSError
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What can I do to fix it?
/Krish
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
Kotresh Hiremath Ravishankar
2018-09-03 07:14:29 UTC
Permalink
Hi Krishna,

The log is not complete. If you are re-trying, could you please try it out
on 4.1.3 and share the logs.

Thanks,
Kotresh HR
Post by Krishna Verma
Hi Kotresh,
Please find the log files attached.
Request you to please have a look.
/Krishna
*Sent:* Monday, September 3, 2018 10:19 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Hi Krishna,
Indexing is the feature used by Hybrid crawl which only makes crawl
faster. It has nothing to do with missing data sync.
Could you please share the complete log file of the session where the
issue is encountered ?
Thanks,
Kotresh HR
Hi Kotresh/Support,
Request your help to get this fixed. My slave is not syncing with the master.
Only when I restart the session after turning indexing off does the file show up at the slave, and even then it is blank, with zero size.
At master: file size is 5.8 GB.
17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
But at the slave, after turning indexing off, restarting the session, and then waiting for 2 days, only 4.9 GB has been copied.
17020_GPLV3.tar.gz
4.9G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
Similarly, I tested with a smaller 1.2 GB file, which still shows size 0 at the slave after days of waiting.
1.2G rflowTestInt18.08-b001.t.Z
0 rflowTestInt18.08-b001.t.Z
Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
Please help me fix this; I believe this is not normal gluster rsync behavior.
/Krishna
*From:* Krishna Verma
*Sent:* Friday, August 31, 2018 12:42 PM
*Subject:* RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Hi Kotresh,
I have tested the geo replication over distributed volumes with 2*2 gluster setup.
gluster-poc-sj::glusterdist status
MASTER NODE MASTER VOL MASTER BRICK SLAVE
USER SLAVE SLAVE NODE STATUS CRAWL
STATUS LAST_SYNCED
------------------------------------------------------------
------------------------------------------------------------
---------------------------------------------------------
gluster-poc-noida glusterdist /data/gluster-dist/distvol
root gluster-poc-sj::glusterdist gluster-poc-sj Active
Changelog Crawl 2018-08-31 10:28:19
noi-poc-gluster glusterdist /data/gluster-dist/distvol
root gluster-poc-sj::glusterdist gluster-poc-sj2 Active
History Crawl N/A
Now at a client I copied an 848 MB file from local disk to the master-mounted volume and it took only 1 minute and 15 seconds. That's great.
But even after waiting for 2 hrs I was unable to see that file at the slave site. Then I again cleared the indexing by running "gluster volume set glusterdist indexing off" and restarted the session. Magically, I received the file at the slave instantly after doing this.
Why do I need to set "indexing off" every time for data to appear at the slave site? Is there any fix or workaround for this?
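For reference, the workaround sequence described above, written out as a sketch (volume and slave names taken from this thread; turning indexing off discards the index and forces a fresh crawl, so treat this as a diagnostic step, not a steady-state fix):

```shell
# Workaround as described in this thread (not a recommended permanent fix).
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist stop
gluster volume set glusterdist indexing off
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist start
```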
/Krishna
*Sent:* Friday, August 31, 2018 10:10 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Hi Kotresh,
Yes, this includes the time taken to write the 1GB file to master; geo-rep was not stopped while the data was copying to master.
This way, you can't really measure how much time geo-rep took.
But now I am in trouble. My PuTTY session timed out while data was copying to the master and geo-replication was active. After I restarted the PuTTY session, my master data is not syncing with the slave; its LAST_SYNCED time is 1 hr behind the current time.
I restarted geo-rep and also deleted and re-created the session, but its LAST_SYNCED time is the same.
Unless geo-rep is Faulty, it would be processing/syncing. You should check the logs for any errors.
Please help in this.

Regarding "It's better if the gluster volume has a higher distribute count like 3*3 or 4*3": are you referring to creating a distributed volume with 3 master nodes and 3 slave nodes?
Yes, that's correct. Please do the test with this. I recommend you run the actual workload you are planning to use gluster for, instead of copying a 1GB file and testing.
/krishna
*Sent:* Thursday, August 30, 2018 3:20 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Hi Kotresh,
After fixing the library link on node "noi-poc-gluster", the status of one master node is "Active" and the other is "Passive". Can I set both masters to "Active"?
Nope, since it's a replica, it's redundant to sync the same files from two nodes. Both replicas can't be Active.
Also, when I copy a 1GB file from the gluster client to the master gluster volume, which is geo-replicated to the slave volume, it took 35 minutes and 49 seconds. Is there any way to reduce the time taken to rsync the data?
How did you measure this time? Does this include the time taken for you to write the 1GB file to master?
There are two aspects to consider while measuring this.
1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.
In your case, since the setup is 1*2 and only one geo-rep worker is Active, step 2 above equals the time for step 1 plus network transfer time.
You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master. It's one way.
2. Or stop geo-rep until the 1GB file is written to master and then start
geo-rep to get actual geo-rep time.
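The second scenario can be scripted roughly like this (volume names follow this thread's setup; /mnt/master and /mnt/slave are assumed client mount points, and polling with `cmp` is just one simple way to detect arrival):

```shell
# Sketch: measure pure geo-rep transfer time (scenario 2 above).
# Assumes the master volume is mounted at /mnt/master and the slave
# volume at /mnt/slave on clients reachable from this shell.
gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
cp bigfile /mnt/master/                    # write finishes with geo-rep stopped
start=$(date +%s)
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
until cmp -s /mnt/master/bigfile /mnt/slave/bigfile; do sleep 10; done
echo "geo-rep transfer time: $(( $(date +%s) - start ))s"
```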
To improve replicating speed,
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep as it also uses rsync internally.
2. It's better if gluster volume has more distribute count like 3*3 or 4*3
It will help in two ways.
1. The files get distributed on the master to multiple bricks
2. So above will help geo-rep as files on multiple bricks are
synced in parallel (multiple Actives)
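If I read the geo-rep config interface correctly, point 1 above can be tried via the session's `rsync-options` setting; the option string below is only an illustration, so measure before and after changing it:

```shell
# Sketch: pass extra rsync flags to geo-replication (assumes the
# rsync-options config key; verify against your gluster version's docs).
gluster volume geo-replication glusterep gluster-poc-sj::glusterep \
    config rsync-options "--compress"
# Inspect the value now in effect:
gluster volume geo-replication glusterep gluster-poc-sj::glusterep \
    config rsync-options
```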
NOTE: The gluster master server and one client are in Noida, India. The gluster slave server and one client are in the USA.
Our goal is for data written from the Noida gluster client to reach the USA gluster client in minimum time. Please suggest the best approach to achieve this.
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)
Is this I/O time to write to master volume?
sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
MASTER NODE        MASTER VOL  MASTER BRICK       SLAVE USER  SLAVE                            SLAVE NODE      STATUS   CRAWL STATUS     LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida  glusterep   /data/gluster/gv0  root        ssh://gluster-poc-sj::glusterep  gluster-poc-sj  Active   Changelog Crawl  2018-08-30 13:42:18
noi-poc-gluster    glusterep   /data/gluster/gv0  root        ssh://gluster-poc-sj::glusterep  gluster-poc-sj  Passive  N/A              N/A
Thanks in advance for your continued support.
/Krishna
*Sent:* Thursday, August 30, 2018 10:51 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's a different issue.
-Kotresh HR
Hi Kotresh,
Thank you so much for your input. Geo-replication is now showing "Active" for at least one master node, but it is still in the Faulty state for the 2nd master server.
Below is the detail.
glusterep gluster-poc-sj::glusterep status
MASTER NODE        MASTER VOL  MASTER BRICK       SLAVE USER  SLAVE                      SLAVE NODE      STATUS  CRAWL STATUS     LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida  glusterep   /data/gluster/gv0  root        gluster-poc-sj::glusterep  gluster-poc-sj  Active  Changelog Crawl  2018-08-29 23:56:06
noi-poc-gluster    glusterep   /data/gluster/gv0  root        gluster-poc-sj::glusterep  N/A             Faulty  N/A              N/A
Status of volume: glusterep
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0  49152     0          Y       22463
Brick noi-poc-gluster:/data/gluster/gv0    49152     0          Y       19471
Self-heal Daemon on localhost              N/A       N/A        Y       32087
Self-heal Daemon on noi-poc-gluster        N/A       N/A        Y       6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks
Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
Could you please help me with that as well? It would really be a great help.
/Krishna
*Sent:* Wednesday, August 29, 2018 10:47 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Answer inline
Hi Kotresh,
I created the links before. Below is the detail.
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so ->
/usr/lib64/libgfchangelog.so.1
The link created is pointing to the wrong library. Please fix it:
#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 ->
libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
Does this look like what we need, or do I need to create more links? And how do I get the "libgfchangelog.so" file if it is missing?
/Krishna
*Sent:* Tuesday, August 28, 2018 4:22 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so", which is what is required.
I only see "libgfchangelog.so.0". Please confirm that "libgfchangelog.so" is present in "/usr/lib64/";
if not, create a symlink similar to "libgfchangelog.so.0".
It should be something like below.
#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
Hi Kotresh,
Thanks for the response, I did that also but nothing changed.
libgfchangelog.so.0 (libc6,x86-64) =>
/usr/lib64/libgfchangelog.so.0
gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
MASTER NODE        MASTER VOL  MASTER BRICK       SLAVE USER  SLAVE                      SLAVE NODE  STATUS  CRAWL STATUS  LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida  glusterep   /data/gluster/gv0  root        gluster-poc-sj::glusterep  N/A         Faulty  N/A           N/A
noi-poc-gluster    glusterep   /data/gluster/gv0  root        gluster-poc-sj::glusterep  N/A         Faulty  N/A           N/A
/Krishna
*Sent:* Tuesday, August 28, 2018 4:00 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I did the mentioned changes given in patch and restart the session for
geo-replication. But again same errors in the logs.
Post by Krishna Verma
I have attaching the config files and logs here.
gluster-poc-sj::glusterep stop
Post by Krishna Verma
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep delete
Post by Krishna Verma
Deleting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep create push-pem force
Post by Krishna Verma
Creating geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
Post by Krishna Verma
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
gluster-poc-sj::glusterep start
Post by Krishna Verma
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
syncdaemon/repce.py
gluster-poc-sj::glusterep start
Post by Krishna Verma
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
Post by Krishna Verma
MASTER NODE        MASTER VOL  MASTER BRICK       SLAVE USER  SLAVE                      SLAVE NODE  STATUS  CRAWL STATUS  LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida  glusterep   /data/gluster/gv0  root        gluster-poc-sj::glusterep  N/A         Faulty  N/A           N/A
noi-poc-gluster    glusterep   /data/gluster/gv0  root        gluster-poc-sj::glusterep  N/A         Faulty  N/A           N/A
Post by Krishna Verma
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Post by Krishna Verma
With the same log message?
Can you please verify that the patch https://review.gluster.org/#/c/glusterfs/+/20207/ is present; if not, please apply it,
Post by Krishna Verma
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.
Post by Krishna Verma
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response, I tried both, but still I am getting the
same error.
Post by Krishna Verma
Post by Krishna Verma
/usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
/usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solve your problem, you can create the symlink manually:
ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.446785] E [repce(worker
/data/gluster/gv0):197:__call__] RepceClient: call failed
call=26469:139794524604224:1535440781.44 method=init
error=OSError
Post by Krishna Verma
Post by Krishna Verma
Post by Krishna Verma
[2018-08-28 07:19:41.447041] E [syncdutils(worker
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line
216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line
198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.457555] I [repce(agent
/data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching
EOF.
Post by Krishna Verma
Post by Krishna Verma
Post by Krishna Verma
[2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor]
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What can I do to fix it?
/Krish
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
Krishna Verma
2018-09-03 07:32:25 UTC
Permalink
Hi Kotresh,

Below is the output of the gsyncd.log file generated on my master server.

And I am using version 4.1.3 on all my gluster nodes.
[***@gluster-poc-noida distvol]# gluster --version | grep glusterfs
glusterfs 4.1.3


[***@gluster-poc-noida distvol]# cat /var/log/glusterfs/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.log
[2018-09-03 04:01:52.424609] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 04:01:52.526323] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:41.326411] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:49.676120] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:50.406042] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:56:52.847537] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:03.778448] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:25.86958] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:25.855273] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:58:09.294239] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:59:39.255487] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:59:39.355753] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:00:26.311767] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:03:29.205226] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:03:30.131258] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:10:34.679677] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:10:35.653928] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:26:24.438854] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:26:25.495117] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.159113] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.216475] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.932451] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.988286] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.992789] E [syncdutils(worker /data/gluster-dist/distvol):305:log_raise_exception] <top>: connection to peer is broken
[2018-09-03 07:27:26.994750] E [syncdutils(worker /data/gluster-dist/distvol):801:errlog] Popen: command returned error cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-X8iHv1/86bcbaf188167a3859c3267081671312.sock gluster-poc-sj /nonexistent/gsyncd slave glusterdist gluster-poc-sj::glusterdist --master-node gluster-poc-noida --master-node-id 098c16c6-8dff-490a-a2e8-c8cb328fcbb3 --master-brick /data/gluster-dist/distvol --local-node gluster-poc-sj --local-node-id e54f2759-4c56-40dd-89e1-e10c3037d48b --slave-timeout 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/local/sbin/ error=255
[2018-09-03 07:27:26.994971] E [syncdutils(worker /data/gluster-dist/distvol):805:logerr] Popen: ssh> Killed by signal 15.
[2018-09-03 07:27:27.7174] I [repce(agent /data/gluster-dist/distvol):80:service_loop] RepceServer: terminating on reaching EOF.
[2018-09-03 07:27:27.15156] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
[2018-09-03 07:27:28.52725] I [gsyncd(monitor-status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:28.64521] I [subcmds(monitor-status):19:subcmd_monitor_status] <top>: Monitor Status Change status=Stopped
[2018-09-03 07:27:35.345937] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:35.444247] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:36.181122] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:36.281459] I [gsyncd(monitor):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:39.782480] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Initializing...
[2018-09-03 07:27:40.321157] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker brick=/data/gluster-dist/distvol slave_node=gluster-poc-sj
[2018-09-03 07:27:40.376172] I [gsyncd(agent /data/gluster-dist/distvol):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:40.377144] I [changelogagent(agent /data/gluster-dist/distvol):72:__init__] ChangelogAgent: Agent listining...
[2018-09-03 07:27:40.378150] I [gsyncd(worker /data/gluster-dist/distvol):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:40.391185] I [resource(worker /data/gluster-dist/distvol):1377:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-09-03 07:27:43.752819] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:43.848619] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:45.365627] I [resource(worker /data/gluster-dist/distvol):1424:connect_remote] SSH: SSH connection between master and slave established. duration=4.9743
[2018-09-03 07:27:45.365866] I [resource(worker /data/gluster-dist/distvol):1096:connect] GLUSTER: Mounting gluster volume locally...
[2018-09-03 07:27:46.388974] I [resource(worker /data/gluster-dist/distvol):1119:connect] GLUSTER: Mounted gluster volume duration=1.0230
[2018-09-03 07:27:46.389206] I [subcmds(worker /data/gluster-dist/distvol):70:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2018-09-03 07:27:48.401196] I [master(worker /data/gluster-dist/distvol):1593:register] _GMaster: Working dir path=/var/lib/misc/gluster/gsyncd/glusterdist_gluster-poc-sj_glusterdist/data-gluster-dist-distvol
[2018-09-03 07:27:48.401477] I [resource(worker /data/gluster-dist/distvol):1282:service_loop] GLUSTER: Register time time=1535959668
[2018-09-03 07:27:49.176095] I [gsyncdstatus(worker /data/gluster-dist/distvol):277:set_active] GeorepStatus: Worker Status Change status=Active
[2018-09-03 07:27:49.177079] I [gsyncdstatus(worker /data/gluster-dist/distvol):249:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl
[2018-09-03 07:27:49.177339] I [master(worker /data/gluster-dist/distvol):1507:crawl] _GMaster: starting history crawl turns=1 stime=(1535701378, 0) entry_stime=(1535701378, 0) etime=1535959669
[2018-09-03 07:27:50.179210] I [master(worker /data/gluster-dist/distvol):1536:crawl] _GMaster: slave's time stime=(1535701378, 0)
[2018-09-03 07:27:51.300096] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:51.399027] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:52.510271] I [master(worker /data/gluster-dist/distvol):1944:syncjob] Syncer: Sync Time Taken duration=1.6146 num_files=1 job=2 return_code=0
[2018-09-03 07:27:52.514487] I [master(worker /data/gluster-dist/distvol):1374:process] _GMaster: Entry Time Taken MKD=0 MKN=0 LIN=0 SYM=0 REN=1 RMD=0 CRE=0 duration=0.2745 UNL=0
[2018-09-03 07:27:52.514615] I [master(worker /data/gluster-dist/distvol):1384:process] _GMaster: Data/Metadata Time Taken SETA=1 SETX=0 meta_duration=0.2691 data_duration=1.7883 DATA=1 XATT=0
[2018-09-03 07:27:52.514844] I [master(worker /data/gluster-dist/distvol):1394:process] _GMaster: Batch Completed changelog_end=1535701379entry_stime=(1535701378, 0) changelog_start=1535701379 stime=(1535701378, 0) duration=2.3353 num_changelogs=1 mode=history_changelog
[2018-09-03 07:27:52.515224] I [master(worker /data/gluster-dist/distvol):1552:crawl] _GMaster: finished history crawl endtime=1535959662 stime=(1535701378, 0) entry_stime=(1535701378, 0)
[2018-09-03 07:28:01.706876] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:28:01.803858] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:28:03.521949] I [master(worker /data/gluster-dist/distvol):1507:crawl] _GMaster: starting history crawl turns=2 stime=(1535701378, 0) entry_stime=(1535701378, 0) etime=1535959683
[2018-09-03 07:28:03.523086] I [master(worker /data/gluster-dist/distvol):1552:crawl] _GMaster: finished history crawl endtime=1535959677 stime=(1535701378, 0) entry_stime=(1535701378, 0)
[2018-09-03 07:28:04.62274] I [gsyncdstatus(worker /data/gluster-dist/distvol):249:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=Changelog Crawl
[***@gluster-poc-noida distvol]#

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Monday, September 3, 2018 12:44 PM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
The log is not complete. If you are re-trying, could you please try it out on 4.1.3 and share the logs.
Thanks,
Kotresh HR

On Mon, Sep 3, 2018 at 12:42 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Please find the log files attached.

Request you to please have a look.

/Krishna



From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Monday, September 3, 2018 10:19 AM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
Indexing is the feature used by hybrid crawl, which only makes the crawl faster. It has nothing to do with missing data sync.
Could you please share the complete log file of the session where the issue is encountered ?
Thanks,
Kotresh HR

On Mon, Sep 3, 2018 at 9:33 AM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh/Support,

Request your help to get this fixed. My slave is not syncing with the master. Only when I restart the session after turning indexing off does the file show up at the slave, and even then it is blank with zero size.

At master: file size is 5.8 GB.

[***@gluster-poc-noida distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@gluster-poc-noida distvol]#

But at the slave, after doing "indexing off", restarting the session, and then waiting for 2 days, it shows only 4.9 GB copied.

[***@gluster-poc-sj distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
4.9G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@gluster-poc-sj distvol]#

Similarly, I tested a smaller file of only 1.2 GB, which still shows size "0" at the slave after days of waiting.

At Master:

[***@gluster-poc-noida distvol]# du -sh rflowTestInt18.08-b001.t.Z
1.2G rflowTestInt18.08-b001.t.Z
[***@gluster-poc-noida distvol]#

At Slave:

[***@gluster-poc-sj distvol]# du -sh rflowTestInt18.08-b001.t.Z
0 rflowTestInt18.08-b001.t.Z
[***@gluster-poc-sj distvol]#

Below is my distributed volume info :

[***@gluster-poc-noida distvol]# gluster volume info glusterdist

Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
[***@gluster-poc-noida distvol]#

Please help to fix this; I believe this is not normal behavior for gluster rsync.

/Krishna
From: Krishna Verma
Sent: Friday, August 31, 2018 12:42 PM
To: 'Kotresh Hiremath Ravishankar' <***@redhat.com<mailto:***@redhat.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Kotresh,

I have tested geo-replication over distributed volumes with a 2*2 gluster setup.

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj Active Changelog Crawl 2018-08-31 10:28:19
noi-poc-gluster glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj2 Active History Crawl N/A
[***@gluster-poc-noida ~]#

Now at the client I copied an 848 MB file from local disk to the mounted master volume, and it took only 1 minute and 15 seconds. That's great.

But even after waiting for 2 hrs I was unable to see that file at the slave site. Then I again erased the indexing by doing "gluster volume set glusterdist indexing off" and restarted the session. Magically, I received the file at the slave instantly after doing this.

Why do I need to do "indexing off" every time for data to show up at the slave site? Is there any fix or workaround for it?

/Krishna


From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Friday, August 31, 2018 10:10 AM
To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 3:51 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Yes, this includes the time taken to write the 1GB file to master. geo-rep was not stopped while the data was being copied to master.

This way, you can't really measure how much time geo-rep took.


But now I am in trouble. My putty session timed out while data was being copied to master and geo-replication was active. After I restarted the putty session, my master data is not syncing with the slave. Its LAST_SYNCED time is 1 hr behind the current time.

I restarted geo-rep and also deleted and re-created the session, but its "LAST_SYNCED" time stays the same.

Unless geo-rep is Faulty, it would be processing/syncing. You should check the logs for any errors.


Please help in this.


Re: "It's better if gluster volume has more distribute count like 3*3 or 4*3" :- Are you referring to creating a distributed volume with 3 master nodes and 3 slave nodes?

Yes, that's correct. Please do the test with this. I recommend you run the actual workload for which you are planning to use gluster instead of copying a 1GB file and testing.
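For illustration, a 3*3 distributed-replicate master volume could be created like this. This is a sketch with hypothetical hostnames (host1..host9) and brick paths, not commands taken from this thread:

```shell
# With "replica 3" and 9 bricks, gluster lays the volume out as
# 3 replica sets of 3 bricks each, i.e. a 3 x 3 distributed-replicate volume.
gluster volume create glusterdist3x3 replica 3 \
    host1:/data/brick/gv0 host2:/data/brick/gv0 host3:/data/brick/gv0 \
    host4:/data/brick/gv0 host5:/data/brick/gv0 host6:/data/brick/gv0 \
    host7:/data/brick/gv0 host8:/data/brick/gv0 host9:/data/brick/gv0
gluster volume start glusterdist3x3
```

With a matching distribute count on the slave, geo-rep can then run one Active worker per replica set, syncing the three distribute subvolumes in parallel.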



/krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Thursday, August 30, 2018 3:20 PM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

After fixing the library link on node "noi-poc-gluster", the status of one master node is "Active" and the other is "Passive". Can I set up both masters as "Active"?

Nope; since it's a replica, it's redundant to sync the same files from two nodes. Both replicas can't be Active.


Also, when I copy a 1GB file from a gluster client to the master gluster volume, which is replicated to the slave volume, it takes 35 minutes and 49 seconds. Is there any way to reduce the time taken to rsync the data?

How did you measure this time? Does this include the time taken for you to write the 1GB file to master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active, step 2 above equals the time for step 1 plus the network transfer time.

You can measure the time in two scenarios:
1. Geo-rep is started while the data is still being written to master. That's one way.
2. Or stop geo-rep until the 1GB file is written to master, then start geo-rep to get the actual geo-rep time.

To improve replication speed:
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep, as it also uses rsync internally.
2. It's better if the gluster volume has a higher distribute count, like 3*3 or 4*3.
It will help in two ways:
1. The files get distributed to multiple bricks on the master.
2. This helps geo-rep, as files on multiple bricks are synced in parallel (multiple Actives).
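Scenario 2 (stop geo-rep, write to master, then start geo-rep and time the sync) can be sketched with the session names from this thread. The polling step is illustrative, and `config rsync-options` is the upstream geo-rep tunable for passing extra rsync flags; verify the option name on your gluster version before relying on it:

```shell
# 1. Stop geo-rep so the write to master is not mixed into the measurement.
gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop

# 2. Write the test file to the master mount point.
rsync -a /tmp/gentoo_root.img /glusterfs/

# 3. Start geo-rep; watch LAST_SYNCED advance to time replication alone.
date +%s
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
watch -n 10 'gluster volume geo-replication glusterep gluster-poc-sj::glusterep status'

# Optional (point 1 above): pass rsync flags through geo-rep, e.g. compression
# for a WAN link. This tunable is per the upstream geo-rep docs, not this thread.
gluster volume geo-replication glusterep gluster-poc-sj::glusterep \
    config rsync-options "--compress"
```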

NOTE: The gluster master server and one client are in Noida, India.
The gluster slave server and one client are in the USA.

Our goal is for data written from the Noida gluster client to reach the USA gluster client in minimum time. Please suggest the best approach to achieve this.

[***@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img /glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)

Is this the I/O time to write to the master volume?

sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
[***@noi-dcops ~]#



[***@gluster-poc-noida gluster]# gluster volume geo-replication status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-30 13:42:18
noi-poc-gluster glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Passive N/A N/A
[***@gluster-poc-noida gluster]#

Thanks in advance for your all time support.

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Thursday, August 30, 2018 10:51 AM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's
a different issue.
-Kotresh HR

On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Thank you so much for your input. Geo-replication is now showing "Active" for at least one master node. But it is still in a Faulty state for the 2nd master server.

Below is the detail.

[***@gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-29 23:56:06
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A


[***@gluster-poc-noida glusterfs]# gluster volume status
Status of volume: glusterep
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0 49152 0 Y 22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0 Y 19471
Self-heal Daemon on localhost N/A N/A Y 32087
Self-heal Daemon on noi-poc-gluster N/A N/A Y 6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks



[***@gluster-poc-noida glusterfs]# gluster volume info

Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[***@gluster-poc-noida glusterfs]#

Could you please help me with that as well?

It would be really a great help from your side.

/Krishna
From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Wednesday, August 29, 2018 10:47 AM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Answer inline

On Tue, Aug 28, 2018 at 4:28 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

I created the links before. Below is the detail.

[***@gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1

The link created is pointing to wrong library. Please fix this

#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so

lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]# locate libgfchangelog.so
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]#

Does this look like what we need, or do I need to create any more links? And how do I get the "libgfchangelog.so" file if it is missing?

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Tuesday, August 28, 2018 4:22 PM
To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so" which is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is present in "/usr/lib64/".
If not create a symlink similar to "libgfchangelog.so.0"

It should be something like below.

#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
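The repair itself is two symlinks plus a linker-cache refresh. The sketch below reproduces the pattern in a throwaway directory with a dummy file, so it can be tried without touching `/usr/lib64`; on the real node the same `ln -sf` commands target `/usr/lib64/libgfchangelog.so.0.0.1`, followed by `ldconfig /usr/lib64`:

```shell
# Demonstrate the symlink layout shown above: both .so and .so.0
# must resolve to the real versioned library, libgfchangelog.so.0.0.1.
dir=$(mktemp -d)
cd "$dir"
touch libgfchangelog.so.0.0.1                  # stand-in for the real library
ln -sf libgfchangelog.so.0.0.1 libgfchangelog.so.0
ln -sf libgfchangelog.so.0.0.1 libgfchangelog.so
readlink libgfchangelog.so                     # prints libgfchangelog.so.0.0.1
# On the real system, refresh the dynamic linker cache and confirm:
#   ldconfig /usr/lib64
#   ldconfig -p | grep libgfchangelog
```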

On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Thanks for the response, I did that also but nothing changed.

[***@gluster-poc-noida ~]# ldconfig /usr/lib64
[***@gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
[***@gluster-poc-noida ~]#

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
[***@gluster-poc-noida ~]#

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Tuesday, August 28, 2018 4:00 PM
To: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>
Cc: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog


On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar <***@redhat.com<mailto:***@redhat.com>> wrote:
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I did the mentioned changes given in patch and restart the session for geo-replication. But again same errors in the logs.
I have attaching the config files and logs here.
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Creating geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
With same log message ?
Can you please verify that
https://review.gluster.org/#/c/glusterfs/+/20207/ patch is present; if not, can you please apply that.
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so.
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response, I tried both, but still I am getting the same error.
/usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
/usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solves your problem you can do manual symlink
like - ln -s /usr/lib64/libgfchangelog.so.1
/usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] RepceClient: call failed call=26469:139794524604224:1535440781.44 method=init error=OSError
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What I can do to fix it ?
/Krish
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
Gluster-***@gluster.org<mailto:Gluster-***@gluster.org>
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
Kotresh Hiremath Ravishankar
2018-09-03 09:47:05 UTC
Permalink
Hi krishna,

I see no errors in the shared logs. The only error messages I see are during
geo-rep stop. That is expected.
Could you share the steps you used to create the geo-rep setup?

Thanks,
Kotresh HR
Post by Krishna Verma
Hi Kotesh,
Below is the cat output of the gsyncd.log file generated on my master server.
And I am using version 4.1.3 on all my gluster nodes.
glusterfs 4.1.3
replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.log
[2018-09-03 04:01:52.424609] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 04:01:52.526323] I [gsyncd(status):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-
replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:41.326411] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:49.676120] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:50.406042] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:56:52.847537] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:03.778448] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:25.86958] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:25.855273] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:58:09.294239] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:59:39.255487] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:59:39.355753] I [gsyncd(status):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-
replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:00:26.311767] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:03:29.205226] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:03:30.131258] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:10:34.679677] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:10:35.653928] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:26:24.438854] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:26:25.495117] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.159113] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.216475] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.932451] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.988286] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.992789] E [syncdutils(worker
/data/gluster-dist/distvol):305:log_raise_exception] <top>: connection to
peer is broken
[2018-09-03 07:27:26.994750] E [syncdutils(worker
/data/gluster-dist/distvol):801:errlog] Popen: command returned error
cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto
-S /tmp/gsyncd-aux-ssh-X8iHv1/86bcbaf188167a3859c3267081671312.sock
gluster-poc-sj /nonexistent/gsyncd slave glusterdist
gluster-poc-sj::glusterdist --master-node gluster-poc-noida
--master-node-id 098c16c6-8dff-490a-a2e8-c8cb328fcbb3 --master-brick
/data/gluster-dist/distvol --local-node gluster-poc-sj --local-node-id
e54f2759-4c56-40dd-89e1-e10c3037d48b --slave-timeout 120
--slave-log-level INFO --slave-gluster-log-level INFO
--slave-gluster-command-dir /usr/local/sbin/ error=255
[2018-09-03 07:27:26.994971] E [syncdutils(worker
/data/gluster-dist/distvol):805:logerr] Popen: ssh> Killed by signal 15.
[2018-09-03 07:27:27.7174] I [repce(agent /data/gluster-dist/distvol):80:service_loop]
RepceServer: terminating on reaching EOF.
[2018-09-03 07:27:27.15156] I [gsyncdstatus(monitor):244:set_worker_status]
GeorepStatus: Worker Status Change status=Faulty
Using session config file path=/var/lib/glusterd/geo-
replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:28.64521] I [subcmds(monitor-status):19:subcmd_monitor_status]
<top>: Monitor Status Change status=Stopped
[2018-09-03 07:27:35.345937] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:35.444247] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:36.181122] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:36.281459] I [gsyncd(monitor):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:39.782480] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Initializing...
[2018-09-03 07:27:40.321157] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker brick=/data/gluster-dist/distvol slave_node=gluster-poc-sj
[2018-09-03 07:27:40.376172] I [gsyncd(agent /data/gluster-dist/distvol):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:40.377144] I [changelogagent(agent /data/gluster-dist/distvol):72:__init__] ChangelogAgent: Agent listining...
[2018-09-03 07:27:40.378150] I [gsyncd(worker /data/gluster-dist/distvol):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:40.391185] I [resource(worker /data/gluster-dist/distvol):1377:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-09-03 07:27:43.752819] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:43.848619] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:45.365627] I [resource(worker /data/gluster-dist/distvol):1424:connect_remote] SSH: SSH connection between master and slave established. duration=4.9743
[2018-09-03 07:27:45.365866] I [resource(worker /data/gluster-dist/distvol):1096:connect] GLUSTER: Mounting gluster volume locally...
[2018-09-03 07:27:46.388974] I [resource(worker /data/gluster-dist/distvol):1119:connect] GLUSTER: Mounted gluster volume duration=1.0230
[2018-09-03 07:27:46.389206] I [subcmds(worker /data/gluster-dist/distvol):70:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2018-09-03 07:27:48.401196] I [master(worker /data/gluster-dist/distvol):1593:register] _GMaster: Working dir path=/var/lib/misc/gluster/gsyncd/glusterdist_gluster-poc-sj_glusterdist/data-gluster-dist-distvol
[2018-09-03 07:27:48.401477] I [resource(worker /data/gluster-dist/distvol):1282:service_loop] GLUSTER: Register time time=1535959668
[2018-09-03 07:27:49.176095] I [gsyncdstatus(worker /data/gluster-dist/distvol):277:set_active] GeorepStatus: Worker Status Change status=Active
[2018-09-03 07:27:49.177079] I [gsyncdstatus(worker /data/gluster-dist/distvol):…] GeorepStatus: Worker Crawl Status Change status=History Crawl
[2018-09-03 07:27:49.177339] I [master(worker /data/gluster-dist/distvol):1507:crawl] _GMaster: starting history crawl turns=1 stime=(1535701378, 0) entry_stime=(1535701378, 0) etime=1535959669
[2018-09-03 07:27:50.179210] I [master(worker /data/gluster-dist/distvol):1536:crawl] _GMaster: slave's time stime=(1535701378, 0)
[2018-09-03 07:27:51.300096] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:51.399027] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:52.510271] I [master(worker /data/gluster-dist/distvol):1944:syncjob] Syncer: Sync Time Taken duration=1.6146 num_files=1 job=2 return_code=0
[2018-09-03 07:27:52.514487] I [master(worker /data/gluster-dist/distvol):1374:process] _GMaster: Entry Time Taken MKD=0 MKN=0 LIN=0 SYM=0 REN=1 RMD=0 CRE=0 duration=0.2745 UNL=0
[2018-09-03 07:27:52.514615] I [master(worker /data/gluster-dist/distvol):1384:process] _GMaster: Data/Metadata Time Taken SETA=1 SETX=0 meta_duration=0.2691 data_duration=1.7883 DATA=1 XATT=0
[2018-09-03 07:27:52.514844] I [master(worker /data/gluster-dist/distvol):1394:process] _GMaster: Batch Completed changelog_end=1535701379 entry_stime=(1535701378, 0) changelog_start=1535701379 stime=(1535701378, 0) duration=2.3353 num_changelogs=1 mode=history_changelog
[2018-09-03 07:27:52.515224] I [master(worker /data/gluster-dist/distvol):1552:crawl] _GMaster: finished history crawl endtime=1535959662 stime=(1535701378, 0) entry_stime=(1535701378, 0)
[2018-09-03 07:28:01.706876] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:28:01.803858] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:28:03.521949] I [master(worker /data/gluster-dist/distvol):1507:crawl] _GMaster: starting history crawl turns=2 stime=(1535701378, 0) entry_stime=(1535701378, 0) etime=1535959683
[2018-09-03 07:28:03.523086] I [master(worker /data/gluster-dist/distvol):1552:crawl] _GMaster: finished history crawl endtime=1535959677 stime=(1535701378, 0) entry_stime=(1535701378, 0)
[2018-09-03 07:28:04.62274] I [gsyncdstatus(worker /data/gluster-dist/distvol):…] GeorepStatus: Worker Crawl Status Change status=Changelog Crawl
*Sent:* Monday, September 3, 2018 12:44 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
The log is not complete. If you are re-trying, could you please try it out
on 4.1.3 and share the logs.
Thanks,
Kotresh HR
Hi Kotresh,
Please find the log files attached.
Request you to please have a look.
/Krishna
*Sent:* Monday, September 3, 2018 10:19 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
Indexing is the feature used by hybrid crawl, which only makes the crawl
faster. It has nothing to do with the missing data sync.
Could you please share the complete log file of the session where the
issue is encountered ?
Thanks,
Kotresh HR
Hi Kotresh/Support,
Request your help to get this fixed. My slave is not syncing with the master.
Only when I restart the session after turning indexing off does the file
show up at the slave, and even then it is blank with zero size.
At master: file size is 5.8 GB.
17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
But at the slave, after doing "indexing off", restarting the session, and
then waiting for 2 days, it shows only 4.9 GB copied.
17020_GPLV3.tar.gz
4.9G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
Similarly, I tested a small file of only 1.2 GB, which is still showing
size "0" at the slave after days of waiting.
1.2G rflowTestInt18.08-b001.t.Z
0 rflowTestInt18.08-b001.t.Z
Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
Please help to fix this; I believe this is not normal behavior for gluster's rsync.
/Krishna
*From:* Krishna Verma
*Sent:* Friday, August 31, 2018 12:42 PM
*Subject:* RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Hi Kotresh,
I have tested geo-replication over distributed volumes with a 2*2 gluster setup.

gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status

MASTER NODE          MASTER VOL     MASTER BRICK                  SLAVE USER    SLAVE                          SLAVE NODE         STATUS    CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterdist    /data/gluster-dist/distvol    root          gluster-poc-sj::glusterdist    gluster-poc-sj     Active    Changelog Crawl    2018-08-31 10:28:19
noi-poc-gluster      glusterdist    /data/gluster-dist/distvol    root          gluster-poc-sj::glusterdist    gluster-poc-sj2    Active    History Crawl      N/A
Now at the client I copied an 848 MB file from local disk to the master-mounted
volume and it took only 1 minute and 15 seconds. That's great.
But even after waiting for 2 hrs I was unable to see that file at the slave
site. Then I again erased the indexing by doing "gluster volume set
glusterdist indexing off" and restarted the session. Magically, I received
the file instantly at the slave after doing this.
Why do I need to do "indexing off" every time for data to show up at the
slave site? Is there any fix/workaround for it?
/Krishna
*Sent:* Friday, August 31, 2018 10:10 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Kotresh,
Yes, this includes the time taken to write the 1 GB file to the master.
geo-rep was not stopped while the data was copying to the master.
This way, you can't really measure how much time geo-rep took.
But now I am in trouble. My PuTTY session timed out while data was copying
to the master and geo-replication was active. After I restarted the PuTTY
session, my master data is not syncing with the slave. Its LAST_SYNCED time
is 1 hr behind the current time.
I restarted geo-rep and also deleted and re-created the session, but its
"LAST_SYNCED" time stays the same.
Unless geo-rep is Faulty, it would be processing/syncing. You should
check logs for any errors.
Please help in this.
"It's better if gluster volume has more distribute count like 3*3 or
4*3" :- Are you referring to creating a distributed volume with 3 master
nodes and 3 slave nodes?
Yes, that's correct. Please do the test with this. I recommend you to run
the actual workload for which you are planning to use gluster instead of
copying 1GB file and testing.
/krishna
*Sent:* Thursday, August 30, 2018 3:20 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Kotresh,
After fix the library link on node "noi-poc-gluster ", the status of one
mater node is “Active” and another is “Passive”. Can I setup both the
master as “Active” ?
Nope, since it's a replica, it's redundant to sync the same files from two
nodes. Both replicas can't be Active.
Also, when I copy a 1 GB file from the gluster client to the master gluster
volume, which is replicated to the slave volume, it took 35 minutes and
49 seconds. Is there any way to reduce the time taken to rsync the data?
How did you measure this time? Does this include the time taken for you to
write the 1 GB file to the master?
There are two aspects to consider while measuring this.
1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.
In your case, since the setup is 1*2 and only one geo-rep worker is
Active, step 2 above equals the time for step 1 + network transfer time.
You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master. It's one way.
2. Or stop geo-rep until the 1GB file is written to master and then start
geo-rep to get actual geo-rep time.
To improve replicating speed,
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep as it also uses rsync internally.
2. It's better if gluster volume has more distribute count like 3*3 or 4*3
It will help in two ways.
1. The files gets distributed on master to multiple bricks
2. So above will help geo-rep as files on multiple bricks are
synced in parallel (multiple Actives)
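For point 1, geo-replication exposes the rsync flags it uses through the session config. An illustrative, not prescriptive, example, assuming compressible data over a slow WAN link:

```shell
# Illustrative: pass extra flags to the rsync that geo-rep runs internally.
gluster volume geo-replication glusterep gluster-poc-sj::glusterep config rsync-options "--compress"

# Show the value currently in effect:
gluster volume geo-replication glusterep gluster-poc-sj::glusterep config rsync-options
```

Which flags actually help depends on the I/O pattern; compression, for instance, only pays off when the data is compressible and the link, not the CPU, is the bottleneck.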
NOTE: The Gluster master server and one client are in Noida, India. The
Gluster slave server and one client are in the USA.
Our goal is for data transferred from the Noida gluster client to reach
the USA gluster client in minimum time. Please suggest the best approach
to achieve it.
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)
Is this I/O time to write to master volume?
sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                              SLAVE NODE        STATUS     CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Active     Changelog Crawl    2018-08-30 13:42:18
noi-poc-gluster      glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Passive    N/A                N/A
Thanks in advance for your all time support.
/Krishna
*Sent:* Thursday, August 30, 2018 10:51 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's
a different issue.
-Kotresh HR
Hi Kotresh,
Thank you so much for your input. Geo-replication is now showing "Active"
for at least 1 master node. But it is still in the Faulty state for the 2nd
master server.
Below is the detail.
gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE        STATUS    CRAWL STATUS       LAST_SYNCED
-------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    gluster-poc-sj    Active    Changelog Crawl    2018-08-29 23:56:06
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A               Faulty    N/A                N/A
Status of volume: glusterep
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0  49152     0          Y       22463
Brick noi-poc-gluster:/data/gluster/gv0    49152     0          Y       19471
Self-heal Daemon on localhost              N/A       N/A        Y       32087
Self-heal Daemon on noi-poc-gluster        N/A       N/A        Y       6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks
Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
Could you please help me in that also please?
It would be really a great help from your side.
/Krishna
*Sent:* Wednesday, August 29, 2018 10:47 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Answer inline
Hi Kotresh,
I created the links before. Below is the detail.
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
The link created is pointing to the wrong library. Please fix this:
#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
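The fix above can be rehearsed in a scratch directory first; the demo below reproduces the intended link layout with dummy files (the real commands target /usr/lib64):

```shell
# Demo of the required link layout using a throwaway directory and empty files.
tmp=$(mktemp -d)
touch "$tmp/libgfchangelog.so.0.0.1"                       # the real library file
ln -s libgfchangelog.so.0.0.1 "$tmp/libgfchangelog.so.0"   # runtime soname link
ln -s libgfchangelog.so.0.0.1 "$tmp/libgfchangelog.so"     # link gsyncd dlopens
readlink "$tmp/libgfchangelog.so"                          # prints libgfchangelog.so.0.0.1
```

Note the link target is relative (no /usr/lib64/ prefix), matching Kotresh's `ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so`, so the link resolves within the same directory.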
lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
Does this look like what we need, or do I need to create any more links?
And how do I get the "libgfchangelog.so" file if it is missing?
/Krishna
*Sent:* Tuesday, August 28, 2018 4:22 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so" which
is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is
present in "/usr/lib64/".
If not, create a symlink similar to "libgfchangelog.so.0".
It should be something like below.
#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
Hi Kotresh,
Thanks for the response, I did that also but nothing changed.
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
/Krishna
*Sent:* Tuesday, August 28, 2018 4:00 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I did the changes mentioned in the patch and restarted the session for
geo-replication. But the same errors appear in the logs again.
I am attaching the config files and logs here.

gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster volume geo-replication glusterep gluster-poc-sj::glusterep delete
Deleting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster volume geo-replication glusterep gluster-poc-sj::glusterep create push-pem force
Creating geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
syncdaemon/repce.py
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A

/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
With the same log message?
Can you please verify that the patch
https://review.gluster.org/#/c/glusterfs/+/20207/
is present; if not, can you please apply it
and try with symlinking: ln -s /usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response. I tried both, but I am still getting the
same error.
lrwxrwxrwx. 1 root root 30 Aug 28 14:59 /usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solves your problem you can do manual symlink
like - ln -s /usr/lib64/libgfchangelog.so.1
/usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
[2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] RepceClient: call failed call=26469:139794524604224:1535440781.44 method=init error=OSError
[2018-08-28 07:19:41.447041] E [syncdutils(worker /data/gluster/gv0):330:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
[2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
[2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor] Monitor: worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What can I do to fix it?
/Krish
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
Krishna Verma
2018-09-03 11:11:01 UTC
Permalink
Hi Kotresh,

Gluster master site servers: gluster-poc-noida and noi-poc-gluster
Gluster slave site servers: gluster-poc-sj and gluster-poc-sj2

Master client: noi-foreman02
Slave client: sj-kverma

Step 1: Create a 10 GB LVM partition on all 4 Gluster nodes (2 master + 2 slave), format it with an ext4 filesystem, and mount it on each server.

[***@gluster-poc-noida distvol]# df -hT /data/gluster-dist
Filesystem                             Type  Size  Used  Avail  Use%  Mounted on
/dev/mapper/centos-gluster--vol--dist  ext4  9.8G  847M  8.4G   9%    /data/gluster-dist
[***@gluster-poc-noida distvol]#
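Step 1's commands are not shown; on CentOS with LVM they would look roughly like this (the device /dev/sdb is an assumption, and the VG/LV names are illustrative guesses inferred from the mapper path above):

```shell
# Sketch only: provision a 10 GB ext4 brick on each node.
pvcreate /dev/sdb
vgcreate centos /dev/sdb                    # or vgextend an existing VG
lvcreate -L 10G -n gluster-vol-dist centos
mkfs.ext4 /dev/centos/gluster-vol-dist
mkdir -p /data/gluster-dist
mount /dev/centos/gluster-vol-dist /data/gluster-dist
# add a matching /etc/fstab entry so the brick mount survives a reboot
```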
Step 2: Created a trusted storage pool as below:

At master:
[***@gluster-poc-noida distvol]# gluster peer status
Number of Peers: 1

Hostname: noi-poc-gluster
Uuid: 01316459-b5c8-461d-ad25-acc17a82e78f
State: Peer in Cluster (Connected)
[***@gluster-poc-noida distvol]#

At slave:
[***@gluster-poc-sj ~]# gluster peer status
Number of Peers: 1

Hostname: gluster-poc-sj2
Uuid: 6ba85bfe-cd74-4a76-a623-db687f7136fa
State: Peer in Cluster (Connected)
[***@gluster-poc-sj ~]#
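The probe commands that build the two pools above are not listed in the steps; they would be something like (run once per site):

```shell
# On the master site (from gluster-poc-noida):
gluster peer probe noi-poc-gluster

# On the slave site (from gluster-poc-sj):
gluster peer probe gluster-poc-sj2

# Verify on any node of each site:
gluster peer status
```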
Step 3: Created distributed volumes as below:

At master: "gluster volume create glusterdist gluster-poc-noida:/data/gluster-dist/distvol noi-poc-gluster:/data/gluster-dist/distvol"

[***@gluster-poc-noida distvol]# gluster volume info glusterdist
Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
[***@gluster-poc-noida distvol]#
At slave: "gluster volume create glusterdist gluster-poc-sj:/data/gluster-dist/distvol gluster-poc-sj2:/data/gluster-dist/distvol"
Volume Name: glusterdist
Type: Distribute
Volume ID: a982da53-a3d7-4b5a-be77-df85f584610d
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-sj:/data/gluster-dist/distvol
Brick2: gluster-poc-sj2:/data/gluster-dist/distvol
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
Step 4: Gluster geo-replication configuration

On all Gluster nodes: "yum install glusterfs-geo-replication.x86_64"
On the master node where I created the session:
ssh-keygen
ssh-copy-id ***@gluster-poc-sj
cp /root/.ssh/id_rsa.pub /var/lib/glusterd/geo-replication/secret.pem.pub
scp /var/lib/glusterd/geo-replication/secret.pem* ***@gluster-poc-sj:/var/lib/glusterd/geo-replication/
On the slave node:

ln -s /usr/libexec/glusterfs/gsyncd /nonexistent/gsyncd

On the master node:
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist create push-pem force
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist start
[***@gluster-poc-noida distvol]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj Active Changelog Crawl 2018-08-31 13:12:58
noi-poc-gluster glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj2 Active History Crawl N/A
[***@gluster-poc-noida distvol]#
On the Gluster client node at the master site:
yum install -y glusterfs-client
mkdir /distvol
mount -t glusterfs gluster-poc-noida:/glusterdist /distvol
[***@noi-foreman02 ~]# df -hT /distvol
Filesystem Type Size Used Avail Use% Mounted on
gluster-poc-noida:/glusterdist fuse.glusterfs 20G 9.6G 9.1G 52% /distvol
[***@noi-foreman02 ~]#
On the Gluster client at the slave site:
yum install -y glusterfs-client
mkdir /distvol
mount -t glusterfs gluster-poc-sj:/glusterdist /distvol
Now, to test the geo-replication setup:

I copied the below file from the client at the master site:
[***@noi-foreman02 distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@noi-foreman02 distvol]#
But in the last three days it has synced only 5.4 GB to the slave:
[***@sj-kverma distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.4G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@sj-kverma distvol]#
I have also tested another file of only 1 GB copied from the master client,
and it still shows size 0 at the slave client after 3 days.
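Since `du` reports allocated blocks, matching sizes do not prove the content arrived; comparing checksums on the two client mounts is more reliable. A minimal demo of the idea with throwaway files (on the real setup, run `md5sum` on the file under /distvol on each client and compare the digests):

```shell
# Stand-ins for the master and slave copies; replace with the /distvol paths.
master=$(mktemp); slave=$(mktemp)
printf 'same payload' > "$master"
printf 'same payload' > "$slave"
m=$(md5sum "$master" | cut -d' ' -f1)
s=$(md5sum "$slave"  | cut -d' ' -f1)
[ "$m" = "$s" ] && echo "in sync" || echo "still transferring or stalled"   # prints "in sync"
```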
/Krishna
From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Monday, September 3, 2018 3:17 PM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi krishna,
I see no error in the shared logs. The only error messages I see are during geo-rep stop. That is expected.
Could you share the steps you used to create the geo-rep setup?
Thanks,
Kotresh HR
On Mon, Sep 3, 2018 at 1:02 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,
Below is the cat output of the gsyncd.log file generated on my master server.

I am using version 4.1.3 on all my gluster nodes:
[***@gluster-poc-noida distvol]# gluster --version | grep glusterfs
glusterfs 4.1.3
[***@gluster-poc-noida distvol]# cat /var/log/glusterfs/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.log
[2018-09-03 04:01:52.424609] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 04:01:52.526323] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:41.326411] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:49.676120] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:50.406042] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:56:52.847537] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:03.778448] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:25.86958] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:25.855273] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:58:09.294239] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:59:39.255487] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:59:39.355753] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:00:26.311767] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:03:29.205226] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:03:30.131258] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:10:34.679677] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:10:35.653928] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:26:24.438854] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:26:25.495117] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.159113] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.216475] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.932451] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.988286] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.992789] E [syncdutils(worker /data/gluster-dist/distvol):305:log_raise_exception] <top>: connection to peer is broken
[2018-09-03 07:27:26.994750] E [syncdutils(worker /data/gluster-dist/distvol):801:errlog] Popen: command returned error cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-X8iHv1/86bcbaf188167a3859c3267081671312.sock gluster-poc-sj /nonexistent/gsyncd slave glusterdist gluster-poc-sj::glusterdist --master-node gluster-poc-noida --master-node-id 098c16c6-8dff-490a-a2e8-c8cb328fcbb3 --master-brick /data/gluster-dist/distvol --local-node gluster-poc-sj --local-node-id e54f2759-4c56-40dd-89e1-e10c3037d48b --slave-timeout 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/local/sbin/ error=255
[2018-09-03 07:27:26.994971] E [syncdutils(worker /data/gluster-dist/distvol):805:logerr] Popen: ssh> Killed by signal 15.
[2018-09-03 07:27:27.7174] I [repce(agent /data/gluster-dist/distvol):80:service_loop] RepceServer: terminating on reaching EOF.
[2018-09-03 07:27:27.15156] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
[2018-09-03 07:27:28.52725] I [gsyncd(monitor-status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:28.64521] I [subcmds(monitor-status):19:subcmd_monitor_status] <top>: Monitor Status Change status=Stopped
[2018-09-03 07:27:35.345937] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:35.444247] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:36.181122] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:36.281459] I [gsyncd(monitor):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:39.782480] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Initializing...
[2018-09-03 07:27:40.321157] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker brick=/data/gluster-dist/distvol slave_node=gluster-poc-sj
[2018-09-03 07:27:40.376172] I [gsyncd(agent /data/gluster-dist/distvol):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:40.377144] I [changelogagent(agent /data/gluster-dist/distvol):72:__init__] ChangelogAgent: Agent listining...
[2018-09-03 07:27:40.378150] I [gsyncd(worker /data/gluster-dist/distvol):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:40.391185] I [resource(worker /data/gluster-dist/distvol):1377:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-09-03 07:27:43.752819] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:43.848619] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:45.365627] I [resource(worker /data/gluster-dist/distvol):1424:connect_remote] SSH: SSH connection between master and slave established. duration=4.9743
[2018-09-03 07:27:45.365866] I [resource(worker /data/gluster-dist/distvol):1096:connect] GLUSTER: Mounting gluster volume locally...
[2018-09-03 07:27:46.388974] I [resource(worker /data/gluster-dist/distvol):1119:connect] GLUSTER: Mounted gluster volume duration=1.0230
[2018-09-03 07:27:46.389206] I [subcmds(worker /data/gluster-dist/distvol):70:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2018-09-03 07:27:48.401196] I [master(worker /data/gluster-dist/distvol):1593:register] _GMaster: Working dir path=/var/lib/misc/gluster/gsyncd/glusterdist_gluster-poc-sj_glusterdist/data-gluster-dist-distvol
[2018-09-03 07:27:48.401477] I [resource(worker /data/gluster-dist/distvol):1282:service_loop] GLUSTER: Register time time=1535959668
[2018-09-03 07:27:49.176095] I [gsyncdstatus(worker /data/gluster-dist/distvol):277:set_active] GeorepStatus: Worker Status Change status=Active
[2018-09-03 07:27:49.177079] I [gsyncdstatus(worker /data/gluster-dist/distvol):249:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl
[2018-09-03 07:27:49.177339] I [master(worker /data/gluster-dist/distvol):1507:crawl] _GMaster: starting history crawl turns=1 stime=(1535701378, 0) entry_stime=(1535701378, 0) etime=1535959669
[2018-09-03 07:27:50.179210] I [master(worker /data/gluster-dist/distvol):1536:crawl] _GMaster: slave's time stime=(1535701378, 0)
[2018-09-03 07:27:51.300096] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:51.399027] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:52.510271] I [master(worker /data/gluster-dist/distvol):1944:syncjob] Syncer: Sync Time Taken duration=1.6146 num_files=1 job=2 return_code=0
[2018-09-03 07:27:52.514487] I [master(worker /data/gluster-dist/distvol):1374:process] _GMaster: Entry Time Taken MKD=0 MKN=0 LIN=0 SYM=0 REN=1 RMD=0 CRE=0 duration=0.2745 UNL=0
[2018-09-03 07:27:52.514615] I [master(worker /data/gluster-dist/distvol):1384:process] _GMaster: Data/Metadata Time Taken SETA=1 SETX=0 meta_duration=0.2691 data_duration=1.7883 DATA=1 XATT=0
[2018-09-03 07:27:52.514844] I [master(worker /data/gluster-dist/distvol):1394:process] _GMaster: Batch Completed changelog_end=1535701379entry_stime=(1535701378, 0) changelog_start=1535701379 stime=(1535701378, 0) duration=2.3353 num_changelogs=1 mode=history_changelog
[2018-09-03 07:27:52.515224] I [master(worker /data/gluster-dist/distvol):1552:crawl] _GMaster: finished history crawl endtime=1535959662 stime=(1535701378, 0) entry_stime=(1535701378, 0)
[2018-09-03 07:28:01.706876] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:28:01.803858] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:28:03.521949] I [master(worker /data/gluster-dist/distvol):1507:crawl] _GMaster: starting history crawl turns=2 stime=(1535701378, 0) entry_stime=(1535701378, 0) etime=1535959683
[2018-09-03 07:28:03.523086] I [master(worker /data/gluster-dist/distvol):1552:crawl] _GMaster: finished history crawl endtime=1535959677 stime=(1535701378, 0) entry_stime=(1535701378, 0)
[2018-09-03 07:28:04.62274] I [gsyncdstatus(worker /data/gluster-dist/distvol):249:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=Changelog Crawl
[***@gluster-poc-noida distvol]#

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Monday, September 3, 2018 12:44 PM

To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
The log is not complete. If you are re-trying, could you please try it out on 4.1.3 and share the logs.
Thanks,
Kotresh HR

On Mon, Sep 3, 2018 at 12:42 PM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh,

Please find the log files attached.

Request you to please have a look.

/Krishna



From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Monday, September 3, 2018 10:19 AM

To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
Indexing is the feature used by the hybrid crawl, which only makes the crawl faster. It has nothing to do with the missing data sync.
Could you please share the complete log file of the session where the issue is encountered?
Thanks,
Kotresh HR

On Mon, Sep 3, 2018 at 9:33 AM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh/Support,

Requesting your help to get this fixed. My slave is not syncing with the master. Only when I restart the session after turning indexing off does the file show up at the slave, and even then it is blank, with zero size.

At master: file size is 5.8 GB.

[***@gluster-poc-noida distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@gluster-poc-noida distvol]#

But at the slave, after turning “indexing off”, restarting the session, and waiting for 2 days, it shows only 4.9 GB copied.

[***@gluster-poc-sj distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
4.9G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@gluster-poc-sj distvol]#

Similarly, I tested with a smaller 1.2 GB file, which is still showing a size of “0” at the slave after days of waiting.

At Master:

[***@gluster-poc-noida distvol]# du -sh rflowTestInt18.08-b001.t.Z
1.2G rflowTestInt18.08-b001.t.Z
[***@gluster-poc-noida distvol]#

At Slave:

[***@gluster-poc-sj distvol]# du -sh rflowTestInt18.08-b001.t.Z
0 rflowTestInt18.08-b001.t.Z
[***@gluster-poc-sj distvol]#

Below is my distributed volume info :

[***@gluster-poc-noida distvol]# gluster volume info glusterdist

Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
[***@gluster-poc-noida distvol]#

Please help me fix this; I believe this is not normal behavior for gluster's rsync.

/Krishna
From: Krishna Verma
Sent: Friday, August 31, 2018 12:42 PM
To: 'Kotresh Hiremath Ravishankar' <***@redhat.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Kotresh,

I have tested geo-replication over distributed volumes with a 2*2 gluster setup.

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj Active Changelog Crawl 2018-08-31 10:28:19
noi-poc-gluster glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj2 Active History Crawl N/A
[***@gluster-poc-noida ~]#

Now at the client I copied an 848 MB file from local disk to the mounted master volume, and it took only 1 minute and 15 seconds. That's great.

But even after waiting for 2 hrs I was unable to see that file at the slave site. Then I again cleared the indexing with “gluster volume set glusterdist indexing off” and restarted the session. After doing this, the file magically arrived at the slave instantly.

Why do I need to turn “indexing off” every time for data to appear at the slave site? Is there a fix/workaround for this?

/Krishna


From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Friday, August 31, 2018 10:10 AM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 3:51 PM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh,

Yes, this includes the time taken to write the 1GB file to the master. geo-rep was not stopped while the data was being copied to the master.

This way, you can't really measure how much time geo-rep took.


But now I am in trouble: my PuTTY session timed out while data was copying to the master and geo-replication was active. After I restarted the PuTTY session, my master data is not syncing with the slave. Its LAST_SYNCED time is 1 hr behind the current time.

I restarted geo-rep and also deleted and re-created the session, but its “LAST_SYNCED” time stays the same.

Unless geo-rep is Faulty, it should be processing/syncing. You should check the logs for any errors.


Please help in this.


“It's better if the gluster volume has a higher distribute count like 3*3 or 4*3” :- Are you referring to creating a distributed volume with 3 master nodes and 3 slave nodes?

Yes, that's correct. Please do the test with this. I recommend running the actual workload for which you are planning to use gluster, instead of testing by copying a 1GB file.



/krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Thursday, August 30, 2018 3:20 PM

To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh,

After fixing the library link on node "noi-poc-gluster", the status of one master node is “Active” and the other is “Passive”. Can I set up both masters as “Active”?

Nope, since it's a replica, it's redundant to sync the same files from two nodes. Both replicas can't be Active.


Also, when I copy a 1GB file from a gluster client to the master gluster volume, which is replicated to the slave volume, it took 35 minutes and 49 seconds. Is there any way to reduce the time taken to rsync the data?

How did you measure this time? Does this include the time taken for you to write the 1GB file to the master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active, step 2 above equals the time for step 1 plus the network transfer time.

You can measure the time in two scenarios:
1. Start geo-rep while the data is still being written to the master. That's one way.
2. Or stop geo-rep until the 1GB file is fully written to the master, then start geo-rep to get the actual geo-rep time.

To improve replication speed:
1. You can tune rsync options depending on the kind of I/O and configure the same for geo-rep, as it also uses rsync internally.
2. It's better if the gluster volume has a higher distribute count, like 3*3 or 4*3. That helps in two ways:
   a. Files get distributed across multiple bricks on the master.
   b. Geo-rep then syncs the files on multiple bricks in parallel (multiple Actives).

NOTE: Gluster master server and one client is in Noida, India Location.
Gluster Slave server and one client is in USA.

Our goal is for data written from the Noida gluster client to reach the USA gluster client in minimum time. Please suggest the best approach to achieve this.

[***@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img /glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)

Is this the I/O time to write to the master volume?

sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
[***@noi-dcops ~]#



[***@gluster-poc-noida gluster]# gluster volume geo-replication status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-30 13:42:18
noi-poc-gluster glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Passive N/A N/A
[***@gluster-poc-noida gluster]#
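As a side note, the LAST_SYNCED timestamp in the status output above can be turned into a rough replication-lag figure with a few lines of Python. This is just a sketch of ours (the function name is made up; the timestamp format matches what the status command prints here):

```python
from datetime import datetime

def sync_lag_seconds(last_synced: str, reference: str) -> float:
    """Seconds between a LAST_SYNCED timestamp (as printed by
    `gluster volume geo-replication ... status`) and a reference time."""
    fmt = "%Y-%m-%d %H:%M:%S"
    return (datetime.strptime(reference, fmt)
            - datetime.strptime(last_synced, fmt)).total_seconds()

# Lag if the status were read one hour after the LAST_SYNCED shown above:
print(sync_lag_seconds("2018-08-30 13:42:18", "2018-08-30 14:42:18"))  # 3600.0
```

Polling this against the wall clock is one way to see whether the worker is actually making progress or stuck.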

Thanks in advance for your all time support.

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Thursday, August 30, 2018 10:51 AM

To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster " as well?
If not please fix it. Please share the geo-rep log this node if it's
as different issue.
-Kotresh HR

On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh,

Thank you so much for your input. Geo-replication is now showing “Active” for at least 1 master node. But it is still in a Faulty state for the 2nd master server.

Below is the detail.

[***@gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-29 23:56:06
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A


[***@gluster-poc-noida glusterfs]# gluster volume status
Status of volume: glusterep
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0 49152 0 Y 22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0 Y 19471
Self-heal Daemon on localhost N/A N/A Y 32087
Self-heal Daemon on noi-poc-gluster N/A N/A Y 6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks



[***@gluster-poc-noida glusterfs]# gluster volume info

Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[***@gluster-poc-noida glusterfs]#

Could you please help me with that as well?

It would really be a great help.

/Krishna
From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Wednesday, August 29, 2018 10:47 AM

To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Answer inline

On Tue, Aug 28, 2018 at 4:28 PM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh,

I created the links before. Below is the detail.

[***@gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1

The link created is pointing to the wrong library. Please fix it:

#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
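The same fix can be scripted idempotently with `ln -sfn`, which replaces a bad link in one step. A sketch, demonstrated here in a throwaway directory so it is safe to run anywhere; on the affected node the directory would be /usr/lib64 and the commands would need root:

```shell
libdir=$(mktemp -d)                                 # scratch dir standing in for /usr/lib64
touch "$libdir/libgfchangelog.so.0.0.1"             # stands in for the installed library
ln -sfn libgfchangelog.so.0.0.1 "$libdir/libgfchangelog.so"   # -f replaces any existing (wrong) link
readlink "$libdir/libgfchangelog.so"                # prints: libgfchangelog.so.0.0.1
```

Using a relative link target (as above, and as in the fix quoted here) keeps the link valid even if the filesystem is mounted under a different prefix.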

lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]# locate libgfchangelog.so
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]#

Does this look like what we need, or do I need to create any more links? And how do I get the “libgfchangelog.so” file if it is missing?

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Tuesday, August 28, 2018 4:22 PM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so" which is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is present in "/usr/lib64/".
If not create a symlink similar to "libgfchangelog.so.0"

It should be something like below.

#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
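Once the symlink looks like the listing above, a quick way to confirm the dynamic linker can actually resolve the library, independently of restarting geo-rep, is a small ctypes probe (the worker fails with OSError from the same kind of ctypes lookup). This helper is our own check, not part of gsyncd:

```python
import ctypes

def lib_loads(soname: str) -> bool:
    """True if the dynamic linker can resolve `soname`."""
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

# On the fixed node one would expect:
#   lib_loads("libgfchangelog.so")  -> True
# As a sanity check that the probe itself works, try a library that
# resolves on a typical glibc Linux system:
print(lib_loads("libc.so.6"))
```

If this returns False for "libgfchangelog.so" after the symlink is in place, re-running `ldconfig` (or checking LD_LIBRARY_PATH for the geo-rep processes) would be the next step.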

On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh,

Thanks for the response, I did that also but nothing changed.

[***@gluster-poc-noida ~]# ldconfig /usr/lib64
[***@gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
[***@gluster-poc-noida ~]#

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
[***@gluster-poc-noida ~]#

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Tuesday, August 28, 2018 4:00 PM
To: Sunny Kumar <***@redhat.com>
Cc: Krishna Verma <***@cadence.com>; Gluster Users <gluster-***@gluster.org>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog


On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar <***@redhat.com> wrote:
Can you do ldconfig /usr/local/lib and share the output of ldconfig -p /usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I made the changes given in the patch and restarted the geo-replication session. But the logs show the same errors again.
I am attaching the config files and logs here.
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Creating geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
With same log message ?
Can you please verify that the patch https://review.gluster.org/#/c/glusterfs/+/20207/ is present; if not, can you please apply it.
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so.
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response. I tried both, but I am still getting the same error.
lrwxrwxrwx. 1 root root 30 Aug 28 14:59 /usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solve your problem you can do a manual symlink like - ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
[2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] RepceClient: call failed call=26469:139794524604224:1535440781.44 method=init error=OSError
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such file or directory
[2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What can I do to fix it?
/Krish
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
Krishna Verma
2018-09-06 03:38:48 UTC
Permalink
Hi Kotresh,

Did you get a chance to look into this?

For the replicated gluster volume, the master is still not syncing with the slave.

At Master :
[***@gluster-poc-noida ~]# du -sh /repvol/rflowTestInt18.08-b001.t.Z
1.2G /repvol/rflowTestInt18.08-b001.t.Z
[***@gluster-poc-noida ~]#

At Slave:
[***@gluster-poc-sj ~]# du -sh /repvol/rflowTestInt18.08-b001.t.Z
du: cannot access ‘/repvol/rflowTestInt18.08-b001.t.Z’: No such file or directory
[***@gluster-poc-sj ~]#

The file has not reached the slave.

/Krishna

From: Krishna Verma
Sent: Monday, September 3, 2018 4:41 PM
To: 'Kotresh Hiremath Ravishankar' <***@redhat.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Kotresh,

Gluster Master Site Servers : gluster-poc-noida and noi-poc-gluster
Gluster Slave site servers: gluster-poc-sj and gluster-poc-sj2

Master Client : noi-foreman02
Slave Client: sj-kverma

Step 1: Create a 10 GB LVM partition on all 4 Gluster nodes (2 master + 2 slave), format it with an ext4 filesystem, and mount it on each server.

[***@gluster-poc-noida distvol]# df -hT /data/gluster-dist
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-gluster--vol--dist ext4 9.8G 847M 8.4G 9% /data/gluster-dist
[***@gluster-poc-noida distvol]#

Step 2: Created a Trusted storage pool as below:

At Master:
[***@gluster-poc-noida distvol]# gluster peer status
Number of Peers: 1

Hostname: noi-poc-gluster
Uuid: 01316459-b5c8-461d-ad25-acc17a82e78f
State: Peer in Cluster (Connected)
[***@gluster-poc-noida distvol]#

At Slave:
[***@gluster-poc-sj ~]# gluster peer status
Number of Peers: 1

Hostname: gluster-poc-sj2
Uuid: 6ba85bfe-cd74-4a76-a623-db687f7136fa
State: Peer in Cluster (Connected)
[***@gluster-poc-sj ~]#

Step 3: Created distributed volume as below:

At Master: “gluster volume create glusterdist gluster-poc-noida:/data/gluster-dist/distvol noi-poc-gluster:/data/gluster-dist/distvol”

[***@gluster-poc-noida distvol]# gluster volume info glusterdist

Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
[***@gluster-poc-noida distvol]#

At Slave “ gluster volume create glusterdist gluster-poc-sj:/data/gluster-dist/distvol gluster-poc-sj2:/data/gluster-dist/distvol”

Volume Name: glusterdist
Type: Distribute
Volume ID: a982da53-a3d7-4b5a-be77-df85f584610d
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-sj:/data/gluster-dist/distvol
Brick2: gluster-poc-sj2:/data/gluster-dist/distvol
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

Step 4 : Gluster Geo Replication configuration

On all Gluster node: “yum install glusterfs-geo-replication.x86_64”
On master node where I created session:
ssh-keygen
ssh-copy-id ***@gluster-poc-sj
cp /root/.ssh/id_rsa.pub /var/lib/glusterd/geo-replication/secret.pem.pub
scp /var/lib/glusterd/geo-replication/secret.pem* ***@gluster-poc-sj:/var/lib/glusterd/geo-replication/
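The key-distribution steps above can be consolidated into one sketch. Hostnames are the ones from this thread; the remote-copy steps are left as comments so the sketch is safe to run anywhere, and a dedicated key file in a temp dir is used instead of reusing /root/.ssh/id_rsa (an assumption, not what the original commands did):

```shell
# Generate a dedicated geo-rep keypair in a scratch directory.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/georep_key"

# On a real master node you would then distribute it (commented out here):
#   ssh-copy-id -i "$keydir/georep_key.pub" root@gluster-poc-sj
#   cp "$keydir/georep_key.pub" /var/lib/glusterd/geo-replication/secret.pem.pub
#   cp "$keydir/georep_key"     /var/lib/glusterd/geo-replication/secret.pem
#   scp /var/lib/glusterd/geo-replication/secret.pem* \
#       root@gluster-poc-sj:/var/lib/glusterd/geo-replication/
ls "$keydir"
```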

On Slave Node:

ln -s /usr/libexec/glusterfs/gsyncd /nonexistent/gsyncd

On Master Node:

gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist create push-pem force
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist start

[***@gluster-poc-noida distvol]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj Active Changelog Crawl 2018-08-31 13:12:58
noi-poc-gluster glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj2 Active History Crawl N/A
[***@gluster-poc-noida distvol]#

On Gluster Client Node at master Site:

yum install -y glusterfs-client
mkdir /distvol
mount -t glusterfs gluster-poc-noida:/glusterdist /distvol
[***@noi-foreman02 ~]# df -hT /distvol
Filesystem Type Size Used Avail Use% Mounted on
gluster-poc-noida:/glusterdist fuse.glusterfs 20G 9.6G 9.1G 52% /distvol
[***@noi-foreman02 ~]#

On Gluster Client at Slave site:
yum install -y glusterfs-client
mkdir /distvol
mount -t glusterfs gluster-poc-sj:/glusterdist /distvol

Now, to test the geo-replication setup:

I copied the file below from the client at the master site:
[***@noi-foreman02 distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@noi-foreman02 distvol]#

But in the last three days it has synced only 5.4 GB to the slave:
[***@sj-kverma distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.4G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@sj-kverma distvol]#

I have also tested another file of only 1 GB copied from the master client, and it still shows 0 size at the slave client after 3 days.
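Note that du -sh only shows how many blocks have landed; to confirm a file has actually finished syncing you can compare checksums on the two client mounts. A minimal sketch, with two temp dirs standing in for the /distvol mounts so it can run anywhere:

```shell
# Temp dirs stand in for the master and slave client mounts (/distvol).
master_mnt=$(mktemp -d)
slave_mnt=$(mktemp -d)
echo "payload" > "$master_mnt/big.tar.gz"
cp "$master_mnt/big.tar.gz" "$slave_mnt/big.tar.gz"   # stands in for geo-rep

# Compare checksums; a partially synced file would differ.
m=$(md5sum "$master_mnt/big.tar.gz" | awk '{print $1}')
s=$(md5sum "$slave_mnt/big.tar.gz" | awk '{print $1}')
if [ "$m" = "$s" ]; then status="in sync"; else status="NOT in sync"; fi
echo "$status"
```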

/Krishna



From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Monday, September 3, 2018 3:17 PM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
I see no errors in the shared logs. The only error messages I see are during geo-rep stop, which is expected.
Could you share the steps you used to create the geo-rep setup?
Thanks,
Kotresh HR

On Mon, Sep 3, 2018 at 1:02 PM, Krishna Verma <***@cadence.com> wrote:
Hi Kotesh,

Below is the output of the gsyncd.log file generated on my master server.

And I am using version 4.1.3 on all my gluster nodes.
[***@gluster-poc-noida distvol]# gluster --version | grep glusterfs
glusterfs 4.1.3


[***@gluster-poc-noida distvol]# cat /var/log/glusterfs/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.log
[2018-09-03 04:01:52.424609] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 04:01:52.526323] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:41.326411] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:49.676120] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:50.406042] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:56:52.847537] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:03.778448] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:25.86958] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:25.855273] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:58:09.294239] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:59:39.255487] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:59:39.355753] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:00:26.311767] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:03:29.205226] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:03:30.131258] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:10:34.679677] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:10:35.653928] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:26:24.438854] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:26:25.495117] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.159113] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.216475] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.932451] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.988286] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.992789] E [syncdutils(worker /data/gluster-dist/distvol):305:log_raise_exception] <top>: connection to peer is broken
[2018-09-03 07:27:26.994750] E [syncdutils(worker /data/gluster-dist/distvol):801:errlog] Popen: command returned error cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-X8iHv1/86bcbaf188167a3859c3267081671312.sock gluster-poc-sj /nonexistent/gsyncd slave glusterdist gluster-poc-sj::glusterdist --master-node gluster-poc-noida --master-node-id 098c16c6-8dff-490a-a2e8-c8cb328fcbb3 --master-brick /data/gluster-dist/distvol --local-node gluster-poc-sj --local-node-id e54f2759-4c56-40dd-89e1-e10c3037d48b --slave-timeout 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/local/sbin/ error=255
[2018-09-03 07:27:26.994971] E [syncdutils(worker /data/gluster-dist/distvol):805:logerr] Popen: ssh> Killed by signal 15.
[2018-09-03 07:27:27.7174] I [repce(agent /data/gluster-dist/distvol):80:service_loop] RepceServer: terminating on reaching EOF.
[2018-09-03 07:27:27.15156] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
[2018-09-03 07:27:28.52725] I [gsyncd(monitor-status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:28.64521] I [subcmds(monitor-status):19:subcmd_monitor_status] <top>: Monitor Status Change status=Stopped
[2018-09-03 07:27:35.345937] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:35.444247] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:36.181122] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:36.281459] I [gsyncd(monitor):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:39.782480] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Initializing...
[2018-09-03 07:27:40.321157] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker brick=/data/gluster-dist/distvol slave_node=gluster-poc-sj
[2018-09-03 07:27:40.376172] I [gsyncd(agent /data/gluster-dist/distvol):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:40.377144] I [changelogagent(agent /data/gluster-dist/distvol):72:__init__] ChangelogAgent: Agent listining...
[2018-09-03 07:27:40.378150] I [gsyncd(worker /data/gluster-dist/distvol):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:40.391185] I [resource(worker /data/gluster-dist/distvol):1377:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-09-03 07:27:43.752819] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:43.848619] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:45.365627] I [resource(worker /data/gluster-dist/distvol):1424:connect_remote] SSH: SSH connection between master and slave established. duration=4.9743
[2018-09-03 07:27:45.365866] I [resource(worker /data/gluster-dist/distvol):1096:connect] GLUSTER: Mounting gluster volume locally...
[2018-09-03 07:27:46.388974] I [resource(worker /data/gluster-dist/distvol):1119:connect] GLUSTER: Mounted gluster volume duration=1.0230
[2018-09-03 07:27:46.389206] I [subcmds(worker /data/gluster-dist/distvol):70:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2018-09-03 07:27:48.401196] I [master(worker /data/gluster-dist/distvol):1593:register] _GMaster: Working dir path=/var/lib/misc/gluster/gsyncd/glusterdist_gluster-poc-sj_glusterdist/data-gluster-dist-distvol
[2018-09-03 07:27:48.401477] I [resource(worker /data/gluster-dist/distvol):1282:service_loop] GLUSTER: Register time time=1535959668
[2018-09-03 07:27:49.176095] I [gsyncdstatus(worker /data/gluster-dist/distvol):277:set_active] GeorepStatus: Worker Status Change status=Active
[2018-09-03 07:27:49.177079] I [gsyncdstatus(worker /data/gluster-dist/distvol):249:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl
[2018-09-03 07:27:49.177339] I [master(worker /data/gluster-dist/distvol):1507:crawl] _GMaster: starting history crawl turns=1 stime=(1535701378, 0) entry_stime=(1535701378, 0) etime=1535959669
[2018-09-03 07:27:50.179210] I [master(worker /data/gluster-dist/distvol):1536:crawl] _GMaster: slave's time stime=(1535701378, 0)
[2018-09-03 07:27:51.300096] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:51.399027] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:52.510271] I [master(worker /data/gluster-dist/distvol):1944:syncjob] Syncer: Sync Time Taken duration=1.6146 num_files=1 job=2 return_code=0
[2018-09-03 07:27:52.514487] I [master(worker /data/gluster-dist/distvol):1374:process] _GMaster: Entry Time Taken MKD=0 MKN=0 LIN=0 SYM=0 REN=1 RMD=0 CRE=0 duration=0.2745 UNL=0
[2018-09-03 07:27:52.514615] I [master(worker /data/gluster-dist/distvol):1384:process] _GMaster: Data/Metadata Time Taken SETA=1 SETX=0 meta_duration=0.2691 data_duration=1.7883 DATA=1 XATT=0
[2018-09-03 07:27:52.514844] I [master(worker /data/gluster-dist/distvol):1394:process] _GMaster: Batch Completed changelog_end=1535701379entry_stime=(1535701378, 0) changelog_start=1535701379 stime=(1535701378, 0) duration=2.3353 num_changelogs=1 mode=history_changelog
[2018-09-03 07:27:52.515224] I [master(worker /data/gluster-dist/distvol):1552:crawl] _GMaster: finished history crawl endtime=1535959662 stime=(1535701378, 0) entry_stime=(1535701378, 0)
[2018-09-03 07:28:01.706876] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:28:01.803858] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:28:03.521949] I [master(worker /data/gluster-dist/distvol):1507:crawl] _GMaster: starting history crawl turns=2 stime=(1535701378, 0) entry_stime=(1535701378, 0) etime=1535959683
[2018-09-03 07:28:03.523086] I [master(worker /data/gluster-dist/distvol):1552:crawl] _GMaster: finished history crawl endtime=1535959677 stime=(1535701378, 0) entry_stime=(1535701378, 0)
[2018-09-03 07:28:04.62274] I [gsyncdstatus(worker /data/gluster-dist/distvol):249:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=Changelog Crawl
[***@gluster-poc-noida distvol]#

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Monday, September 3, 2018 12:44 PM

To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Krishna,
The log is not complete. If you are re-trying, could you please try it out on 4.1.3 and share the logs?
Thanks,
Kotresh HR

On Mon, Sep 3, 2018 at 12:42 PM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh,

Please find the log files attached.

Request you to please have a look.

/Krishna



From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Monday, September 3, 2018 10:19 AM

To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Krishna,
Indexing is the feature used by hybrid crawl, which only makes the crawl faster. It has nothing to do with the missing data sync.
Could you please share the complete log file of the session where the issue is encountered ?
Thanks,
Kotresh HR

On Mon, Sep 3, 2018 at 9:33 AM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh/Support,

Requesting your help to get this fixed. My slave is not syncing with the master. Only when I restart the session after setting indexing off does the file show up at the slave, and even then it is blank with zero size.

At master: file size is 5.8 GB.

[***@gluster-poc-noida distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@gluster-poc-noida distvol]#

But at the slave, after setting “indexing off”, restarting the session, and waiting for 2 days, it shows only 4.9 GB copied.

[***@gluster-poc-sj distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
4.9G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@gluster-poc-sj distvol]#

Similarly, I tested a small file of only 1.2 GB, and it still shows “0” size at the slave after days of waiting.

At Master:

[***@gluster-poc-noida distvol]# du -sh rflowTestInt18.08-b001.t.Z
1.2G rflowTestInt18.08-b001.t.Z
[***@gluster-poc-noida distvol]#

At Slave:

[***@gluster-poc-sj distvol]# du -sh rflowTestInt18.08-b001.t.Z
0 rflowTestInt18.08-b001.t.Z
[***@gluster-poc-sj distvol]#

Below is my distributed volume info :

[***@gluster-poc-noida distvol]# gluster volume info glusterdist

Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
[***@gluster-poc-noida distvol]#

Please help me fix this; I believe this is not normal behavior for gluster's rsync.

/Krishna
From: Krishna Verma
Sent: Friday, August 31, 2018 12:42 PM
To: 'Kotresh Hiremath Ravishankar' <***@redhat.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Kotresh,

I have tested the geo replication over distributed volumes with 2*2 gluster setup.

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj Active Changelog Crawl 2018-08-31 10:28:19
noi-poc-gluster glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj2 Active History Crawl N/A
[***@gluster-poc-noida ~]#

Now at the client I copied an 848 MB file from local disk to the master mounted volume, and it took only 1 minute and 15 seconds. That's great.

But even after waiting for 2 hours I was unable to see that file at the slave site. Then I again erased the indexing by doing “gluster volume set glusterdist indexing off” and restarted the session. Magically, I received the file at the slave instantly after doing this.

Why do I need to set “indexing off” every time for data to appear at the slave site? Is there any fix or workaround for this?

/Krishna


From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Friday, August 31, 2018 10:10 AM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work



On Thu, Aug 30, 2018 at 3:51 PM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh,

Yes, this includes the time taken to write the 1 GB file to the master. Geo-rep was not stopped while the data was being copied to the master.

This way, you can't really measure how much time geo-rep took.


But now I am in trouble. My PuTTY session timed out while copying data to the master while geo-replication was active. After I restarted the PuTTY session, my master data is not syncing with the slave. Its LAST_SYNCED time is 1 hour behind the current time.

I restarted geo-rep and also deleted and re-created the session, but its “LAST_SYNCED” time is the same.

Unless geo-rep is Faulty, it should be processing/syncing. You should check the logs for any errors.


Please help in this.


Regarding “It's better if the gluster volume has a higher distribute count like 3*3 or 4*3”: are you referring to creating a distributed volume with 3 master nodes and 3 slave nodes?

Yes, that's correct. Please do the test with this. I recommend you run the actual workload for which you plan to use gluster instead of testing by copying a 1 GB file.



/krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Thursday, August 30, 2018 3:20 PM

To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work



On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh,

After fixing the library link on node "noi-poc-gluster", the status of one master node is “Active” and the other is “Passive”. Can I set up both masters as “Active”?

Nope; since it's a replica, it's redundant to sync the same files from two nodes. Both replicas can't be Active.


Also, when I copy a 1 GB file from the gluster client to the master gluster volume, which is replicated to the slave volume, it took 35 minutes and 49 seconds. Is there any way to reduce the time taken to rsync the data?

How did you measure this time? Does this include the time taken for you to write the 1 GB file to the master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active, step 2 above equals the time for step 1 plus the network transfer time.

You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master. It's one way.
2. Or stop geo-rep until the 1GB file is written to master and then start geo-rep to get actual geo-rep time.
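Scenario 2 can be sketched as follows. Local temp dirs stand in for the master and slave mounts and a background cp stands in for geo-rep, so the sketch runs anywhere; on real nodes you would stop geo-rep, write the file to the master mount, start geo-rep, and poll the slave mount the same way:

```shell
master=$(mktemp -d)
slave=$(mktemp -d)

# Step 1: write the payload to the "master" (timed separately from sync).
dd if=/dev/zero of="$master/payload" bs=1M count=4 2>/dev/null

# Step 2: time only the transfer to the "slave".
start=$(date +%s)
cp "$master/payload" "$slave/payload" &   # stands in for geo-rep syncing

# Poll the slave until its copy matches the master byte-for-byte.
while ! cmp -s "$master/payload" "$slave/payload"; do
    sleep 1
done
wait
elapsed=$(( $(date +%s) - start ))
echo "transfer took ${elapsed}s"
```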

To improve replication speed:
1. You can play around with rsync options depending on the kind of I/O, and configure the same for geo-rep, as it also uses rsync internally.
2. It's better if the gluster volume has a higher distribute count like 3*3 or 4*3. This helps in two ways:
   - Files get distributed across multiple bricks on the master.
   - Geo-rep then syncs files on multiple bricks in parallel (multiple Actives).
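For reference, these suggestions translate roughly to the commands below. Volume, host, and option names are illustrative; in particular, the rsync-options config key should be verified against your release with "gluster volume geo-replication <master-vol> <slave-host>::<slave-vol> config" before relying on it.

```shell
# 1. Tune the rsync invocation geo-rep uses internally (example option only;
#    verify the config key name on your gluster release first):
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist \
    config rsync-options "--compress"

# 2. A distribute count of 3 on the master means 3 bricks, hence up to
#    3 Active geo-rep workers syncing in parallel (node3 is hypothetical):
gluster volume create glusterdist \
    gluster-poc-noida:/data/gluster-dist/distvol \
    noi-poc-gluster:/data/gluster-dist/distvol \
    node3:/data/gluster-dist/distvol
```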

NOTE: The gluster master server and one client are in Noida, India.
The gluster slave server and one client are in the USA.

Our goal is for data written from the Noida gluster client to reach the USA gluster client in minimum time. Please suggest the best approach to achieve this.

[***@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img /glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)

Is this I/O time to write to master volume?

sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
[***@noi-dcops ~]#



[***@gluster-poc-noida gluster]# gluster volume geo-replication status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-30 13:42:18
noi-poc-gluster glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Passive N/A N/A
[***@gluster-poc-noida gluster]#

Thanks in advance for your all time support.

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Thursday, August 30, 2018 10:51 AM

To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node if it's a different issue.
-Kotresh HR

On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh,

Thank you so much for your input. Geo-replication is now showing “Active” for at least one master node, but it is still in the Faulty state for the second master server.

Below is the detail.

[***@gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-29 23:56:06
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A


[***@gluster-poc-noida glusterfs]# gluster volume status
Status of volume: glusterep
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0 49152 0 Y 22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0 Y 19471
Self-heal Daemon on localhost N/A N/A Y 32087
Self-heal Daemon on noi-poc-gluster N/A N/A Y 6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks



[***@gluster-poc-noida glusterfs]# gluster volume info

Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[***@gluster-poc-noida glusterfs]#

Could you please help me with that as well?

It would really be a great help.

/Krishna
From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Wednesday, August 29, 2018 10:47 AM

To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Answer inline

On Tue, Aug 28, 2018 at 4:28 PM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh,

I created the links before. Below are the details.

[***@gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1

The link created is pointing to the wrong library. Please fix it:

#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
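The fix above can be demonstrated end to end in a temp dir so it is safe to run anywhere; on the real node the directory is /usr/lib64 and you would finish with ldconfig /usr/lib64:

```shell
libdir=$(mktemp -d)                       # stands in for /usr/lib64
touch "$libdir/libgfchangelog.so.0.0.1"   # the real library file
ln -s libgfchangelog.so.0.0.1 "$libdir/libgfchangelog.so.0"  # runtime symlink
ln -s libgfchangelog.so.0.0.1 "$libdir/libgfchangelog.so"    # dev symlink gsyncd needs
target=$(readlink "$libdir/libgfchangelog.so")
echo "$target"   # prints libgfchangelog.so.0.0.1
```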

lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]# locate libgfchangelog.so
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]#

Does this look like what we need, or do I need to create more links? And how do I get the “libgfchangelog.so” file if it is missing?

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Tuesday, August 28, 2018 4:22 PM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so", which is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is present in "/usr/lib64/".
If not, create a symlink similar to "libgfchangelog.so.0".

It should be something like below.

#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1

On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh,

Thanks for the response, I did that also but nothing changed.

[***@gluster-poc-noida ~]# ldconfig /usr/lib64
[***@gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
[***@gluster-poc-noida ~]#

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
[***@gluster-poc-noida ~]#

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Tuesday, August 28, 2018 4:00 PM
To: Sunny Kumar <***@redhat.com>
Cc: Krishna Verma <***@cadence.com>; Gluster Users <gluster-***@gluster.org>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog
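As a sanity check that the cache rebuild worked; safe to run as any user, since ldconfig -p only reads the cache, and the fallback message simply appears on hosts without glusterfs installed:

```shell
# Look the library up in the linker cache the same way the loader will.
out=$( (ldconfig -p 2>/dev/null || /sbin/ldconfig -p 2>/dev/null) \
       | grep libgfchangelog || echo "libgfchangelog not in linker cache" )
echo "$out"
```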


On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar <***@redhat.com> wrote:
Can you do ldconfig /usr/local/lib and share the output of ldconfig -p | grep libgf?
Post by Krishna Verma
Hi Sunny,
I did the mentioned changes given in patch and restart the session for geo-replication. But again same errors in the logs.
I have attaching the config files and logs here.
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Creating geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
With the same log message?
Can you please verify that the patch at
https://urldefense.proofpoint.com/v2/url?u=https-3A__review.gluster.org_-23_c_glusterfs_-2B_20207_&d=DwIBaQ&c=aUq983L2pue2FqKFoP6PGHMJQyoJ7kl3s3GZ-_haXqY&r=0E5nRoxLsT2ZXgCpJM_6ZItAWQ2jH8rVLG6tiXhoLFE&m=F0ExtFUfa_YCktOGvy82x3IAxvi2GrbPR72jZ8beuYk&s=fGtkmezHJj5YoLN3dUeVUCcYFnREHyOSk36mRjbTTEQ&e= is present; if not, can you please apply it.
Then try symlinking: ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so
Please share the log as well.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response. I tried both, but I am still getting the same error.
# ls -l /usr/lib64/libgfchangelog.so
lrwxrwxrwx. 1 root root 30 Aug 28 14:59 /usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solve your problem, you can create a manual symlink like:
ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] RepceClient: call failed call=26469:139794524604224:1535440781.44 method=init error=OSError
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What can I do to fix it?
/Krish
_______________________________________________
Gluster-users mailing list
Gluster-***@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
Kotresh Hiremath Ravishankar
2018-09-06 07:51:21 UTC
Permalink
Could you append something to this file and check whether it gets synced
now?
Post by Krishna Verma
Hi Kotresh,
Did you get a chance to look into this?
For the replicated gluster volume, the master is still not getting synced with the slave.
1.2G /repvol/rflowTestInt18.08-b001.t.Z
du: cannot access ‘/repvol/rflowTestInt18.08-b001.t.Z’: No such file or directory
The file has not reached the slave.
/Krishna
*From:* Krishna Verma
*Sent:* Monday, September 3, 2018 4:41 PM
*Subject:* RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Gluster Master Site Servers : gluster-poc-noida and noi-poc-gluster
Gluster Slave site servers: gluster-poc-sj and gluster-poc-sj2
Master Client : noi-foreman02
Slave Client: sj-kverma
Step 1: Create an LVM partition of 10 GB on all 4 Gluster nodes (2 master, 2 slave), format it with an ext4 filesystem, and mount it on each server.
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-gluster--vol--dist ext4 9.8G 847M 8.4G 9% /data/gluster-dist
Number of Peers: 1
Hostname: noi-poc-gluster
Uuid: 01316459-b5c8-461d-ad25-acc17a82e78f
State: Peer in Cluster (Connected)
Number of Peers: 1
Hostname: gluster-poc-sj2
Uuid: 6ba85bfe-cd74-4a76-a623-db687f7136fa
State: Peer in Cluster (Connected)
At Master: “gluster volume create glusterdist gluster-poc-noida:/data/gluster-dist/distvol
noi-poc-gluster:/data/gluster-dist/distvol”
Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
At Slave “ gluster volume create glusterdist gluster-poc-sj:/data/gluster-dist/distvol
gluster-poc-sj2:/data/gluster-dist/distvol”
Volume Name: glusterdist
Type: Distribute
Volume ID: a982da53-a3d7-4b5a-be77-df85f584610d
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Brick1: gluster-poc-sj:/data/gluster-dist/distvol
Brick2: gluster-poc-sj2:/data/gluster-dist/distvol
transport.address-family: inet
nfs.disable: on
Step 4 : Gluster Geo Replication configuration
On all Gluster node: “yum install glusterfs-geo-replication.x86_64”
*ssh-keygen*
*cp /root/.ssh/id_rsa.pub /var/lib/glusterd/geo-replication/secret.pem.pub*
*scp /var/lib/glusterd/geo-replication/secret.pem*
*On Slave Node: *
*ln -s /usr/libexec/glusterfs/gsyncd
/nonexistent/gsyncd *
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist
create push-pem force
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist start
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status
MASTER NODE          MASTER VOL     MASTER BRICK                  SLAVE USER    SLAVE                          SLAVE NODE         STATUS    CRAWL STATUS       LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterdist    /data/gluster-dist/distvol    root          gluster-poc-sj::glusterdist    gluster-poc-sj     Active    Changelog Crawl    2018-08-31 13:12:58
noi-poc-gluster      glusterdist    /data/gluster-dist/distvol    root          gluster-poc-sj::glusterdist    gluster-poc-sj2    Active    History Crawl      N/A
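The session lifecycle used in the steps above, gathered in one place as a sketch (volume and host names are the ones from this setup; these commands only make sense on a live master node and cannot run standalone):

```shell
# Create the session (push-pem distributes the SSH keys set up earlier).
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist create push-pem force
# Start syncing and watch the worker state (Initializing -> Active/Passive).
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist start
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status
# Tear down when re-creating the session from scratch.
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist stop
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist delete
```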
yum install -y glusterfs-client
mkdir /distvol
mount -t glusterfs gluster-poc-noida:/glusterdist /distvol
Filesystem Type Size Used Avail Use% Mounted on
gluster-poc-noida:/glusterdist fuse.glusterfs 20G 9.6G 9.1G 52% /distvol
yum install -y glusterfs-client
mkdir /distvol
mount -t glusterfs gluster-poc-sj:/glusterdist /distvol
17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
But over the last three days it has synced only 5.4 GB to the slave:
17020_GPLV3.tar.gz
5.4G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
I have also tested with another file of only 1 GB copied from the master client, and it still shows 0 size at the slave client after 3 days.
/Krishna
*Sent:* Monday, September 3, 2018 3:17 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
I see no error in the shared logs. The only error messages I see are during geo-rep stop; that is expected.
Could you share the steps you used to create the geo-rep setup?
Thanks,
Kotresh HR
Hi Kotresh,
Below is the cat output of the gsyncd.log file generated on my master server. And I am using version 4.1.3 on all my gluster nodes.
glusterfs 4.1.3
replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.log
[2018-09-03 04:01:52.424609] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 04:01:52.526323] I [gsyncd(status):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-
replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:41.326411] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:49.676120] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:50.406042] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:56:52.847537] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:03.778448] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:25.86958] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:25.855273] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:58:09.294239] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:59:39.255487] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:59:39.355753] I [gsyncd(status):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-
replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:00:26.311767] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:03:29.205226] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:03:30.131258] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:10:34.679677] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:10:35.653928] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:26:24.438854] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:26:25.495117] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.159113] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.216475] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.932451] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.988286] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.992789] E [syncdutils(worker
/data/gluster-dist/distvol):305:log_raise_exception] <top>: connection to
peer is broken
[2018-09-03 07:27:26.994750] E [syncdutils(worker
/data/gluster-dist/distvol):801:errlog] Popen: command returned error
cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto
-S /tmp/gsyncd-aux-ssh-X8iHv1/86bcbaf188167a3859c3267081671312.sock
gluster-poc-sj /nonexistent/gsyncd slave glusterdist
gluster-poc-sj::glusterdist --master-node gluster-poc-noida
--master-node-id 098c16c6-8dff-490a-a2e8-c8cb328fcbb3 --master-brick
/data/gluster-dist/distvol --local-node gluster-poc-sj --local-node-id
e54f2759-4c56-40dd-89e1-e10c3037d48b --slave-timeout 120
--slave-log-level INFO --slave-gluster-log-level INFO
--slave-gluster-command-dir /usr/local/sbin/ error=255
[2018-09-03 07:27:26.994971] E [syncdutils(worker
/data/gluster-dist/distvol):805:logerr] Popen: ssh> Killed by signal 15.
[2018-09-03 07:27:27.7174] I [repce(agent /data/gluster-dist/distvol):80:service_loop]
RepceServer: terminating on reaching EOF.
[2018-09-03 07:27:27.15156] I [gsyncdstatus(monitor):244:set_worker_status]
GeorepStatus: Worker Status Change status=Faulty
Using session config file path=/var/lib/glusterd/geo-
replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:28.64521] I [subcmds(monitor-status):19:subcmd_monitor_status]
<top>: Monitor Status Change status=Stopped
[2018-09-03 07:27:35.345937] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:35.444247] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:36.181122] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:36.281459] I [gsyncd(monitor):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-
replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:39.782480] I [gsyncdstatus(monitor):244:set_worker_status]
GeorepStatus: Worker Status Change status=Initializing...
starting gsyncd worker brick=/data/gluster-dist/distvol
slave_node=gluster-poc-sj
[2018-09-03 07:27:40.376172] I [gsyncd(agent /data/gluster-dist/distvol):297:main]
<top>: Using session config file path=/var/lib/glusterd/geo-
replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:40.377144] I [changelogagent(agent
/data/gluster-dist/distvol):72:__init__] ChangelogAgent: Agent
listining...
[2018-09-03 07:27:40.378150] I [gsyncd(worker /data/gluster-dist/distvol):297:main]
<top>: Using session config file path=/var/lib/glusterd/geo-
replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:40.391185] I [resource(worker
/data/gluster-dist/distvol):1377:connect_remote] SSH: Initializing SSH
connection between master and slave...
[2018-09-03 07:27:43.752819] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:43.848619] I [gsyncd(status):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-
replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:45.365627] I [resource(worker
/data/gluster-dist/distvol):1424:connect_remote] SSH: SSH connection
between master and slave established. duration=4.9743
[2018-09-03 07:27:45.365866] I [resource(worker
/data/gluster-dist/distvol):1096:connect] GLUSTER: Mounting gluster
volume locally...
[2018-09-03 07:27:46.388974] I [resource(worker
/data/gluster-dist/distvol):1119:connect] GLUSTER: Mounted gluster
volume duration=1.0230
[2018-09-03 07:27:46.389206] I [subcmds(worker /data/gluster-dist/distvol):70:subcmd_worker]
<top>: Worker spawn successful. Acknowledging back to monitor
[2018-09-03 07:27:48.401196] I [master(worker /data/gluster-dist/distvol):1593:register]
_GMaster: Working dir path=/var/lib/misc/gluster/
gsyncd/glusterdist_gluster-poc-sj_glusterdist/data-gluster-dist-distvol
[2018-09-03 07:27:48.401477] I [resource(worker
/data/gluster-dist/distvol):1282:service_loop] GLUSTER: Register time
time=1535959668
[2018-09-03 07:27:49.176095] I [gsyncdstatus(worker
/data/gluster-dist/distvol):277:set_active] GeorepStatus: Worker Status
Change status=Active
[2018-09-03 07:27:49.177079] I [gsyncdstatus(worker
Crawl Status Change status=History Crawl
[2018-09-03 07:27:49.177339] I [master(worker /data/gluster-dist/distvol):1507:crawl]
_GMaster: starting history crawl turns=1 stime=(1535701378, 0)
entry_stime=(1535701378, 0) etime=1535959669
[2018-09-03 07:27:50.179210] I [master(worker /data/gluster-dist/distvol):1536:crawl]
_GMaster: slave's time stime=(1535701378, 0)
[2018-09-03 07:27:51.300096] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:51.399027] I [gsyncd(status):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-
replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:52.510271] I [master(worker /data/gluster-dist/distvol):1944:syncjob]
Syncer: Sync Time Taken duration=1.6146 num_files=1 job=2
return_code=0
[2018-09-03 07:27:52.514487] I [master(worker /data/gluster-dist/distvol):1374:process]
_GMaster: Entry Time Taken MKD=0 MKN=0 LIN=0 SYM=0 REN=1
RMD=0 CRE=0 duration=0.2745 UNL=0
[2018-09-03 07:27:52.514615] I [master(worker /data/gluster-dist/distvol):1384:process]
_GMaster: Data/Metadata Time Taken SETA=1 SETX=0
meta_duration=0.2691 data_duration=1.7883 DATA=1 XATT=0
[2018-09-03 07:27:52.514844] I [master(worker /data/gluster-dist/distvol):1394:process]
_GMaster: Batch Completed changelog_end=1535701379entry_stime=(1535701378,
0) changelog_start=1535701379 stime=(1535701378, 0)
duration=2.3353 num_changelogs=1 mode=history_changelog
[2018-09-03 07:27:52.515224] I [master(worker /data/gluster-dist/distvol):1552:crawl]
_GMaster: finished history crawl endtime=1535959662
stime=(1535701378, 0) entry_stime=(1535701378, 0)
[2018-09-03 07:28:01.706876] I [gsyncd(config-get):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-replication/glusterdist_
gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:28:01.803858] I [gsyncd(status):297:main] <top>: Using
session config file path=/var/lib/glusterd/geo-
replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:28:03.521949] I [master(worker /data/gluster-dist/distvol):1507:crawl]
_GMaster: starting history crawl turns=2 stime=(1535701378, 0)
entry_stime=(1535701378, 0) etime=1535959683
[2018-09-03 07:28:03.523086] I [master(worker /data/gluster-dist/distvol):1552:crawl]
_GMaster: finished history crawl endtime=1535959677
stime=(1535701378, 0) entry_stime=(1535701378, 0)
[2018-09-03 07:28:04.62274] I [gsyncdstatus(worker
Crawl Status Change status=Changelog Crawl
*Sent:* Monday, September 3, 2018 12:44 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
The log is not complete. If you are re-trying, could you please try it out
on 4.1.3 and share the logs.
Thanks,
Kotresh HR
Hi Kotresh,
Please find the log files attached.
Request you to please have a look.
/Krishna
*Sent:* Monday, September 3, 2018 10:19 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
Indexing is the feature used by Hybrid crawl which only makes crawl
faster. It has nothing to do with missing data sync.
Could you please share the complete log file of the session where the
issue is encountered ?
Thanks,
Kotresh HR
Hi Kotresh/Support,
Request your help to get this fixed. My slave is not getting synced with the master. Only when I restart the session after turning indexing off does it show the file at the slave, but even that is blank, with zero size.
At master: file size is 5.8 GB.
17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
But at the slave, after turning "indexing off", restarting the session, and then waiting for 2 days, it shows only 4.9 GB copied.
17020_GPLV3.tar.gz
4.9G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
Similarly, I tested with a small file of only 1.2 GB, which is still showing "0" size at the slave after days of waiting.
1.2G rflowTestInt18.08-b001.t.Z
0 rflowTestInt18.08-b001.t.Z
Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
Please help to fix this; I believe this is not normal behavior for gluster's rsync.
/Krishna
*From:* Krishna Verma
*Sent:* Friday, August 31, 2018 12:42 PM
*Subject:* RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Hi Kotresh,
I have tested geo-replication over distributed volumes with a 2*2 gluster setup.
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status
MASTER NODE          MASTER VOL     MASTER BRICK                  SLAVE USER    SLAVE                          SLAVE NODE         STATUS    CRAWL STATUS       LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterdist    /data/gluster-dist/distvol    root          gluster-poc-sj::glusterdist    gluster-poc-sj     Active    Changelog Crawl    2018-08-31 10:28:19
noi-poc-gluster      glusterdist    /data/gluster-dist/distvol    root          gluster-poc-sj::glusterdist    gluster-poc-sj2    Active    History Crawl      N/A
Now at the client I copied an 848MB file from local disk to the master mounted volume, and it took only 1 minute and 15 seconds. That's great.
But even after waiting for 2 hrs I was unable to see that file at the slave site. Then I again erased the indexing by running "gluster volume set glusterdist indexing off" and restarted the session. Magically, I received the file at the slave instantly after doing this.
Why do I need to do "indexing off" every time for data to reflect at the slave site? Is there any fix/workaround for it?
/Krishna
*Sent:* Friday, August 31, 2018 10:10 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Kotresh,
Yes, this includes the time taken to write the 1GB file to the master. Geo-rep was not stopped while the data was copying to the master.
This way, you can't really measure how much time geo-rep took.
But now I am in trouble. My PuTTY session timed out while copying data to the master while geo-replication was active. After I restarted the PuTTY session, my master data is not syncing with the slave. Its LAST_SYNCED time is 1 hr behind the current time.
I restarted geo-rep and also deleted and re-created the session, but its "LAST_SYNCED" time is the same.
Unless geo-rep is Faulty, it would be processing/syncing. You should check the logs for any errors.
Please help in this.

"It's better if the gluster volume has a higher distribute count like 3*3 or 4*3" :- Are you referring to creating a distributed volume with 3 master nodes and 3 slave nodes?
Yes, that's correct. Please do the test with this. I recommend you run the actual workload for which you are planning to use gluster instead of copying a 1GB file and testing.
/krishna
*Sent:* Thursday, August 30, 2018 3:20 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Kotresh,
After fixing the library link on node "noi-poc-gluster", the status of one master node is "Active" and the other is "Passive". Can I set up both masters as "Active"?
Nope, since it's a replica, it's redundant to sync the same files from two nodes. Both replicas can't be Active.
Also, when I copy a 1GB file from the gluster client to the master gluster volume, which is geo-replicated to the slave volume, it took 35 minutes and 49 seconds. Is there any way to reduce the time taken to rsync the data?
How did you measure this time? Does this include the time taken for you to write the 1GB file to the master?
There are two aspects to consider while measuring this.
1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.
In your case, since the setup is 1*2 and only one geo-rep worker is Active, step 2 above equals the time for step 1 plus the network transfer time.
You can measure time in two scenarios
1. If geo-rep is started while the data is still being written to master. It's one way.
2. Or stop geo-rep until the 1GB file is written to master and then start
geo-rep to get actual geo-rep time.
To improve replicating speed,
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep as it also uses rsync internally.
2. It's better if gluster volume has more distribute count like 3*3 or 4*3
It will help in two ways.
1. The files gets distributed on master to multiple bricks
2. So above will help geo-rep as files on multiple bricks are
synced in parallel (multiple Actives)
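To act on suggestion 1 above, candidate rsync flags can first be exercised standalone between two local directories before wiring them into geo-rep. This is a minimal sketch; the flag set and the commented config command are illustrative assumptions, not recommendations from this thread:

```shell
# Try candidate rsync flags on a local copy first, using a throwaway
# source/destination pair, then apply the same set to geo-rep.
src=$(mktemp -d); dst=$(mktemp -d)
dd if=/dev/zero of="$src/sample.img" bs=1M count=8 2>/dev/null

# -a archive mode, -z compress in transit, --inplace avoids temp-file renames
rsync -az --inplace "$src/" "$dst/"
ls -l "$dst/sample.img"

# Once satisfied, the same flags could be handed to geo-rep, e.g.
# (hypothetical values; check `gluster volume geo-replication ... config`
# on your version for the exact option name):
#   gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist \
#       config rsync-options "-z --inplace"
rm -rf "$src" "$dst"
```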
NOTE: The Gluster master server and one client are in Noida, India. The Gluster slave server and one client are in the USA.
Our goal is for data transferred from the Noida gluster client to reach the USA gluster client in minimum time. Please suggest the best approach to achieve this.
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)
Is this I/O time to write to master volume?
sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                              SLAVE NODE        STATUS     CRAWL STATUS       LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Active     Changelog Crawl    2018-08-30 13:42:18
noi-poc-gluster      glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Passive    N/A                N/A
Thanks in advance for your all time support.
/Krishna
*Sent:* Thursday, August 30, 2018 10:51 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log of this node if it's a different issue.
-Kotresh HR
Hi Kotresh,
Thank you so much for your input. Geo-replication is now showing "Active" for at least 1 master node. But it is still in the Faulty state for the 2nd master server.
Below is the detail.
gluster volume geo-replication glusterep gluster-poc-sj::glusterep status
MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE        STATUS    CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    gluster-poc-sj    Active    Changelog Crawl    2018-08-29 23:56:06
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A               Faulty    N/A                N/A
Status of volume: glusterep
Gluster process                              TCP Port    RDMA Port    Online    Pid
------------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0    49152       0            Y         22463
Brick noi-poc-gluster:/data/gluster/gv0      49152       0            Y         19471
Self-heal Daemon on localhost                N/A         N/A          Y         32087
Self-heal Daemon on noi-poc-gluster          N/A         N/A          Y         6272

Task Status of Volume glusterep
------------------------------------------------------------------------------------
There are no active volume tasks
Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
Could you please help me with that also? It would really be a great help.
/Krishna
*Sent:* Wednesday, August 29, 2018 10:47 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Answer inline
Hi Kotresh,
I created the links before. Below is the detail.
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so ->
/usr/lib64/libgfchangelog.so.1
The link created is pointing to the wrong library. Please fix it:
#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
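The same dev-symlink pattern can be sanity-checked in a scratch directory before touching /usr/lib64. File names mirror the ones above; the scratch copy is only there to verify that the link resolves the way the geo-rep changelog agent expects when it loads "libgfchangelog.so":

```shell
# Demonstrate the libgfchangelog symlink layout in a scratch dir:
# the versioned real file, the .so.0 runtime link, and the .so dev link.
d=$(mktemp -d)
touch "$d/libgfchangelog.so.0.0.1"                    # stand-in for the real library
ln -s libgfchangelog.so.0.0.1 "$d/libgfchangelog.so.0"
ln -s libgfchangelog.so.0.0.1 "$d/libgfchangelog.so"

# Both links must resolve to the versioned file.
readlink -f "$d/libgfchangelog.so"
ls -l "$d" | grep libgfchangelog
rm -rf "$d"
```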
lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 ->
libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
Does it look like what we need, or do I need to create any more links? And how do I get the "libgfchangelog.so" file if it is missing?
/Krishna
*Sent:* Tuesday, August 28, 2018 4:22 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so" which
is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is
present in "/usr/lib64/".
If not, create a symlink similar to "libgfchangelog.so.0".
It should be something like below.
#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
Hi Kotresh,
Thanks for the response, I did that also but nothing changed.
libgfchangelog.so.0 (libc6,x86-64) =>
/usr/lib64/libgfchangelog.so.0
gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER
SLAVE SLAVE NODE STATUS CRAWL STATUS
LAST_SYNCED
------------------------------------------------------------
------------------------------------------------------------
-----------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty
N/A N/A
/Krishna
*Sent:* Tuesday, August 28, 2018 4:00 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
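The check above can be wrapped in a tiny helper. This assumes the GNU `ldconfig -p` output format (`name (flags) => path`); the function simply filters the cache listing.

```shell
#!/bin/sh
# Helper for the check above: read `ldconfig -p` output on stdin and print
# the resolved path of every libgfchangelog entry (no output = not cached).
find_changelog_lib() {
    awk '/libgfchangelog/ {print $NF}'
}
# Real usage on a gluster node: ldconfig -p | find_changelog_lib
```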
Post by Krishna Verma
Hi Sunny,
I made the changes given in the patch and restarted the geo-replication
session, but I get the same errors in the logs again.
Post by Krishna Verma
I am attaching the config files and logs here.
gluster-poc-sj::glusterep stop
Post by Krishna Verma
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep delete
Post by Krishna Verma
Deleting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep create push-pem force
Post by Krishna Verma
Creating geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
Post by Krishna Verma
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
gluster-poc-sj::glusterep start
Post by Krishna Verma
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
syncdaemon/repce.py
gluster-poc-sj::glusterep start
Post by Krishna Verma
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
Post by Krishna Verma
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER
SLAVE SLAVE NODE STATUS CRAWL STATUS
LAST_SYNCED
Post by Krishna Verma
------------------------------------------------------------
------------------------------------------------------------
-----------------------------
Post by Krishna Verma
gluster-poc-noida glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
Post by Krishna Verma
noi-poc-gluster glusterep /data/gluster/gv0 root
gluster-poc-sj::glusterep N/A Faulty N/A N/A
Post by Krishna Verma
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Post by Krishna Verma
EXTERNAL MAIL
With the same log message?
Can you please verify that
https://review.gluster.org/#/c/glusterfs/+/20207/ patch is present if not
can you please apply that.
Post by Krishna Verma
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.
Post by Krishna Verma
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response, I tried both, but still I am getting the
same error.
Post by Krishna Verma
Post by Krishna Verma
/usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
/usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solve your problem you can do a manual symlink
like - ln -s /usr/lib64/libgfchangelog.so.1
/usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.446785] E [repce(worker
/data/gluster/gv0):197:__call__] RepceClient: call failed
call=26469:139794524604224:1535440781.44 method=init
error=OSError
Post by Krishna Verma
Post by Krishna Verma
Post by Krishna Verma
[2018-08-28 07:19:41.447041] E [syncdutils(worker
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line
216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line
198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.457555] I [repce(agent
/data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching
EOF.
Post by Krishna Verma
Post by Krishna Verma
Post by Krishna Verma
[2018-08-28 07:19:42.440184] I [monitor(monitor):272:monitor]
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What can I do to fix it?
/Krish
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
Krishna Verma
2018-09-06 08:11:53 UTC
Permalink
Hi Kotresh,

Yes, it was replicated at the slave, but it is too slow: it took 2 days to replicate a 1.2GB file from master to slave.

I think that should not be normal.

Is there anything I can do to improve its performance?
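For context on how slow that is, the effective rate can be worked out directly (assuming binary units and a full 48 hours, both assumptions):

```shell
# Rough effective sync rate for 1.2 GB in 2 days (sizes assumed binary):
awk 'BEGIN {
    bytes = 1.2 * 1024 * 1024 * 1024   # 1.2 GiB
    secs  = 2 * 24 * 3600              # 2 days
    printf "%.1f KiB/s\n", bytes / secs / 1024
}'
```

A few KiB/s is far below what even a slow WAN link sustains, which suggests the bottleneck is not raw bandwidth.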

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Thursday, September 6, 2018 1:21 PM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Could you append something to this file and check whether it gets synced now?

On Thu, Sep 6, 2018 at 9:08 AM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh,

Did you get a chance to look into this?

For the replicated gluster volume, the master is still not syncing with the slave.

At Master:
[***@gluster-poc-noida ~]# du -sh /repvol/rflowTestInt18.08-b001.t.Z
1.2G /repvol/rflowTestInt18.08-b001.t.Z
[***@gluster-poc-noida ~]#

At Slave:
[***@gluster-poc-sj ~]# du -sh /repvol/rflowTestInt18.08-b001.t.Z
du: cannot access ‘/repvol/rflowTestInt18.08-b001.t.Z’: No such file or directory
[***@gluster-poc-sj ~]#

The file has not reached the slave.

/Krishna

From: Krishna Verma
Sent: Monday, September 3, 2018 4:41 PM
To: 'Kotresh Hiremath Ravishankar' <***@redhat.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Kotresh,

Gluster master site servers: gluster-poc-noida and noi-poc-gluster
Gluster slave site servers: gluster-poc-sj and gluster-poc-sj2

Master client: noi-foreman02
Slave client: sj-kverma

Step 1: Create a 10 GB LVM partition on all 4 gluster nodes (2 master, 2 slave), format it as ext4, and mount it on each server.

[***@gluster-poc-noida distvol]# df -hT /data/gluster-dist
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/centos-gluster--vol--dist ext4 9.8G 847M 8.4G 9% /data/gluster-dist
[***@gluster-poc-noida distvol]#

Step 2: Create a trusted storage pool as below:

At Master:
[***@gluster-poc-noida distvol]# gluster peer status
Number of Peers: 1

Hostname: noi-poc-gluster
Uuid: 01316459-b5c8-461d-ad25-acc17a82e78f
State: Peer in Cluster (Connected)
[***@gluster-poc-noida distvol]#

At Slave:
[***@gluster-poc-sj ~]# gluster peer status
Number of Peers: 1

Hostname: gluster-poc-sj2
Uuid: 6ba85bfe-cd74-4a76-a623-db687f7136fa
State: Peer in Cluster (Connected)
[***@gluster-poc-sj ~]#

Step 3: Create the distributed volumes as below:

At Master: “gluster volume create glusterdist gluster-poc-noida:/data/gluster-dist/distvol noi-poc-gluster:/data/gluster-dist/distvol”

[***@gluster-poc-noida distvol]# gluster volume info glusterdist

Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
[***@gluster-poc-noida distvol]#

At Slave: “gluster volume create glusterdist gluster-poc-sj:/data/gluster-dist/distvol gluster-poc-sj2:/data/gluster-dist/distvol”

Volume Name: glusterdist
Type: Distribute
Volume ID: a982da53-a3d7-4b5a-be77-df85f584610d
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-sj:/data/gluster-dist/distvol
Brick2: gluster-poc-sj2:/data/gluster-dist/distvol
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

Step 4: Gluster geo-replication configuration

On all gluster nodes: “yum install glusterfs-geo-replication.x86_64”
On the master node where I created the session:
ssh-keygen
ssh-copy-id ***@gluster-poc-sj
cp /root/.ssh/id_rsa.pub /var/lib/glusterd/geo-replication/secret.pem.pub
scp /var/lib/glusterd/geo-replication/secret.pem* ***@gluster-poc-sj:/var/lib/glusterd/geo-replication/

On the slave node:

ln -s /usr/libexec/glusterfs/gsyncd /nonexistent/gsyncd

On the master node:

gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist create push-pem force
gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist start

[***@gluster-poc-noida distvol]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj Active Changelog Crawl 2018-08-31 13:12:58
noi-poc-gluster glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj2 Active History Crawl N/A
[***@gluster-poc-noida distvol]#

On the gluster client node at the master site:

yum install -y glusterfs-client
mkdir /distvol
mount -t glusterfs gluster-poc-noida:/glusterdist /distvol
[***@noi-foreman02 ~]# df -hT /distvol
Filesystem Type Size Used Avail Use% Mounted on
gluster-poc-noida:/glusterdist fuse.glusterfs 20G 9.6G 9.1G 52% /distvol
[***@noi-foreman02 ~]#

On the gluster client at the slave site:
yum install -y glusterfs-client
mkdir /distvol
mount -t glusterfs gluster-poc-sj:/glusterdist /distvol

Now, to test the geo-replication setup:

I copied the below file from the client at the master site:
[***@noi-foreman02 distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@noi-foreman02 distvol]#

But over the last three days it has synced only 5.4GB to the slave:
[***@sj-kverma distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.4G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@sj-kverma distvol]#

I have also tested another file of only 1 GB copied from the master client, and it still shows 0 size at the slave client after 3 days.

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Monday, September 3, 2018 3:17 PM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
I see no errors in the shared logs. The only error messages I see are during geo-rep stop. That is expected.
Could you share the steps you used to create the geo-rep setup?
Thanks,
Kotresh HR

On Mon, Sep 3, 2018 at 1:02 PM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh,

Below is the output of the gsyncd.log file generated on my master server.

I am using version 4.1.3 on all my gluster nodes.
[***@gluster-poc-noida distvol]# gluster --version | grep glusterfs
glusterfs 4.1.3

[***@gluster-poc-noida distvol]# cat /var/log/glusterfs/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.log
[2018-09-03 04:01:52.424609] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 04:01:52.526323] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:41.326411] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:49.676120] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:55:50.406042] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:56:52.847537] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:03.778448] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:25.86958] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:57:25.855273] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:58:09.294239] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:59:39.255487] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 06:59:39.355753] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:00:26.311767] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:03:29.205226] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:03:30.131258] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:10:34.679677] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:10:35.653928] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:26:24.438854] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:26:25.495117] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.159113] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.216475] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.932451] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.988286] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:26.992789] E [syncdutils(worker /data/gluster-dist/distvol):305:log_raise_exception] <top>: connection to peer is broken
[2018-09-03 07:27:26.994750] E [syncdutils(worker /data/gluster-dist/distvol):801:errlog] Popen: command returned error cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-X8iHv1/86bcbaf188167a3859c3267081671312.sock gluster-poc-sj /nonexistent/gsyncd slave glusterdist gluster-poc-sj::glusterdist --master-node gluster-poc-noida --master-node-id 098c16c6-8dff-490a-a2e8-c8cb328fcbb3 --master-brick /data/gluster-dist/distvol --local-node gluster-poc-sj --local-node-id e54f2759-4c56-40dd-89e1-e10c3037d48b --slave-timeout 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-gluster-command-dir /usr/local/sbin/ error=255
[2018-09-03 07:27:26.994971] E [syncdutils(worker /data/gluster-dist/distvol):805:logerr] Popen: ssh> Killed by signal 15.
[2018-09-03 07:27:27.7174] I [repce(agent /data/gluster-dist/distvol):80:service_loop] RepceServer: terminating on reaching EOF.
[2018-09-03 07:27:27.15156] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Faulty
[2018-09-03 07:27:28.52725] I [gsyncd(monitor-status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:28.64521] I [subcmds(monitor-status):19:subcmd_monitor_status] <top>: Monitor Status Change status=Stopped
[2018-09-03 07:27:35.345937] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:35.444247] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:36.181122] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:36.281459] I [gsyncd(monitor):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:39.782480] I [gsyncdstatus(monitor):244:set_worker_status] GeorepStatus: Worker Status Change status=Initializing...
[2018-09-03 07:27:40.321157] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker brick=/data/gluster-dist/distvol slave_node=gluster-poc-sj
[2018-09-03 07:27:40.376172] I [gsyncd(agent /data/gluster-dist/distvol):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:40.377144] I [changelogagent(agent /data/gluster-dist/distvol):72:__init__] ChangelogAgent: Agent listining...
[2018-09-03 07:27:40.378150] I [gsyncd(worker /data/gluster-dist/distvol):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:40.391185] I [resource(worker /data/gluster-dist/distvol):1377:connect_remote] SSH: Initializing SSH connection between master and slave...
[2018-09-03 07:27:43.752819] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:43.848619] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:45.365627] I [resource(worker /data/gluster-dist/distvol):1424:connect_remote] SSH: SSH connection between master and slave established. duration=4.9743
[2018-09-03 07:27:45.365866] I [resource(worker /data/gluster-dist/distvol):1096:connect] GLUSTER: Mounting gluster volume locally...
[2018-09-03 07:27:46.388974] I [resource(worker /data/gluster-dist/distvol):1119:connect] GLUSTER: Mounted gluster volume duration=1.0230
[2018-09-03 07:27:46.389206] I [subcmds(worker /data/gluster-dist/distvol):70:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2018-09-03 07:27:48.401196] I [master(worker /data/gluster-dist/distvol):1593:register] _GMaster: Working dir path=/var/lib/misc/gluster/gsyncd/glusterdist_gluster-poc-sj_glusterdist/data-gluster-dist-distvol
[2018-09-03 07:27:48.401477] I [resource(worker /data/gluster-dist/distvol):1282:service_loop] GLUSTER: Register time time=1535959668
[2018-09-03 07:27:49.176095] I [gsyncdstatus(worker /data/gluster-dist/distvol):277:set_active] GeorepStatus: Worker Status Change status=Active
[2018-09-03 07:27:49.177079] I [gsyncdstatus(worker /data/gluster-dist/distvol):249:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl
[2018-09-03 07:27:49.177339] I [master(worker /data/gluster-dist/distvol):1507:crawl] _GMaster: starting history crawl turns=1 stime=(1535701378, 0) entry_stime=(1535701378, 0) etime=1535959669
[2018-09-03 07:27:50.179210] I [master(worker /data/gluster-dist/distvol):1536:crawl] _GMaster: slave's time stime=(1535701378, 0)
[2018-09-03 07:27:51.300096] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:51.399027] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:27:52.510271] I [master(worker /data/gluster-dist/distvol):1944:syncjob] Syncer: Sync Time Taken duration=1.6146 num_files=1 job=2 return_code=0
[2018-09-03 07:27:52.514487] I [master(worker /data/gluster-dist/distvol):1374:process] _GMaster: Entry Time Taken MKD=0 MKN=0 LIN=0 SYM=0 REN=1 RMD=0 CRE=0 duration=0.2745 UNL=0
[2018-09-03 07:27:52.514615] I [master(worker /data/gluster-dist/distvol):1384:process] _GMaster: Data/Metadata Time Taken SETA=1 SETX=0 meta_duration=0.2691 data_duration=1.7883 DATA=1 XATT=0
[2018-09-03 07:27:52.514844] I [master(worker /data/gluster-dist/distvol):1394:process] _GMaster: Batch Completed changelog_end=1535701379entry_stime=(1535701378, 0) changelog_start=1535701379 stime=(1535701378, 0) duration=2.3353 num_changelogs=1 mode=history_changelog
[2018-09-03 07:27:52.515224] I [master(worker /data/gluster-dist/distvol):1552:crawl] _GMaster: finished history crawl endtime=1535959662 stime=(1535701378, 0) entry_stime=(1535701378, 0)
[2018-09-03 07:28:01.706876] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:28:01.803858] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/glusterdist_gluster-poc-sj_glusterdist/gsyncd.conf
[2018-09-03 07:28:03.521949] I [master(worker /data/gluster-dist/distvol):1507:crawl] _GMaster: starting history crawl turns=2 stime=(1535701378, 0) entry_stime=(1535701378, 0) etime=1535959683
[2018-09-03 07:28:03.523086] I [master(worker /data/gluster-dist/distvol):1552:crawl] _GMaster: finished history crawl endtime=1535959677 stime=(1535701378, 0) entry_stime=(1535701378, 0)
[2018-09-03 07:28:04.62274] I [gsyncdstatus(worker /data/gluster-dist/distvol):249:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=Changelog Crawl
[***@gluster-poc-noida distvol]#

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Monday, September 3, 2018 12:44 PM

To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
The log is not complete. If you are re-trying, could you please try it out on 4.1.3 and share the logs.
Thanks,
Kotresh HR

On Mon, Sep 3, 2018 at 12:42 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Please find the log files attached.

Request you to please have a look.

/Krishna



From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Monday, September 3, 2018 10:19 AM

To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
Indexing is the feature used by hybrid crawl, which only makes the crawl faster. It has nothing to do with missing data sync.
Could you please share the complete log file of the session where the issue is encountered?
Thanks,
Kotresh HR

On Mon, Sep 3, 2018 at 9:33 AM, Krishna Verma <***@cadence.com> wrote:
Hi Kotresh/Support,

Request your help to get this fixed. My slave is not syncing with the master. Only when I restart the session after setting indexing off does the file show at the slave, and even then it is blank with zero size.

At master: the file size is 5.8 GB.

[***@gluster-poc-noida distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
5.8G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@gluster-poc-noida distvol]#

But at the slave, after setting “indexing off”, restarting the session, and then waiting for 2 days, it shows only 4.9 GB copied.

[***@gluster-poc-sj distvol]# du -sh 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
4.9G 17.10.v001.20171023-201021_17020_GPLV3.tar.gz
[***@gluster-poc-sj distvol]#

Similarly, I tested a small file of only 1.2 GB, and it is still showing “0” size at the slave after days of waiting.

At Master:

[***@gluster-poc-noida distvol]# du -sh rflowTestInt18.08-b001.t.Z
1.2G rflowTestInt18.08-b001.t.Z
[***@gluster-poc-noida distvol]#

At Slave:

[***@gluster-poc-sj distvol]# du -sh rflowTestInt18.08-b001.t.Z
0 rflowTestInt18.08-b001.t.Z
[***@gluster-poc-sj distvol]#

Below is my distributed volume info:

[***@gluster-poc-noida distvol]# gluster volume info glusterdist

Volume Name: glusterdist
Type: Distribute
Volume ID: af5b2915-7170-4b5e-aee8-7e68757b9bf1
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster-dist/distvol
Brick2: noi-poc-gluster:/data/gluster-dist/distvol
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
transport.address-family: inet
nfs.disable: on
[***@gluster-poc-noida distvol]#

Please help to fix this; I believe this is not the normal behavior of gluster sync.

/Krishna
From: Krishna Verma
Sent: Friday, August 31, 2018 12:42 PM
To: 'Kotresh Hiremath Ravishankar' <***@redhat.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: RE: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

Hi Kotresh,

I have tested geo-replication over distributed volumes with a 2*2 gluster setup.

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterdist gluster-poc-sj::glusterdist status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj Active Changelog Crawl 2018-08-31 10:28:19
noi-poc-gluster glusterdist /data/gluster-dist/distvol root gluster-poc-sj::glusterdist gluster-poc-sj2 Active History Crawl N/A
[***@gluster-poc-noida ~]#

Now at the client I copied an 848MB file from the local disk to the master mounted volume, and it took only 1 minute and 15 seconds. That is great.

But even after waiting for 2 hrs I was unable to see that file at the slave site. Then I again erased the indexing by doing “gluster volume set glusterdist indexing off” and restarted the session. Magically, I received the file instantly at the slave after doing this.

Why do I need to do “indexing off” every time for data to reflect at the slave site? Is there any fix/workaround for it?

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com>
Sent: Friday, August 31, 2018 10:10 AM
To: Krishna Verma <***@cadence.com>
Cc: Sunny Kumar <***@redhat.com>; Gluster Users <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL

On Thu, Aug 30, 2018 at 3:51 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Yes, this include the time take to write 1GB file to master. geo-rep was not stopped while the data was copying to master.

This way, you can't really measure how much time geo-rep took.


But now I am in trouble. My PuTTY session timed out while data was being copied to master and geo-replication was active. After I restarted the PuTTY session, my master data is not syncing with the slave. Its LAST_SYNCED time is 1 hour behind the current time.

I restarted geo-rep and also deleted and re-created the session, but its “LAST_SYNCED” time stays the same.

Unless geo-rep is Faulty, it would be processing/syncing. You should check the logs for any errors.


Please help in this.


“It's better if gluster volume has more distribute count like 3*3 or 4*3” :- Are you referring to creating a distributed volume with 3 master nodes and 3 slave nodes?

Yes, that's correct. Please do the test with this. I recommend you run the actual workload for which you are planning to use gluster, instead of copying a 1GB file and testing.



/krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Thursday, August 30, 2018 3:20 PM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL


On Thu, Aug 30, 2018 at 1:52 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

After fixing the library link on node "noi-poc-gluster", the status of one master node is “Active” and the other is “Passive”. Can I set up both masters as “Active”?

Nope, since it's a replica, it's redundant to sync the same files from two nodes. Both replicas can't be Active.


Also, when I copy a 1GB file from a gluster client to the master gluster volume, which is replicated to the slave volume, it took 35 minutes and 49 seconds. Is there any way to reduce the time taken to rsync the data?

How did you measure this time? Does this include the time taken for you to write the 1GB file to master?
There are two aspects to consider while measuring this.

1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.

In your case, since the setup is 1*2 and only one geo-rep worker is Active, step 2 above equals the time for step 1 plus the network transfer time.

You can measure the time in two scenarios:
1. Geo-rep is started while the data is still being written to master. That's one way.
2. Or stop geo-rep until the 1GB file is written to master, then start geo-rep to get the actual geo-rep time.
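Scenario 2 above separates the two durations cleanly. A minimal shell sketch of timing the master-write step on its own, using local temp files as a stand-in for the real master mount (paths and sizes here are illustrative, not from the original setup):

```shell
# Sketch: time the "write to master" step on its own, using local temp
# files as stand-ins for the master mounted volume.
src=$(mktemp) && dst=$(mktemp)
dd if=/dev/zero of="$src" bs=1024 count=1024 2>/dev/null   # ~1 MiB stand-in

t0=$(date +%s)
cp "$src" "$dst"               # step 1: the write into the master volume
t1=$(date +%s)
write_secs=$((t1 - t0))
echo "write took ${write_secs}s"

# Step 2 (geo-rep transfer time) is then observed separately, e.g. by
# watching LAST_SYNCED advance in:
#   gluster volume geo-replication <mastervol> <slavehost>::<slavevol> status
rm -f "$src" "$dst"
```

The same `date +%s` bracketing around the real rsync to the master mount gives step 1; the gap until LAST_SYNCED catches up gives step 2.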

To improve replication speed:
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep, as it also uses rsync internally.
2. It's better if the gluster volume has a higher distribute count, like 3*3 or 4*3.
It will help in two ways.
1. The files get distributed on the master across multiple bricks
2. That in turn helps geo-rep, as files on multiple bricks are synced in parallel (multiple Actives)

NOTE: Gluster master server and one client is in Noida, India Location.
Gluster Slave server and one client is in USA.

Our goal is for data written by the Noida gluster client to reach the USA gluster client in minimum time. Please suggest the best approach to achieve this.

[***@noi-dcops ~]# date ; rsync -avh --progress /tmp/gentoo_root.img /glusterfs/ ; date
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)

Is this I/O time to write to master volume?

sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
[***@noi-dcops ~]#



[***@gluster-poc-noida gluster]# gluster volume geo-replication status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-30 13:42:18
noi-poc-gluster glusterep /data/gluster/gv0 root ssh://gluster-poc-sj::glusterep gluster-poc-sj Passive N/A N/A
[***@gluster-poc-noida gluster]#

Thanks in advance for your all time support.

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Thursday, August 30, 2018 10:51 AM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster" as well?
If not, please fix it. Please share the geo-rep log from this node
if it's a different issue.
-Kotresh HR

On Thu, Aug 30, 2018 at 12:17 AM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Thank you so much for your input. Geo-replication is now showing “Active” for at least one master node. But it is still in the Faulty state for the 2nd master server.

Below is the detail.

[***@gluster-poc-noida glusterfs]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep gluster-poc-sj Active Changelog Crawl 2018-08-29 23:56:06
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A


[***@gluster-poc-noida glusterfs]# gluster volume status
Status of volume: glusterep
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0 49152 0 Y 22463
Brick noi-poc-gluster:/data/gluster/gv0 49152 0 Y 19471
Self-heal Daemon on localhost N/A N/A Y 32087
Self-heal Daemon on noi-poc-gluster N/A N/A Y 6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks



[***@gluster-poc-noida glusterfs]# gluster volume info

Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
[***@gluster-poc-noida glusterfs]#

Could you please help me with that as well?

It would really be a great help.

/Krishna
From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Wednesday, August 29, 2018 10:47 AM

To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Answer inline

On Tue, Aug 28, 2018 at 4:28 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

I created the links before. Below is the detail.

[***@gluster-poc-noida ~]# ls -l /usr/lib64 | grep libgfch
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1

The link created points to the wrong library. Please fix it:

#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
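The three commands above can be rehearsed safely in a scratch directory first — a sketch using a zero-byte stand-in for the real library; the actual fix must of course be run as root against /usr/lib64:

```shell
# Rehearse the symlink fix in a throwaway directory; the real fix
# replaces the bad link in /usr/lib64 exactly the same way.
d=$(mktemp -d)
touch "$d/libgfchangelog.so.0.0.1"        # stand-in for the versioned library
ln -s libgfchangelog.so.0.0.1 "$d/libgfchangelog.so.0"
ln -s libgfchangelog.so.0.0.1 "$d/libgfchangelog.so"

# Both links must resolve to the versioned file (relative target, as above).
link_target=$(readlink "$d/libgfchangelog.so")
echo "libgfchangelog.so -> $link_target"
test -e "$d/libgfchangelog.so" && echo "link resolves"
rm -rf "$d"
```

Note the target is relative ("libgfchangelog.so.0.0.1", not "/usr/lib64/libgfchangelog.so.1"): the earlier link failed both because it was absolute to a file that does not exist and because ".so.1" is the wrong version.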

lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]# locate libgfchangelog.so
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
[***@gluster-poc-noida ~]#

Does this look like what we need, or do I need to create more links? And how do I get the “libgfchangelog.so” file if it is missing?

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Tuesday, August 28, 2018 4:22 PM
To: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>
Cc: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so", which is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is present in "/usr/lib64/".
If not, create a symlink similar to "libgfchangelog.so.0".

It should be something like below.

#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
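The unversioned name matters because the geo-rep Python daemon loads the library by that literal name at runtime (a dlopen-style lookup). A small sketch of the same behaviour, using the glibc math library as a stand-in since libgfchangelog may not be installed on the machine running this:

```shell
# dlopen resolves the literal name it is given: a name present on disk
# loads; an absent name raises the same OSError seen in gsyncd.log.
python3 - <<'PY'
import ctypes
ctypes.CDLL("libm.so.6")                 # present on disk: loads fine
print("libm.so.6 loaded")
try:
    ctypes.CDLL("libno-such-library.so") # absent: the gsyncd.log failure
except OSError as exc:
    print("OSError:", exc)
PY
```

So once the `libgfchangelog.so` symlink exists and the loader cache is refreshed (`ldconfig /usr/lib64`), the daemon's lookup succeeds.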

On Tue, Aug 28, 2018 at 4:04 PM, Krishna Verma <***@cadence.com<mailto:***@cadence.com>> wrote:
Hi Kotresh,

Thanks for the response. I did that also, but nothing changed.

[***@gluster-poc-noida ~]# ldconfig /usr/lib64
[***@gluster-poc-noida ~]# ldconfig -p | grep libgfchangelog
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
[***@gluster-poc-noida ~]#

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful

[***@gluster-poc-noida ~]# gluster volume geo-replication glusterep gluster-poc-sj::glusterep status

MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
[***@gluster-poc-noida ~]#

/Krishna

From: Kotresh Hiremath Ravishankar <***@redhat.com<mailto:***@redhat.com>>
Sent: Tuesday, August 28, 2018 4:00 PM
To: Sunny Kumar <***@redhat.com<mailto:***@redhat.com>>
Cc: Krishna Verma <***@cadence.com<mailto:***@cadence.com>>; Gluster Users <gluster-***@gluster.org<mailto:gluster-***@gluster.org>>

Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work

EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog


On Tue, Aug 28, 2018 at 3:52 PM, Sunny Kumar <***@redhat.com<mailto:***@redhat.com>> wrote:
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I made the changes given in the patch and restarted the geo-replication session, but I get the same errors in the logs again.
I am attaching the config files and logs here.
Stopping geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Deleting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
Creating geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
Starting geo-replication session between glusterep & gluster-poc-sj::glusterep has been successful
MASTER NODE MASTER VOL MASTER BRICK SLAVE USER SLAVE SLAVE NODE STATUS CRAWL STATUS LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
noi-poc-gluster glusterep /data/gluster/gv0 root gluster-poc-sj::glusterep N/A Faulty N/A N/A
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
With the same log message?
Can you please verify that
https://review.gluster.org/#/c/glusterfs/+/20207/ patch is present; if not, can you please apply it.
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so.
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response, I tried both, but still I am getting the same error.
/usr/lib64/libgfchangelog.so lrwxrwxrwx. 1 root root 30 Aug 28 14:59
/usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solve your problem you can do a manual symlink
like - ln -s /usr/lib64/libgfchangelog.so.1
/usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
Post by Krishna Verma
Hi
I am getting below error in gsyncd.log
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.446785] E [repce(worker /data/gluster/gv0):197:__call__] RepceClient: call failed call=26469:139794524604224:1535440781.44 method=init error=OSError
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line
72, in subcmd_worker
local.service_loop(remote)
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line
1236, in service_loop
changelog_agent.init()
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__
return self.ins(self.meth, *a)
File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__
raise res
OSError: libgfchangelog.so: cannot open shared object file: No such
file or directory
[2018-08-28 07:19:41.457555] I [repce(agent /data/gluster/gv0):80:service_loop] RepceServer: terminating on reaching EOF.
worker died in startup phase brick=/data/gluster/gv0
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
What I can do to fix it ?
/Krish
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
Gluster-***@gluster.org<mailto:Gluster-***@gluster.org>
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
--
Thanks and Regards,
Kotresh H R
Krishna Verma
2018-08-30 11:25:17 UTC
Permalink
Hi Kotresh,

1. Time to write 1GB to master : 27 minutes and 29 seconds
2. Time for geo-rep to transfer 1GB to slave. 8 minutes

/Krishna


Kotresh Hiremath Ravishankar
2018-08-31 04:42:05 UTC
Permalink
Post by Krishna Verma
Hi Kotresh,
1. Time to write 1GB to master : 27 minutes and 29 seconds
2. Time for geo-rep to transfer 1GB to slave. 8 minutes
This is hard to believe, considering there is no
distribution and only one brick participating in syncing.
Could you retest and confirm?
Post by Krishna Verma
/Krishna
*Sent:* Thursday, August 30, 2018 3:20 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Kotresh,
After fixing the library link on node "noi-poc-gluster", the status of one
master node is “Active” and the other is “Passive”. Can I set up both
masters as “Active”?
Nope, since it's a replica, it's redundant to sync the same files from two
nodes. Both replicas can't be Active.
Also, when I copy a 1GB file from the gluster client to the master gluster
volume, which is replicated with the slave volume, it took 35 minutes and
49 seconds. Is there any way to reduce the time taken to rsync the data?
How did you measure this time? Does this include the time taken for you to
write the 1GB file to master?
There are two aspects to consider while measuring this.
1. Time to write 1GB to master
2. Time for geo-rep to transfer 1GB to slave.
In your case, since the setup is 1*2 and only one geo-rep worker is
Active, step 2 above equals the time for step 1 + network transfer time.
You can measure the time in two scenarios:
1. Geo-rep is started while the data is still being written to master. That's one way.
2. Or stop geo-rep until the 1GB file is written to master, then start
geo-rep to get the actual geo-rep transfer time.
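Scenario 2 above can be sketched with this thread's session names (glusterep, gluster-poc-sj); the client mount point /mnt/glusterep is an assumption, adjust to your setup:

```shell
# Scenario 2: separate the master-write time from the geo-rep transfer time.
# Stop geo-rep so nothing syncs while the file is being written:
gluster volume geo-replication glusterep gluster-poc-sj::glusterep stop

# Time only the write to the master volume (client mount assumed at /mnt/glusterep):
time cp gentoo_root.img /mnt/glusterep/

# Restart geo-rep, then watch LAST_SYNCED advance in the status output;
# the elapsed time until the file appears on the slave is the geo-rep time:
gluster volume geo-replication glusterep gluster-poc-sj::glusterep start
gluster volume geo-replication glusterep gluster-poc-sj::glusterep status
```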
To improve replication speed:
1. You can play around with rsync options depending on the kind of I/O
and configure the same for geo-rep, as it uses rsync internally.
2. It's better if the gluster volume has a higher distribute count, like 3*3 or 4*3.
It will help in two ways:
1. The files get distributed on master across multiple bricks.
2. That in turn helps geo-rep, as files on multiple bricks are
synced in parallel (multiple Actives).
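For point 1, geo-replication exposes its internal rsync invocation through the session config. A hedged sketch with this thread's session names; the --compress value is only an illustration for a slow WAN link, not a recommendation — profile your own I/O pattern first:

```shell
# Show the current geo-rep configuration, including the rsync options in use:
gluster volume geo-replication glusterep gluster-poc-sj::glusterep config

# Example: enable rsync compression for the Noida <-> USA WAN link:
gluster volume geo-replication glusterep gluster-poc-sj::glusterep \
    config rsync-options "--compress"
```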
NOTE: The Gluster master server and one client are in Noida, India.
The Gluster slave server and one client are in the USA.
Our goal is for data written by the Noida gluster client to reach
the USA gluster client in minimum time. Please suggest the best approach
to achieve this.
Thu Aug 30 12:26:26 IST 2018
sending incremental file list
gentoo_root.img
1.07G 100% 490.70kB/s 0:35:36 (xfr#1, to-chk=0/1)
Is this I/O time to write to master volume?
sent 1.07G bytes received 35 bytes 499.65K bytes/sec
total size is 1.07G speedup is 1.00
Thu Aug 30 13:02:15 IST 2018
MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                              SLAVE NODE        STATUS     CRAWL STATUS       LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Active     Changelog Crawl    2018-08-30 13:42:18
noi-poc-gluster      glusterep     /data/gluster/gv0    root          ssh://gluster-poc-sj::glusterep    gluster-poc-sj    Passive    N/A                N/A
Thanks in advance for your all time support.
/Krishna
*Sent:* Thursday, August 30, 2018 10:51 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Did you fix the library link on node "noi-poc-gluster " as well?
If not, please fix it. Please share the geo-rep log from this node if it's
a different issue.
-Kotresh HR
Hi Kotresh,
Thank you so much for your input. Geo-replication is now showing “Active”,
at least for one master node. But it is still in a Faulty state for the 2nd
master server.
Below is the detail.
glusterep gluster-poc-sj::glusterep status
MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE        STATUS    CRAWL STATUS       LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    gluster-poc-sj    Active    Changelog Crawl    2018-08-29 23:56:06
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A               Faulty    N/A                N/A
Status of volume: glusterep
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster-poc-noida:/data/gluster/gv0    49152     0          Y       22463
Brick noi-poc-gluster:/data/gluster/gv0      49152     0          Y       19471
Self-heal Daemon on localhost                N/A       N/A        Y       32087
Self-heal Daemon on noi-poc-gluster          N/A       N/A        Y       6272

Task Status of Volume glusterep
------------------------------------------------------------------------------
There are no active volume tasks
Volume Name: glusterep
Type: Replicate
Volume ID: 4a71bc94-14ce-4b2c-abc4-e6a9a9765161
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: gluster-poc-noida:/data/gluster/gv0
Brick2: noi-poc-gluster:/data/gluster/gv0
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
Could you please help me with that as well?
It would really be a great help.
/Krishna
*Sent:* Wednesday, August 29, 2018 10:47 AM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Answer inline
Hi Kotresh,
I created the links before. Below is the detail.
lrwxrwxrwx 1 root root 30 Aug 28 14:59 libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
The link created points to the wrong library. Please fix it:
#cd /usr/lib64
#rm libgfchangelog.so
#ln -s "libgfchangelog.so.0.0.1" libgfchangelog.so
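The three commands above can be tried safely in a scratch directory first; this sketch mirrors the fix (on the real node the directory is /usr/lib64 and you would finish with `ldconfig /usr/lib64`):

```shell
# Demo of the symlink fix in a throwaway directory; on the real node this
# happens in /usr/lib64, followed by `ldconfig /usr/lib64`.
set -e
d=$(mktemp -d)
cd "$d"

# Stand-in for the real library installed by the glusterfs packages:
touch libgfchangelog.so.0.0.1

# Drop the stale link (it pointed at a non-existent .so.1) and recreate it
# against the library file that actually exists:
rm -f libgfchangelog.so
ln -s libgfchangelog.so.0.0.1 libgfchangelog.so

# gsyncd's lookup of "libgfchangelog.so" now resolves through this link:
readlink libgfchangelog.so    # -> libgfchangelog.so.0.0.1
```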
lrwxrwxrwx 1 root root 23 Aug 23 23:35 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x 1 root root 63384 Jul 24 19:11 libgfchangelog.so.0.0.1
/usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.0.0.1
Does it look like what we need, or do I need to create any more links? And
how do I get the “libgfchangelog.so” file if it is missing?
/Krishna
*Sent:* Tuesday, August 28, 2018 4:22 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
As per the output shared, I don't see the file "libgfchangelog.so" which
is what is required.
I only see "libgfchangelog.so.0". Please confirm "libgfchangelog.so" is
present in "/usr/lib64/".
If not, create a symlink similar to "libgfchangelog.so.0".
It should be something like below.
#ls -l /usr/lib64 | grep libgfch
-rwxr-xr-x. 1 root root 1078 Aug 28 05:56 libgfchangelog.la
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so -> libgfchangelog.so.0.0.1
lrwxrwxrwx. 1 root root 23 Aug 28 05:56 libgfchangelog.so.0 -> libgfchangelog.so.0.0.1
-rwxr-xr-x. 1 root root 336888 Aug 28 05:56 libgfchangelog.so.0.0.1
Hi Kotresh,
Thanks for the response, I did that also but nothing changed.
libgfchangelog.so.0 (libc6,x86-64) => /usr/lib64/libgfchangelog.so.0
gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
/Krishna
*Sent:* Tuesday, August 28, 2018 4:00 PM
*Subject:* Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
EXTERNAL MAIL
Hi Krishna,
Since your libraries are in /usr/lib64, you should be doing
#ldconfig /usr/lib64
Confirm that below command lists the library
#ldconfig -p | grep libgfchangelog
can you do ldconfig /usr/local/lib and share the output of ldconfig -p
/usr/local/lib | grep libgf
Post by Krishna Verma
Hi Sunny,
I made the changes given in the patch and restarted the geo-replication
session, but I get the same errors in the logs again.
I am attaching the config files and logs here.
gluster-poc-sj::glusterep stop
Stopping geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep delete
Deleting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep create push-pem force
Creating geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep start
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
gluster-poc-sj::glusterep start
geo-replication start failed for glusterep gluster-poc-sj::glusterep
geo-replication command failed
syncdaemon/repce.py
gluster-poc-sj::glusterep start
Starting geo-replication session between glusterep &
gluster-poc-sj::glusterep has been successful
gluster-poc-sj::glusterep status
Post by Krishna Verma
MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
/Krishna.
-----Original Message-----
Sent: Tuesday, August 28, 2018 3:17 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not
work
Post by Krishna Verma
EXTERNAL MAIL
With the same log message?
Can you please verify that the patch
https://review.gluster.org/#/c/glusterfs/+/20207/ is present; if not,
can you please apply it,
and try with symlinking ln -s /usr/lib64/libgfchangelog.so.0
/usr/lib64/libgfchangelog.so.
Please share the log also.
Regards,
Sunny
Post by Krishna Verma
Hi Sunny,
Thanks for your response. I tried both, but I am still getting the
same error.
Post by Krishna Verma
lrwxrwxrwx. 1 root root 30 Aug 28 14:59 /usr/lib64/libgfchangelog.so -> /usr/lib64/libgfchangelog.so.1
/Krishna
-----Original Message-----
Sent: Tuesday, August 28, 2018 2:55 PM
Subject: Re: [Gluster-users] Upgrade to 4.1.2 geo-replication does not work
EXTERNAL MAIL
Hi Krish,
You can run -
#ldconfig /usr/lib
If that still does not solve your problem, you can create a manual symlink
like:
ln -s /usr/lib64/libgfchangelog.so.1 /usr/lib64/libgfchangelog.so
Thanks,
Sunny Kumar
--
Thanks and Regards,
Kotresh H R