Discussion:
[Gluster-users] client glusterfs connection problem
Oğuz Yarımtepe
2018-10-25 19:39:59 UTC
Permalink
I have a 4-node GlusterFS cluster, installed from the CentOS Storage SIG 4.1 repo.

# gluster peer status
Number of Peers: 3

Hostname: aslrplpgls02
Uuid: 0876151a-058e-42ec-91f2-f25f353a0207
State: Peer in Cluster (Connected)

Hostname: bslrplpgls01
Uuid: 6d73ed2a-2287-4872-9a8f-64d6e833181f
State: Peer in Cluster (Connected)

Hostname: bslrplpgls02
Uuid: 8ab6b61f-f502-44c7-8966-2ab03a6b9f7e
State: Peer in Cluster (Connected)

# gluster volume status vol0
Status of volume: vol0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick aslrplpgls01:/bricks/brick1/vol0      49152     0          Y       12991
Brick aslrplpgls02:/bricks/brick2/vol0      49152     0          Y       9344
Brick bslrplpgls01:/bricks/brick3/vol0      49152     0          Y       61662
Brick bslrplpgls02:/bricks/brick4/vol0      49152     0          Y       61843
Self-heal Daemon on localhost               N/A       N/A        Y       13014
Self-heal Daemon on bslrplpgls02            N/A       N/A        Y       61866
Self-heal Daemon on bslrplpgls01            N/A       N/A        Y       61685
Self-heal Daemon on aslrplpgls02            N/A       N/A        Y       9367

Task Status of Volume vol0
------------------------------------------------------------------------------
There are no active volume tasks

This is how the brick filesystem is mounted (fstab entry):

/dev/gluster_vg/gluster_lv /bricks/brick1 xfs defaults 1 2

When I try to mount vol0 on a remote machine, this is what I get in the client log:

[2018-10-25 19:37:23.033302] D [MSGID: 0]
0x7f0d04001038, vol0-write-behind returned -1 error: Transport endpoint is
not connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.033329] D [MSGID: 0] [io-cache.c:268:ioc_lookup_cbk]
0-stack-trace: stack-address: 0x7f0d04001038, vol0-io-cache returned -1
error: Transport endpoint is not connected [Transport endpoint is not
connected]
[2018-10-25 19:37:23.033356] D [MSGID: 0] [quick-read.c:473:qr_lookup_cbk]
0-stack-trace: stack-address: 0x7f0d04001038, vol0-quick-read returned -1
error: Transport endpoint is not connected [Transport endpoint is not
connected]
[2018-10-25 19:37:23.033373] D [MSGID: 0] [md-cache.c:1130:mdc_lookup_cbk]
0-stack-trace: stack-address: 0x7f0d04001038, vol0-md-cache returned -1
error: Transport endpoint is not connected [Transport endpoint is not
connected]
[2018-10-25 19:37:23.033389] D [MSGID: 0]
0x7f0d04001038, vol0 returned -1 error: Transport endpoint is not connected
[Transport endpoint is not connected]
[2018-10-25 19:37:23.033408] W [fuse-resolve.c:132:fuse_resolve_gfid_cbk]
0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport
endpoint is not connected)
[2018-10-25 19:37:23.033426] E [fuse-bridge.c:928:fuse_getattr_resume]
0-glusterfs-fuse: 2: GETATTR 1 (00000000-0000-0000-0000-000000000001)
resolution failed
[2018-10-25 19:37:23.036511] D [MSGID: 0] [dht-common.c:3468:dht_lookup]
0-vol0-dht: Calling fresh lookup for / on vol0-replicate-0
[2018-10-25 19:37:23.037347] D [MSGID: 0]
0x7f0d04001038, vol0-replicate-0 returned -1 error: Transport endpoint is
not connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.037375] D [MSGID: 0]
[dht-common.c:3020:dht_lookup_cbk] 0-vol0-dht: fresh_lookup returned for /
with op_ret -1 [Transport endpoint is not connected]
[2018-10-25 19:37:23.037940] D [MSGID: 0]
0x7f0d04001038, vol0-replicate-0 returned -1 error: Transport endpoint is
not connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.037963] D [MSGID: 0]
[dht-common.c:1378:dht_lookup_dir_cbk] 0-vol0-dht: lookup of / on
vol0-replicate-0 returned error [Transport endpoint is not connected]
[2018-10-25 19:37:23.037979] E [MSGID: 101046]
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol0-dht: dict is null
[2018-10-25 19:37:23.037994] D [MSGID: 0]
0x7f0d04001038, vol0-dht returned -1 error: Transport endpoint is not
connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.038010] D [MSGID: 0]
0x7f0d04001038, vol0-write-behind returned -1 error: Transport endpoint is
not connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.038028] D [MSGID: 0] [io-cache.c:268:ioc_lookup_cbk]
0-stack-trace: stack-address: 0x7f0d04001038, vol0-io-cache returned -1
error: Transport endpoint is not connected [Transport endpoint is not
connected]
[2018-10-25 19:37:23.038045] D [MSGID: 0] [quick-read.c:473:qr_lookup_cbk]
0-stack-trace: stack-address: 0x7f0d04001038, vol0-quick-read returned -1
error: Transport endpoint is not connected [Transport endpoint is not
connected]
[2018-10-25 19:37:23.038061] D [MSGID: 0] [md-cache.c:1130:mdc_lookup_cbk]
0-stack-trace: stack-address: 0x7f0d04001038, vol0-md-cache returned -1
error: Transport endpoint is not connected [Transport endpoint is not
connected]
[2018-10-25 19:37:23.038078] D [MSGID: 0]
0x7f0d04001038, vol0 returned -1 error: Transport endpoint is not connected
[Transport endpoint is not connected]
[2018-10-25 19:37:23.038096] W [fuse-resolve.c:132:fuse_resolve_gfid_cbk]
0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport
endpoint is not connected)
[2018-10-25 19:37:23.038110] E [fuse-bridge.c:928:fuse_getattr_resume]
0-glusterfs-fuse: 3: GETATTR 1 (00000000-0000-0000-0000-000000000001)
resolution failed
[2018-10-25 19:37:23.041169] D [fuse-bridge.c:5087:fuse_thread_proc]
0-glusterfs-fuse: terminating upon getting ENODEV when reading /dev/fuse
[2018-10-25 19:37:23.041196] I [fuse-bridge.c:5199:fuse_thread_proc]
0-fuse: initating unmount of /mnt/gluster
[2018-10-25 19:37:23.041306] D [logging.c:1795:gf_log_flush_extra_msgs]
0-logging-infra: Log buffer size reduced. About to flush 5 extra log
messages
[2018-10-25 19:37:23.041331] D [logging.c:1798:gf_log_flush_extra_msgs]
0-logging-infra: Just flushed 5 extra log messages
[2018-10-25 19:37:23.041398] W [glusterfsd.c:1514:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7e25) [0x7f0d24e0ae25]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x5594b73edd65]
received signum (15), shutting down
[2018-10-25 19:37:23.041417] D [mgmt-pmap.c:79:rpc_clnt_mgmt_pmap_signout]
0-fsd-mgmt: portmapper signout arguments not given
Unmounting '/mnt/gluster'.
[2018-10-25 19:37:23.041441] I [fuse-bridge.c:5986:fini] 0-fuse: Closing
fuse connection to '/mnt/gluster'.
This is how I added the mount point to fstab:

10.35.72.138:/vol0 /mnt/gluster glusterfs defaults,_netdev,log-level=DEBUG
0 0
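For reference, the same mount can also be attempted by hand with an explicit log file, which makes the DEBUG output easier to capture than going through fstab. This is a sketch: the server address and mount point follow the fstab line above, while the log-file path is just an illustrative choice.

```shell
# Mount vol0 manually with DEBUG logging into a dedicated file
# (10.35.72.138:/vol0 and /mnt/gluster are taken from the fstab entry;
#  the log-file path is an example location, adjust as needed):
mount -t glusterfs \
    -o log-level=DEBUG,log-file=/var/log/glusterfs/vol0-client.log \
    10.35.72.138:/vol0 /mnt/gluster

# Verify whether the mount actually succeeded, then inspect the log tail:
mount | grep /mnt/gluster
tail -n 50 /var/log/glusterfs/vol0-client.log
```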

Any idea what the problem is? I found some bug reports, but I am not sure
whether this situation is actually a bug.
--
Oğuz Yarımtepe
http://about.me/oguzy
Oğuz Yarımtepe
2018-10-25 19:52:51 UTC
Permalink
One more addition:

# gluster volume info


Volume Name: vol0
Type: Replicate
Volume ID: 28384e2b-ea7e-407e-83ae-4d4e69a2cc7e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: aslrplpgls01:/bricks/brick1/vol0
Brick2: aslrplpgls02:/bricks/brick2/vol0
Brick3: bslrplpgls01:/bricks/brick3/vol0
Brick4: bslrplpgls02:/bricks/brick4/vol0
Options Reconfigured:
cluster.self-heal-daemon: enable
cluster.halo-enabled: True
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
--
Oğuz Yarımtepe
http://about.me/oguzy
Poornima Gurusiddaiah
2018-10-26 01:44:19 UTC
Permalink
Is this a new volume that has never been mounted successfully? If so, try
changing the firewall settings to allow the Gluster ports, and also check
the SELinux settings.
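A minimal sketch of those checks on each server, assuming firewalld and the default Gluster port layout (24007/24008 for management, 49152 and up for bricks; the exact brick range can be confirmed against `gluster volume status`):

```shell
# Open the standard Gluster ports (firewalld syntax; port ranges here are
# the common defaults and may need adjusting for your version):
firewall-cmd --permanent --add-port=24007-24008/tcp   # glusterd management
firewall-cmd --permanent --add-port=49152-49251/tcp   # brick processes
firewall-cmd --reload

# Check the SELinux mode; Permissive or Disabled rules out SELinux denials:
getenforce
```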

Regards,
Poornima
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
Oğuz Yarımtepe
2018-10-26 04:39:41 UTC
Permalink
Yes, it is a new volume. SELinux is disabled and there is no firewall.


--
Oğuz Yarımtepe
http://about.me/oguzy
Oğuz Yarımtepe
2018-10-28 05:06:21 UTC
Permalink
Two of my nodes are on another VLAN. Does my client need connectivity to all
nodes in replicated mode?

Regards.
Post by Poornima Gurusiddaiah
Is this a new volume? Has it never been mounted successfully? If so try
changing firewall settings to allow gluster ports, also check for selinux
settings.
Regards,
Poornima
Post by Oğuz Yarımtepe
# gluster volume info
Volume Name: vol0
Type: Replicate
Volume ID: 28384e2b-ea7e-407e-83ae-4d4e69a2cc7e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Brick1: aslrplpgls01:/bricks/brick1/vol0
Brick2: aslrplpgls02:/bricks/brick2/vol0
Brick3: bslrplpgls01:/bricks/brick3/vol0
Brick4: bslrplpgls02:/bricks/brick4/vol0
cluster.self-heal-daemon: enable
cluster.halo-enabled: True
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Post by Oğuz Yarımtepe
I have 4 node GlusterFS cluster. Used Centos SIG 4.1 repo.
# gluster peer status
Number of Peers: 3
Hostname: aslrplpgls02
Uuid: 0876151a-058e-42ec-91f2-f25f353a0207
State: Peer in Cluster (Connected)
Hostname: bslrplpgls01
Uuid: 6d73ed2a-2287-4872-9a8f-64d6e833181f
State: Peer in Cluster (Connected)
Hostname: bslrplpgls02
Uuid: 8ab6b61f-f502-44c7-8966-2ab03a6b9f7e
State: Peer in Cluster (Connected)
# gluster volume status vol0
Status of volume: vol0
Gluster process TCP Port RDMA Port Online
Pid
------------------------------------------------------------------------------
Brick aslrplpgls01:/bricks/brick1/vol0 49152 0 Y
12991
Brick aslrplpgls02:/bricks/brick2/vol0 49152 0 Y
9344
Brick bslrplpgls01:/bricks/brick3/vol0 49152 0 Y
61662
Brick bslrplpgls02:/bricks/brick4/vol0 49152 0 Y
61843
Self-heal Daemon on localhost N/A N/A Y
13014
Self-heal Daemon on bslrplpgls02 N/A N/A Y
61866
Self-heal Daemon on bslrplpgls01 N/A N/A Y
61685
Self-heal Daemon on aslrplpgls02 N/A N/A Y
9367
Task Status of Volume vol0
------------------------------------------------------------------------------
There are no active volume tasks
/dev/gluster_vg/gluster_lv /bricks/brick1 xfs defaults 1 2
[2018-10-25 19:37:23.033302] D [MSGID: 0]
0x7f0d04001038, vol0-write-behind returned -1 error: Transport endpoint is
not connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.033329] D [MSGID: 0]
0x7f0d04001038, vol0-io-cache returned -1 error: Transport endpoint is not
connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.033356] D [MSGID: 0]
0x7f0d04001038, vol0-quick-read returned -1 error: Transport endpoint is
not connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.033373] D [MSGID: 0]
0x7f0d04001038, vol0-md-cache returned -1 error: Transport endpoint is not
connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.033389] D [MSGID: 0]
0x7f0d04001038, vol0 returned -1 error: Transport endpoint is not connected
[Transport endpoint is not connected]
[2018-10-25 19:37:23.033408] W
00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint
is not connected)
[2018-10-25 19:37:23.033426] E [fuse-bridge.c:928:fuse_getattr_resume]
0-glusterfs-fuse: 2: GETATTR 1 (00000000-0000-0000-0000-000000000001)
resolution failed
[2018-10-25 19:37:23.036511] D [MSGID: 0]
[dht-common.c:3468:dht_lookup] 0-vol0-dht: Calling fresh lookup for / on
vol0-replicate-0
[2018-10-25 19:37:23.037347] D [MSGID: 0]
0x7f0d04001038, vol0-replicate-0 returned -1 error: Transport endpoint is
not connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.037375] D [MSGID: 0]
[dht-common.c:3020:dht_lookup_cbk] 0-vol0-dht: fresh_lookup returned for /
with op_ret -1 [Transport endpoint is not connected]
[2018-10-25 19:37:23.037940] D [MSGID: 0]
0x7f0d04001038, vol0-replicate-0 returned -1 error: Transport endpoint is
not connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.037963] D [MSGID: 0]
[dht-common.c:1378:dht_lookup_dir_cbk] 0-vol0-dht: lookup of / on
vol0-replicate-0 returned error [Transport endpoint is not connected]
[2018-10-25 19:37:23.037979] E [MSGID: 101046]
[dht-common.c:1502:dht_lookup_dir_cbk] 0-vol0-dht: dict is null
[2018-10-25 19:37:23.037994] D [MSGID: 0]
0x7f0d04001038, vol0-dht returned -1 error: Transport endpoint is not
connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.038010] D [MSGID: 0]
0x7f0d04001038, vol0-write-behind returned -1 error: Transport endpoint is
not connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.038028] D [MSGID: 0]
0x7f0d04001038, vol0-io-cache returned -1 error: Transport endpoint is not
connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.038045] D [MSGID: 0]
0x7f0d04001038, vol0-quick-read returned -1 error: Transport endpoint is
not connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.038061] D [MSGID: 0]
0x7f0d04001038, vol0-md-cache returned -1 error: Transport endpoint is not
connected [Transport endpoint is not connected]
[2018-10-25 19:37:23.038078] D [MSGID: 0]
0x7f0d04001038, vol0 returned -1 error: Transport endpoint is not connected
[Transport endpoint is not connected]
[2018-10-25 19:37:23.038096] W
00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint
is not connected)
[2018-10-25 19:37:23.038110] E [fuse-bridge.c:928:fuse_getattr_resume]
0-glusterfs-fuse: 3: GETATTR 1 (00000000-0000-0000-0000-000000000001)
resolution failed
[2018-10-25 19:37:23.041169] D [fuse-bridge.c:5087:fuse_thread_proc]
0-glusterfs-fuse: terminating upon getting ENODEV when reading /dev/fuse
[2018-10-25 19:37:23.041196] I [fuse-bridge.c:5199:fuse_thread_proc]
0-fuse: initating unmount of /mnt/gluster
[2018-10-25 19:37:23.041306] D [logging.c:1795:gf_log_flush_extra_msgs]
0-logging-infra: Log buffer size reduced. About to flush 5 extra log
messages
[2018-10-25 19:37:23.041331] D [logging.c:1798:gf_log_flush_extra_msgs]
0-logging-infra: Just flushed 5 extra log messages
[2018-10-25 19:37:23.041398] W [glusterfsd.c:1514:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7e25) [0x7f0d24e0ae25]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x5594b73edd65]
received signum (15), shutting down
[2018-10-25 19:37:23.041417] D
[mgmt-pmap.c:79:rpc_clnt_mgmt_pmap_signout] 0-fsd-mgmt: portmapper signout
arguments not given
Unmounting '/mnt/gluster'.
Closing fuse connection to '/mnt/gluster'.
This is how I added the mount point to fstab:
10.35.72.138:/vol0 /mnt/gluster glusterfs
defaults,_netdev,log-level=DEBUG 0 0
Any idea what the problem is? I found some related bug reports, but I am not
sure whether this situation is actually a bug.
--
Oğuz Yarımtepe
http://about.me/oguzy
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
Vlad Kopylov
2018-10-31 03:46:40 UTC
Permalink
Try adding routes so the client can connect to all nodes.
I am also curious whether a FUSE mount really needs access to all nodes.
Supposedly it writes to all of them at the same time unless you have the halo
feature enabled.

v
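To check connectivity from the client, one rough approach is to pull the brick host/port pairs out of the `gluster volume status` output and probe each one. The sketch below reuses the status lines from this thread; the use of `nc` on the client is an assumption.

```shell
# Brick lines from "gluster volume status vol0" earlier in the thread
# (host:path followed by the TCP port).
status='aslrplpgls01:/bricks/brick1/vol0 49152
aslrplpgls02:/bricks/brick2/vol0 49152
bslrplpgls01:/bricks/brick3/vol0 49152
bslrplpgls02:/bricks/brick4/vol0 49152'

# Extract "host port" pairs, splitting on colons and spaces.
echo "$status" | awk -F'[: ]+' '{print $1, $3}'

# On the client, each pair could then be probed, e.g.:
#   nc -zv -w 3 aslrplpgls01 49152
# plus the glusterd management port (24007) on the server named in fstab.
```

If any brick port is unreachable from the client's VLAN, the mount would fail with exactly the "Transport endpoint is not connected" errors shown in the log.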
Post by Oğuz Yarımtepe
My two nodes are on another VLAN. Should my client have connections to all
nodes in replicated mode?
Regards.
Post by Poornima Gurusiddaiah
Is this a new volume? Has it never been mounted successfully? If so, try
changing the firewall settings to allow the Gluster ports, and also check the
SELinux settings.
Regards,
Poornima
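For the firewall suggestion above, a possible starting point on CentOS with firewalld is the fragment below; the port numbers (24007 for glusterd, 49152 and up for the bricks) match the ports shown in the volume status in this thread, but the exact brick range to open depends on the number of bricks.

```shell
# Hypothetical firewalld rules for the ports seen in this thread:
firewall-cmd --permanent --add-port=24007-24008/tcp   # glusterd management
firewall-cmd --permanent --add-port=49152-49156/tcp   # brick ports
firewall-cmd --reload

# For SELinux, check the current mode first:
getenforce
```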
Post by Oğuz Yarımtepe
# gluster volume info
Volume Name: vol0
Type: Replicate
Volume ID: 28384e2b-ea7e-407e-83ae-4d4e69a2cc7e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Brick1: aslrplpgls01:/bricks/brick1/vol0
Brick2: aslrplpgls02:/bricks/brick2/vol0
Brick3: bslrplpgls01:/bricks/brick3/vol0
Brick4: bslrplpgls02:/bricks/brick4/vol0
cluster.self-heal-daemon: enable
cluster.halo-enabled: True
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Oğuz Yarımtepe
2018-10-31 04:31:14 UTC
Permalink
It was about halo being enabled. When I disabled it, mounting was successful.
It seems that just enabling it is not enough.

So far I am happy with the current situation; I just need to figure out
whether all the data is replicated. With big and even small files,
replication is fast. I tested with GB-sized files.
I just need to know the replication latency when multiple volumes are in
use; with one volume and GB-sized files, I didn't notice any latency.