Post by Anoop C S
Thanks for explaining the issue.
I understand that you are experiencing a hang while performing some operations on files/directories in a
GlusterFS volume share from a Windows client. For simplicity, can you attach the output of the following commands:
# gluster volume info <volume>
# testparm -s --section-name global
# gluster v status export
Status of volume: export
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.0.1.7:/bricks/hdds/brick           49153     0          Y       2540
Brick 10.0.1.6:/bricks/hdds/brick           49153     0          Y       2800
Self-heal Daemon on localhost               N/A       N/A        Y       2912
Self-heal Daemon on 10.0.1.6                N/A       N/A        Y       3107
Self-heal Daemon on 10.0.1.5                N/A       N/A        Y       5877

Task Status of Volume export
------------------------------------------------------------------------------
There are no active volume tasks
# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b4353b3f-6ef6-4813-819a-8e85e5a95cff
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.0.1.7:/bricks/hdds/brick
Brick2: 10.0.1.6:/bricks/hdds/brick
Options Reconfigured:
diagnostics.brick-log-level: INFO
diagnostics.client-log-level: INFO
performance.cache-max-file-size: 256MB
client.event-threads: 5
server.event-threads: 5
cluster.readdir-optimize: on
cluster.lookup-optimize: on
performance.io-cache: on
performance.io-thread-count: 64
nfs.disable: on
cluster.server-quorum-type: server
performance.cache-size: 10GB
server.allow-insecure: on
transport.address-family: inet
performance.cache-samba-metadata: on
features.cache-invalidation-timeout: 600
performance.md-cache-timeout: 600
features.cache-invalidation: on
performance.cache-invalidation: on
network.inode-lru-limit: 65536
performance.cache-min-file-size: 0
performance.stat-prefetch: on
cluster.server-quorum-ratio: 51%
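For reference, each option listed under "Options Reconfigured" above was set
with "gluster volume set"; for example, re-applying one of them would look
like this (shown only to illustrate how the values above got there):

# gluster volume set export performance.md-cache-timeout 600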
I had already sent you the full smb.conf, so there is no need to run
testparm -s --section-name global; please see:
http://termbin.com/y4j0
Post by Anoop C S
Post by Diego Remolina
[vfsgluster]
path = /vfsgluster
browseable = yes
create mask = 660
directory mask = 770
kernel share modes = No
vfs objects = glusterfs
glusterfs:loglevel = 7
glusterfs:logfile = /var/log/samba/glusterfs-vfsgluster.log
glusterfs:volume = export
Since you have mentioned the path as /vfsgluster, I assume you are sharing a subdirectory under the
root of the volume.
Yes, vfsgluster is a directory at the root of the export volume. The
volume is also currently mounted at /export so that the rest of the files
can be exported via Samba over a FUSE mount:
# mount | grep export
10.0.1.7:/export on /export type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,allow_other,max_read=131072)
# ls -ld /export/vfsgluster
drwxrws---. 4 dijuremo Staff 4096 Nov 12 20:24 /export/vfsgluster
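For completeness, a FUSE mount like the one shown above would typically be
created with something along these lines (server address taken from the
mount output; options are left at their defaults, so treat this as a sketch):

# mount -t glusterfs 10.0.1.7:/export /export

or with a matching /etc/fstab entry:

10.0.1.7:/export  /export  glusterfs  defaults,_netdev  0 0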
Post by Anoop C S
Post by Diego Remolina
Full smb.conf
http://termbin.com/y4j0
I see the "clustering" parameter set to 'yes'. How many nodes are there in the cluster? Out of those,
how many are running as Samba and/or Gluster nodes?
There are a total of three gluster peers, but only two have bricks. The
third is just present in the pool and is not even configured as an
arbiter. The two nodes with bricks run CTDB and Samba.
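In case it is useful, this is roughly how I would check the CTDB view of the
cluster from one of the two Samba nodes (exact output will of course differ):

# ctdb status
# ctdb ip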
Post by Anoop C S
Post by Diego Remolina
/var/log/samba/glusterfs-vfsgluster.log
http://termbin.com/5hdr
Please let me know if there is any other information I can provide.
Are there any errors in /var/log/samba/log.<IP/hostname>, where IP/hostname is the Windows client machine?
I do not currently have the log file directive enabled in smb.conf, so I
would have to enable it first. Do you need me to repeat the process with it
enabled?
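If so, I believe adding something like the following to the [global] section
would produce the per-client logs you are after (the %m macro expands to the
client's NetBIOS name; log level 3 is just my guess at a useful verbosity):

[global]
    log file = /var/log/samba/log.%m
    log level = 3
    max log size = 10000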
Diego