Discussion:
[Gluster-users] Mount failed
Bartłomiej Syryjczyk
2015-01-21 13:46:29 UTC
Permalink
I've got a problem with mounting. Can anyone help?

# mount -t glusterfs apache1:/testvol /mnt/gluster
*Mount failed. Please check the log file for more details.*

Log: http://pastebin.com/GzkbEGCw
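For reference, the log file that error message points at is the FUSE client log; mount.glusterfs derives its name from the mount point by replacing slashes with dashes (a sketch of that convention; /var/log/glusterfs is the default log directory):

```shell
#!/bin/sh
# Derive the client log path mount.glusterfs uses by default: the mount
# point with its leading slash dropped and remaining slashes replaced by
# dashes, under /var/log/glusterfs/.
mnt=/mnt/gluster
log="/var/log/glusterfs/$(echo "$mnt" | sed 's|^/||; s|/|-|g').log"
echo "$log"              # /var/log/glusterfs/mnt-gluster.log
# tail -n 50 "$log"      # inspect the most recent entries
```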

Oracle Linux Server release 7.0
Kernel 3.8.13-55.1.2.el7uek.x86_64

glusterfs packages from official yum repository

Name : glusterfs
Arch : x86_64
Version : 3.6.1
Release : 1.el7

--
Best regards,

*Bartłomiej Syryjczyk*
Lindsay Mathieson
2015-01-21 14:22:37 UTC
Permalink
On 21 January 2015 at 23:46, Bartłomiej Syryjczyk <***@kamsoft.pl>
wrote:

> # mount -t glusterfs apache1:/testvol /mnt/gluster
> *Mount failed. Please check the log file for more details.*
>
> Log: http://pastebin.com/GzkbEGCw
>
> Oracle Linux Server release 7.0
> Kernel 3.8.13-55.1.2.el7uek.x86_64
>
> glusterfs packages from official yum repository
>
> Name : glusterfs
> Arch : x86_64
> Version : 3.6.1
> Release : 1.el7
>



From the logs it looks like your server and client versions don't match -
client version = 3.3


--
Lindsay
Bartłomiej Syryjczyk
2015-01-21 17:35:08 UTC
Permalink
On 2015-01-21 at 15:22, Lindsay Mathieson wrote:
>
> On 21 January 2015 at 23:46, Bartłomiej Syryjczyk
> <***@kamsoft.pl <mailto:***@kamsoft.pl>> wrote:
>
> # mount -t glusterfs apache1:/testvol /mnt/gluster
> *Mount failed. Please check the log file for more details.*
>
> Log: http://pastebin.com/GzkbEGCw
>
> Oracle Linux Server release 7.0
> Kernel 3.8.13-55.1.2.el7uek.x86_64
>
> glusterfs packages from official yum repository
>
> Name : glusterfs
> Arch : x86_64
> Version : 3.6.1
> Release : 1.el7
>
>
>
>
> From the logs it looks like your server and client versions don't
> match - client version = 3.3
>
Client is 3.6.1

# yum info glusterfs{,-server,-fuse,-geo-replication}
Installed Packages
Name : glusterfs
Arch : x86_64
Version : 3.6.1
Release : 1.el7
Size : 5.1 M
Repo : installed
From repo : glusterfs-epel
Summary : Cluster File System
URL : http://www.gluster.org/docs/index.php/GlusterFS
License : GPLv2 or LGPLv3+
Description : GlusterFS is a distributed file-system capable of scaling to
: several petabytes. It aggregates various storage bricks over
: Infiniband RDMA or TCP/IP interconnect into one large parallel
: network file system. GlusterFS is one of the most sophisticated
: file systems in terms of features and extensibility. It borrows a
: powerful concept called Translators from GNU Hurd kernel. Much of
: the code in GlusterFS is in user space and easily manageable.
:
: This package includes the glusterfs binary, the glusterfsd daemon
: and the gluster command line, libglusterfs and glusterfs
: translator modules common to both GlusterFS server and client
: framework.

Name : glusterfs-fuse
Arch : x86_64
Version : 3.6.1
Release : 1.el7
Size : 221 k
Repo : installed
From repo : glusterfs-epel
Summary : Fuse client
URL : http://www.gluster.org/docs/index.php/GlusterFS
License : GPLv2 or LGPLv3+
Description : GlusterFS is a distributed file-system capable of scaling to
: several petabytes. It aggregates various storage bricks over
: Infiniband RDMA or TCP/IP interconnect into one large parallel
: network file system. GlusterFS is one of the most sophisticated
: file systems in terms of features and extensibility. It borrows a
: powerful concept called Translators from GNU Hurd kernel. Much of
: the code in GlusterFS is in user space and easily manageable.
:
: This package provides support to FUSE based clients.

Name : glusterfs-geo-replication
Arch : x86_64
Version : 3.6.1
Release : 1.el7
Size : 655 k
Repo : installed
From repo : glusterfs-epel
Summary : GlusterFS Geo-replication
URL : http://www.gluster.org/docs/index.php/GlusterFS
License : GPLv2 or LGPLv3+
Description : GlusterFS is a distributed file-system capable of scaling to
: several peta-bytes. It aggregates various storage bricks over
: Infiniband RDMA or TCP/IP interconnect into one large parallel
: network file system. GlusterFS is one of the most sophisticated
: file systems in terms of features and extensibility. It borrows a
: powerful concept called Translators from GNU Hurd kernel. Much of
: the code in GlusterFS is in userspace and easily manageable.
:
: This package provides support to geo-replication.

Name : glusterfs-server
Arch : x86_64
Version : 3.6.1
Release : 1.el7
Size : 2.2 M
Repo : installed
From repo : glusterfs-epel
Summary : Clustered file-system server
URL : http://www.gluster.org/docs/index.php/GlusterFS
License : GPLv2 or LGPLv3+
Description : GlusterFS is a distributed file-system capable of scaling to
: several petabytes. It aggregates various storage bricks over
: Infiniband RDMA or TCP/IP interconnect into one large parallel
: network file system. GlusterFS is one of the most sophisticated
: file systems in terms of features and extensibility. It borrows a
: powerful concept called Translators from GNU Hurd kernel. Much of
: the code in GlusterFS is in user space and easily manageable.
:
: This package provides the glusterfs server daemon.

# gluster --version
glusterfs 3.6.1 built on Nov 7 2014 15:16:40
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU
General Public License.

# glusterd --version
glusterfs 3.6.1 built on Nov 7 2014 15:16:38
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


--
Best regards,

*Bartłomiej Syryjczyk*
Bartłomiej Syryjczyk
2015-01-22 11:48:16 UTC
Permalink
On 2015-01-21 at 18:35, Bartłomiej Syryjczyk wrote:
> On 2015-01-21 at 15:22, Lindsay Mathieson wrote:
>> On 21 January 2015 at 23:46, Bartłomiej Syryjczyk
>> <***@kamsoft.pl <mailto:***@kamsoft.pl>> wrote:
>>
>> # mount -t glusterfs apache1:/testvol /mnt/gluster
>> *Mount failed. Please check the log file for more details.*
>>
>> Log: http://pastebin.com/GzkbEGCw
>>
>> Oracle Linux Server release 7.0
>> Kernel 3.8.13-55.1.2.el7uek.x86_64
>>
>> glusterfs packages from official yum repository
>>
>> Name : glusterfs
>> Arch : x86_64
>> Version : 3.6.1
>> Release : 1.el7
>>
>>
>>
>>
>> From the logs it looks like your server and client versions don't
>> match - client version = 3.3
>>
> Client is 3.6.1
> [...]
I should add that mounting the exported NFS share works fine. The problem
is only with `mount -t glusterfs`.

mount.glusterfs -V also shows 3.6.1

Why does it think it is 3.3?
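For what it's worth, the version strings can be gathered in one go; a small sketch using the same three commands already shown in this thread (assuming they are on $PATH):

```shell
#!/bin/sh
# Print the version reported by each GlusterFS component on this node;
# any mismatch between these and the servers would be worth chasing.
for cmd in 'gluster --version' 'glusterd --version' 'mount.glusterfs -V'; do
    v=$($cmd 2>/dev/null | head -n1)
    printf '%s -> %s\n' "$cmd" "${v:-not installed}"
done
```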

--
Best regards,

*Bartłomiej Syryjczyk*
Bartłomiej Syryjczyk
2015-01-19 10:28:45 UTC
Permalink
I've got a problem with mounting. Can anyone help?

# mount -t glusterfs apache1:/testvol /mnt/gluster
*Mount failed. Please check the log file for more details.*

Log: http://pastebin.com/GzkbEGCw

Oracle Linux Server release 7.0
Kernel 3.8.13-55.1.2.el7uek.x86_64

glusterfs packages from official yum repository

Name : glusterfs
Arch : x86_64
Version : 3.6.1
Release : 1.el7

--
Best regards,

*Bartłomiej Syryjczyk*
A Ghoshal
2015-01-22 16:37:43 UTC
Permalink
Maybe start the mount daemon from shell, like this?

/usr/sbin/glusterfs --debug --volfile-server=glnode1 --volfile-id=/testvol
/mnt/gluster

You could get some useful debug data on your terminal.

However, it's more likely you have a configuration related problem here.
So the output of the following might also help:

ls -la /var/lib/glusterd/vols/glnode1/

Thanks,
Anirban



From: Bartłomiej Syryjczyk <***@kamsoft.pl>
To: gluster-***@gluster.org
Date: 01/22/2015 09:52 PM
Subject: [Gluster-users] Mount failed
Sent by: gluster-users-***@gluster.org



I've got a problem with mounting. Can anyone help?

# mount -t glusterfs apache1:/testvol /mnt/gluster
*Mount failed. Please check the log file for more details.*

Log: http://pastebin.com/GzkbEGCw

Oracle Linux Server release 7.0
Kernel 3.8.13-55.1.2.el7uek.x86_64

glusterfs packages from official yum repository

Name : glusterfs
Arch : x86_64
Version : 3.6.1
Release : 1.el7

--
Best regards,

*Bartłomiej Syryjczyk*
_______________________________________________
Gluster-users mailing list
Gluster-***@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
Bartłomiej Syryjczyk
2015-01-26 08:57:40 UTC
Permalink
On 2015-01-22 at 17:37, A Ghoshal wrote:
> Maybe start the mount daemon from shell, like this?
>
> /usr/sbin/glusterfs --debug --volfile-server=glnode1
> --volfile-id=/testvol /mnt/gluster
>
> You could get some useful debug data on your terminal.
>
> However, it's more likely you have a configuration related problem
> here. So the output of the following might also help:
>
> ls -la /var/lib/glusterd/vols/glnode1/
>
When I start the mount daemon from the shell, it works fine.
After mounting with -o log-level=DEBUG, I can see this:
---
[...]
[2015-01-26 08:19:48.191675] D [fuse-bridge.c:4817:fuse_thread_proc]
0-glusterfs-fuse: *terminating upon getting ENODEV when reading /dev/fuse*
[2015-01-26 08:19:48.191702] I [fuse-bridge.c:4921:fuse_thread_proc]
0-fuse: unmounting /mnt/gluster
[2015-01-26 08:19:48.191759] D [MSGID: 0]
[dht-diskusage.c:96:dht_du_info_cbk] 0-testvol-dht: subvolume
'testvol-replicate-0': avail_percent is: 83.00 and avail_space is:
19188232192 and avail_inodes is: 99.00
[2015-01-26 08:19:48.191801] D [logging.c:1740:gf_log_flush_extra_msgs]
0-logging-infra: Log buffer size reduced. About to flush 5 extra log
messages
[2015-01-26 08:19:48.191822] D [logging.c:1743:gf_log_flush_extra_msgs]
0-logging-infra: Just flushed 5 extra log messages
[2015-01-26 08:19:48.192118] W [glusterfsd.c:1194:cleanup_and_exit] (-->
0-: received signum (15), shutting down
[2015-01-26 08:19:48.192137] D
[glusterfsd-mgmt.c:2244:glusterfs_mgmt_pmap_signout] 0-fsd-mgmt:
portmapper signout arguments not given
[2015-01-26 08:19:48.192145] I [fuse-bridge.c:5599:fini] 0-fuse:
Unmounting '/mnt/gluster'.
---
But the mount still doesn't work.

Output of a few commands:
---
[***@apache2 ~]# lsmod|grep fuse
fuse 75687 3

[***@apache2 ~]# ls -l /dev/fuse
crw-rw-rw- 1 root root 10, 229 Jan 26 09:19 /dev/fuse

[***@apache2 ~]# ls -la /var/lib/glusterd/vols/testvol/
total 48
drwxr-xr-x 4 root root 4096 Jan 26 09:09 .
drwxr-xr-x. 3 root root 20 Jan 26 07:11 ..
drwxr-xr-x 2 root root 48 Jan 26 08:02 bricks
-rw------- 1 root root 16 Jan 26 09:09 cksum
-rw------- 1 root root 545 Jan 26 08:02 info
-rw------- 1 root root 93 Jan 26 08:02 node_state.info
-rw------- 1 root root 18 Jan 26 09:09 quota.cksum
-rw------- 1 root root 0 Jan 26 07:37 quota.conf
-rw------- 1 root root 12 Jan 26 08:02 rbstate
drwxr-xr-x 2 root root 30 Jan 26 09:09 run
-rw------- 1 root root 13 Jan 26 08:02 snapd.info
-rw------- 1 root root 1995 Jan 26 08:02 testvol.apache1.brick.vol
-rw------- 1 root root 1995 Jan 26 08:02 testvol.apache2.brick.vol
-rw------- 1 root root 1392 Jan 26 08:02 testvol-rebalance.vol
-rw------- 1 root root 1392 Jan 26 08:02 testvol.tcp-fuse.vol
-rw------- 1 root root 1620 Jan 26 08:02 trusted-testvol.tcp-fuse.vol
---

--
Best regards,

*Bartłomiej Syryjczyk*
A Ghoshal
2015-01-26 16:08:29 UTC
Permalink
The logs you show are reminiscent of an issue I once faced; it turned out that the glusterd service was not running. Did you check that?

Sorry if I'm stating the obvious, though. ;)

Thanks,
Anirban

-----Bartłomiej Syryjczyk <***@kamsoft.pl> wrote: -----

=======================
To:
From: Bartłomiej Syryjczyk <***@kamsoft.pl>
Date: 01/26/2015 02:28PM
Cc: gluster-***@gluster.org
Subject: Re: [Gluster-users] Mount failed
=======================
On 2015-01-22 at 17:37, A Ghoshal wrote:
> Maybe start the mount daemon from shell, like this?
>
> /usr/sbin/glusterfs --debug --volfile-server=glnode1
> --volfile-id=/testvol /mnt/gluster
>
> You could get some useful debug data on your terminal.
>
> However, it's more likely you have a configuration related problem
> here. So the output of the following might also help:
>
> ls -la /var/lib/glusterd/vols/glnode1/
>
When I start the mount daemon from the shell, it works fine.
After mounting with -o log-level=DEBUG, I can see this:
---
[...]
[2015-01-26 08:19:48.191675] D [fuse-bridge.c:4817:fuse_thread_proc]
0-glusterfs-fuse: *terminating upon getting ENODEV when reading /dev/fuse*
[2015-01-26 08:19:48.191702] I [fuse-bridge.c:4921:fuse_thread_proc]
0-fuse: unmounting /mnt/gluster
[2015-01-26 08:19:48.191759] D [MSGID: 0]
[dht-diskusage.c:96:dht_du_info_cbk] 0-testvol-dht: subvolume
'testvol-replicate-0': avail_percent is: 83.00 and avail_space is:
19188232192 and avail_inodes is: 99.00
[2015-01-26 08:19:48.191801] D [logging.c:1740:gf_log_flush_extra_msgs]
0-logging-infra: Log buffer size reduced. About to flush 5 extra log
messages
[2015-01-26 08:19:48.191822] D [logging.c:1743:gf_log_flush_extra_msgs]
0-logging-infra: Just flushed 5 extra log messages
[2015-01-26 08:19:48.192118] W [glusterfsd.c:1194:cleanup_and_exit] (-->
0-: received signum (15), shutting down
[2015-01-26 08:19:48.192137] D
[glusterfsd-mgmt.c:2244:glusterfs_mgmt_pmap_signout] 0-fsd-mgmt:
portmapper signout arguments not given
[2015-01-26 08:19:48.192145] I [fuse-bridge.c:5599:fini] 0-fuse:
Unmounting '/mnt/gluster'.
---
But the mount still doesn't work.

Output of a few commands:
---
[***@apache2 ~]# lsmod|grep fuse
fuse 75687 3

[***@apache2 ~]# ls -l /dev/fuse
crw-rw-rw- 1 root root 10, 229 Jan 26 09:19 /dev/fuse

[***@apache2 ~]# ls -la /var/lib/glusterd/vols/testvol/
total 48
drwxr-xr-x 4 root root 4096 Jan 26 09:09 .
drwxr-xr-x. 3 root root 20 Jan 26 07:11 ..
drwxr-xr-x 2 root root 48 Jan 26 08:02 bricks
-rw------- 1 root root 16 Jan 26 09:09 cksum
-rw------- 1 root root 545 Jan 26 08:02 info
-rw------- 1 root root 93 Jan 26 08:02 node_state.info
-rw------- 1 root root 18 Jan 26 09:09 quota.cksum
-rw------- 1 root root 0 Jan 26 07:37 quota.conf
-rw------- 1 root root 12 Jan 26 08:02 rbstate
drwxr-xr-x 2 root root 30 Jan 26 09:09 run
-rw------- 1 root root 13 Jan 26 08:02 snapd.info
-rw------- 1 root root 1995 Jan 26 08:02 testvol.apache1.brick.vol
-rw------- 1 root root 1995 Jan 26 08:02 testvol.apache2.brick.vol
-rw------- 1 root root 1392 Jan 26 08:02 testvol-rebalance.vol
-rw------- 1 root root 1392 Jan 26 08:02 testvol.tcp-fuse.vol
-rw------- 1 root root 1620 Jan 26 08:02 trusted-testvol.tcp-fuse.vol
---

--
Best regards,

*Bartłomiej Syryjczyk*
Bartłomiej Syryjczyk
2015-01-27 06:25:25 UTC
Permalink
On 2015-01-26 at 17:08, A Ghoshal wrote:
>
> The logs you show are reminiscent of an issue I once faced; it turned out that the glusterd service was not running. Did you check that?
>
> Sorry if I'm stating the obvious, though. ;)
It's running
---
[***@apache2 ~]# ps -fwwC glusterd,glusterfs
UID PID PPID C STIME TTY TIME CMD
root 1475 1 0 Jan26 ? 00:00:30 /usr/sbin/glusterd -p
/var/run/glusterd.pid
root 2365 1 0 Jan26 ? 00:00:50 /usr/sbin/glusterfs -s
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid
-l /var/log/glusterfs/nfs.log -S
/var/run/f4327148b8eb4b9b9cfaea6ebe1dfd90.socket
root 2372 1 0 Jan26 ? 00:00:51 /usr/sbin/glusterfs -s
localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/3e4d4428b4cbe19746411c9122b6c5c0.socket --xlator-option
*replicate*.node-uuid=4242af2d-55ae-49f4-88a0-7a8d714015c6

[***@apache2 ~]# systemctl status glusterfsd
glusterfsd.service - GlusterFS brick processes (stopping only)
Loaded: loaded (/usr/lib/systemd/system/glusterfsd.service; enabled)
Active: active (exited) since Mon 2015-01-26 21:25:01 CET; 9h ago
Process: 3004 ExecStop=/bin/sh -c /bin/killall --wait glusterfsd ||
/bin/true (code=exited, status=0/SUCCESS)
Process: 3009 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 3009 (code=exited, status=0/SUCCESS)

Jan 26 21:25:01 apache2.kamsoft.local systemd[1]: Started GlusterFS
brick processes (stopping only).

[***@apache2 ~]# systemctl status glusterd
glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
Active: active (running) since Mon 2015-01-26 12:11:09 CET; 19h ago
Process: 1316 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
(code=exited, status=0/SUCCESS)
Main PID: 1475 (glusterd)
CGroup: /system.slice/glusterd.service
├─1475 /usr/sbin/glusterd -p /var/run/glusterd.pid
├─2365 /usr/sbin/glusterfs -s localhost --volfile-id
gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l
/var/log/glusterfs/nfs.log -S
/var/run/f4327148b8eb4b9b9cfaea6ebe1dfd90.socket
├─2372 /usr/sbin/glusterfs -s localhost --volfile-id
gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/3e4d4428b4cbe19746411c9122b6c5c0.socket --xlator-op...
└─2381 /sbin/rpc.statd

Jan 26 12:11:06 apache2.kamsoft.local systemd[1]: Starting GlusterFS, a
clustered file-system server...
Jan 26 12:11:09 apache2.kamsoft.local systemd[1]: Started GlusterFS, a
clustered file-system server.
Jan 26 12:11:17 apache2.kamsoft.local rpc.statd[2381]: Version 1.3.0
starting
---

--
Best regards,

*Bartłomiej Syryjczyk*
Pranith Kumar Karampuri
2015-01-27 01:54:04 UTC
Permalink
On 01/26/2015 02:27 PM, Bartłomiej Syryjczyk wrote:
> On 2015-01-22 at 17:37, A Ghoshal wrote:
>> Maybe start the mount daemon from shell, like this?
>>
>> /usr/sbin/glusterfs --debug --volfile-server=glnode1
>> --volfile-id=/testvol /mnt/gluster
>>
>> You could get some useful debug data on your terminal.
>>
>> However, it's more likely you have a configuration related problem
>> here. So the output of the following might also help:
>>
>> ls -la /var/lib/glusterd/vols/glnode1/
>>
> When I start the mount daemon from the shell, it works fine.
> After mounting with -o log-level=DEBUG, I can see this:
> ---
> [...]
> [2015-01-26 08:19:48.191675] D [fuse-bridge.c:4817:fuse_thread_proc]
> 0-glusterfs-fuse: *terminating upon getting ENODEV when reading /dev/fuse*
Could you check if the device is present or not, using the following
command?
07:16:10 :) ⚡ ls /dev/fuse -l
crw-rw-rw-. 1 root root 10, 229 Jan 25 07:31 /dev/fuse

Pranith
> [2015-01-26 08:19:48.191702] I [fuse-bridge.c:4921:fuse_thread_proc]
> 0-fuse: unmounting /mnt/gluster
> [2015-01-26 08:19:48.191759] D [MSGID: 0]
> [dht-diskusage.c:96:dht_du_info_cbk] 0-testvol-dht: subvolume
> 'testvol-replicate-0': avail_percent is: 83.00 and avail_space is:
> 19188232192 and avail_inodes is: 99.00
> [2015-01-26 08:19:48.191801] D [logging.c:1740:gf_log_flush_extra_msgs]
> 0-logging-infra: Log buffer size reduced. About to flush 5 extra log
> messages
> [2015-01-26 08:19:48.191822] D [logging.c:1743:gf_log_flush_extra_msgs]
> 0-logging-infra: Just flushed 5 extra log messages
> [2015-01-26 08:19:48.192118] W [glusterfsd.c:1194:cleanup_and_exit] (-->
> 0-: received signum (15), shutting down
> [2015-01-26 08:19:48.192137] D
> [glusterfsd-mgmt.c:2244:glusterfs_mgmt_pmap_signout] 0-fsd-mgmt:
> portmapper signout arguments not given
> [2015-01-26 08:19:48.192145] I [fuse-bridge.c:5599:fini] 0-fuse:
> Unmounting '/mnt/gluster'.
> ---
> But the mount still doesn't work.
>
> Output of a few commands:
> ---
> [***@apache2 ~]# lsmod|grep fuse
> fuse 75687 3
>
> [***@apache2 ~]# ls -l /dev/fuse
> crw-rw-rw- 1 root root 10, 229 Jan 26 09:19 /dev/fuse
>
> [***@apache2 ~]# ls -la /var/lib/glusterd/vols/testvol/
> total 48
> drwxr-xr-x 4 root root 4096 Jan 26 09:09 .
> drwxr-xr-x. 3 root root 20 Jan 26 07:11 ..
> drwxr-xr-x 2 root root 48 Jan 26 08:02 bricks
> -rw------- 1 root root 16 Jan 26 09:09 cksum
> -rw------- 1 root root 545 Jan 26 08:02 info
> -rw------- 1 root root 93 Jan 26 08:02 node_state.info
> -rw------- 1 root root 18 Jan 26 09:09 quota.cksum
> -rw------- 1 root root 0 Jan 26 07:37 quota.conf
> -rw------- 1 root root 12 Jan 26 08:02 rbstate
> drwxr-xr-x 2 root root 30 Jan 26 09:09 run
> -rw------- 1 root root 13 Jan 26 08:02 snapd.info
> -rw------- 1 root root 1995 Jan 26 08:02 testvol.apache1.brick.vol
> -rw------- 1 root root 1995 Jan 26 08:02 testvol.apache2.brick.vol
> -rw------- 1 root root 1392 Jan 26 08:02 testvol-rebalance.vol
> -rw------- 1 root root 1392 Jan 26 08:02 testvol.tcp-fuse.vol
> -rw------- 1 root root 1620 Jan 26 08:02 trusted-testvol.tcp-fuse.vol
> ---
>
Bartłomiej Syryjczyk
2015-01-27 06:28:09 UTC
Permalink
On 2015-01-27 at 02:54, Pranith Kumar Karampuri wrote:
>
> On 01/26/2015 02:27 PM, Bartłomiej Syryjczyk wrote:
>> On 2015-01-22 at 17:37, A Ghoshal wrote:
>>> Maybe start the mount daemon from shell, like this?
>>>
>>> /usr/sbin/glusterfs --debug --volfile-server=glnode1
>>> --volfile-id=/testvol /mnt/gluster
>>>
>>> You could get some useful debug data on your terminal.
>>>
>>> However, it's more likely you have a configuration related problem
>>> here. So the output of the following might also help:
>>>
>>> ls -la /var/lib/glusterd/vols/glnode1/
>>>
>> When I start the mount daemon from the shell, it works fine.
>> After mounting with -o log-level=DEBUG, I can see this:
>> ---
>> [...]
>> [2015-01-26 08:19:48.191675] D [fuse-bridge.c:4817:fuse_thread_proc]
>> 0-glusterfs-fuse: *terminating upon getting ENODEV when reading
>> /dev/fuse*
> Could you check if the device is present or not, using the following
> command?
> 07:16:10 :) ⚡ ls /dev/fuse -l
> crw-rw-rw-. 1 root root 10, 229 Jan 25 07:31 /dev/fuse
I sent that earlier (the only difference is that I have SELinux disabled):

>>
>> [***@apache2 ~]# ls -l /dev/fuse
>> crw-rw-rw- 1 root root 10, 229 Jan 26 09:19 /dev/fuse

--
Best regards,

*Bartłomiej Syryjczyk*
Franco Broi
2015-01-27 06:45:51 UTC
Permalink
There must be something wrong with your mount.glusterfs script; you could try
running it with sh -x to see what command it tries to run.

On Tue, 2015-01-27 at 07:28 +0100, Bartłomiej Syryjczyk wrote:
> On 2015-01-27 at 02:54, Pranith Kumar Karampuri wrote:
> >
> > On 01/26/2015 02:27 PM, Bartłomiej Syryjczyk wrote:
> >> On 2015-01-22 at 17:37, A Ghoshal wrote:
> >>> Maybe start the mount daemon from shell, like this?
> >>>
> >>> /usr/sbin/glusterfs --debug --volfile-server=glnode1
> >>> --volfile-id=/testvol /mnt/gluster
> >>>
> >>> You could get some useful debug data on your terminal.
> >>>
> >>> However, it's more likely you have a configuration related problem
> >>> here. So the output of the following might also help:
> >>>
> >>> ls -la /var/lib/glusterd/vols/glnode1/
> >>>
> >> When I start the mount daemon from the shell, it works fine.
> >> After mounting with -o log-level=DEBUG, I can see this:
> >> ---
> >> [...]
> >> [2015-01-26 08:19:48.191675] D [fuse-bridge.c:4817:fuse_thread_proc]
> >> 0-glusterfs-fuse: *terminating upon getting ENODEV when reading
> >> /dev/fuse*
> > Could you check if the device is present or not, using the following
> > command?
> > 07:16:10 :) ⚡ ls /dev/fuse -l
> > crw-rw-rw-. 1 root root 10, 229 Jan 25 07:31 /dev/fuse
> I sent it earlier (difference is that I've got SELinux disabled)
>
> >>
> >> [***@apache2 ~]# ls -l /dev/fuse
> >> crw-rw-rw- 1 root root 10, 229 Jan 26 09:19 /dev/fuse
>
Bartłomiej Syryjczyk
2015-01-27 07:29:20 UTC
Permalink
On 2015-01-27 at 07:45, Franco Broi wrote:
> There must be something wrong with your mount.glusterfs script; you could try
> running it with sh -x to see what command it tries to run.
This is output:
---
[***@apache2 ~]# sh -x mount.glusterfs apache1:/testvol /mnt/gluster
+ _init apache1:/testvol /mnt/gluster
+ LOG_NONE=NONE
+ LOG_CRITICAL=CRITICAL
+ LOG_ERROR=ERROR
+ LOG_WARNING=WARNING
+ LOG_INFO=INFO
+ LOG_DEBUG=DEBUG
+ LOG_TRACE=TRACE
+ HOST_NAME_MAX=64
+ prefix=/usr
+ exec_prefix=/usr
++ echo /usr/sbin/glusterfs
+ cmd_line=/usr/sbin/glusterfs
+ export PATH
++ which getfattr
+ getfattr=
+ '[' 1 -ne 0 ']'
+ warn 'WARNING: getfattr not found, certain checks will be skipped..'
+ echo 'WARNING: getfattr not found, certain checks will be skipped..'
WARNING: getfattr not found, certain checks will be skipped..
+ mounttab=/proc/mounts
++ uname -s
+ uname_s=Linux
+ case ${uname_s} in
+ getinode='stat -c %i'
+ getdev='stat -c %d'
+ lgetinode='stat -c %i -L'
+ lgetdev='stat -c %d -L'
+ UPDATEDBCONF=/etc/updatedb.conf
+ main apache1:/testvol /mnt/gluster
+ '[' xLinux = xLinux ']'
+ volfile_loc=apache1:/testvol
+ mount_point=/mnt/gluster
+ shift 2
+ getopts Vo:hn opt
+ '[' xLinux = xNetBSD ']'
+ '[' -r apache1:/testvol ']'
++ echo apache1:/testvol
++ sed -n 's/\([a-zA-Z0-9:.\-]*\):.*/\1/p'
+ server_ip=apache1
++ echo apache1:/testvol
++ sed -n 's/.*:\([^ ]*\).*/\1/p'
+ volume_str=/testvol
+ '[' -n /testvol ']'
+ volume_id=/testvol
+ volfile_loc=
+ '[' -z /testvol -o -z apache1 ']'
++ echo /mnt/gluster
++ grep '^\-o'
+ grep_ret=
+ '[' x '!=' x ']'
+ '[' -z /mnt/gluster -o '!' -d /mnt/gluster ']'
+ grep -q '[[:space:]+]/mnt/gluster[[:space:]+]fuse' /proc/mounts
+ case $volume_id in
+ check_recursive_mount /mnt/gluster
+ '[' /mnt/gluster = / ']'
+ mnt_dir=/mnt/gluster
+ '[' -n '' ']'
+ GLUSTERD_WORKDIR=/var/lib/glusterd
+ ls -L /var/lib/glusterd/vols/testvol/bricks/apache1:-brick
/var/lib/glusterd/vols/testvol/bricks/apache2:-brick
+ '[' 0 -ne 0 ']'
++ grep '^path' /var/lib/glusterd/vols/testvol/bricks/apache1:-brick
/var/lib/glusterd/vols/testvol/bricks/apache2:-brick
++ cut -d = -f 2
+ brick_path='/brick
/brick'
++ stat -c %i -L /
+ root_inode=128
++ stat -c %d -L /
+ root_dev=64513
++ stat -c %i -L /mnt/gluster
+ mnt_inode=101729533
++ stat -c %d -L /mnt/gluster
+ mnt_dev=64513
+ for brick in '"$brick_path"'
+ ls /brick /brick
+ '[' 0 -ne 0 ']'
+ '[' -n '' ']'
+ continue
+ update_updatedb
+ test -f /etc/updatedb.conf
+ start_glusterfs
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -z '' ']'
+ '[' -n apache1 ']'
++ parse_volfile_servers apache1
++ local server_list=apache1
++ local servers=
++ local new_servers=
+++ echo apache1
+++ sed 's/,/ /g'
++ servers=apache1
++ for server in '${servers}'
++ is_valid_hostname apache1
++ local server=apache1
+++ echo apache1
+++ wc -c
++ length=8
++ '[' 8 -gt 64 ']'
++ '[' 0 -eq 1 ']'
+++ echo ' apache1'
++ new_servers=' apache1'
++ echo apache1
+ servers=apache1
+ '[' -n apache1 ']'
++ echo apache1
+ for i in '$(echo ${servers})'
++ echo '/usr/sbin/glusterfs --volfile-server=apache1'
+ cmd_line='/usr/sbin/glusterfs --volfile-server=apache1'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n '' ']'
+ '[' -n /testvol ']'
+ '[' -n '' ']'
++ echo '/usr/sbin/glusterfs --volfile-server=apache1 --volfile-id=/testvol'
+ cmd_line='/usr/sbin/glusterfs --volfile-server=apache1
--volfile-id=/testvol'
+ '[' -n '' ']'
++ echo '/usr/sbin/glusterfs --volfile-server=apache1
--volfile-id=/testvol /mnt/gluster'
+ cmd_line='/usr/sbin/glusterfs --volfile-server=apache1
--volfile-id=/testvol /mnt/gluster'
+ /usr/sbin/glusterfs --volfile-server=apache1 --volfile-id=/testvol
/mnt/gluster
+ '[' 0 -ne 0 ']'
++ stat -c %i /mnt/gluster
+ inode=
+ '[' 1 -ne 0 ']'
+ warn 'Mount failed. Please check the log file for more details.'
+ echo 'Mount failed. Please check the log file for more details.'
Mount failed. Please check the log file for more details.
+ umount /mnt/gluster

[***@apache2 ~]# which mount.glusterfs
/usr/sbin/mount.glusterfs

[***@apache2 ~]# yum provides /usr/sbin/mount.glusterfs
glusterfs-fuse-3.6.2-1.el7.x86_64 : Fuse client
Repo : @glusterfs-epel
Matched from:
Filename : /usr/sbin/mount.glusterfs
---

--
Best regards,

*Bartłomiej Syryjczyk*
Franco Broi
2015-01-27 07:36:37 UTC
Permalink
It seems to mount and then unmount it because the mount point's inode isn't 1. Weird!
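A minimal sketch of that final check, reconstructed from the sh -x trace: after launching the client, the script stats the mount point and treats any root inode other than 1 as a failed mount (a healthy FUSE mount reports inode 1 at its root).

```shell
#!/bin/sh
# Reconstruction of the sanity check mount.glusterfs runs after launching
# the glusterfs client: stat the mount point and treat any root inode
# other than 1 as a failed mount.
check_fuse_mount() {
    inode=$(stat -c %i "$1" 2>/dev/null)
    if [ "$inode" = "1" ]; then
        echo "mounted"
    else
        echo "not mounted (inode=${inode:-unavailable})"
        return 1
    fi
}

d=$(mktemp -d)
check_fuse_mount "$d"   # an ordinary directory: prints "not mounted (inode=...)"
rmdir "$d"
```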

On Tue, 2015-01-27 at 08:29 +0100, Bartłomiej Syryjczyk wrote:
> On 2015-01-27 at 07:45, Franco Broi wrote:
> > There must be something wrong with your mount.glusterfs script; you could try
> > running it with sh -x to see what command it tries to run.
> This is output:
> ---
> [***@apache2 ~]# sh -x mount.glusterfs apache1:/testvol /mnt/gluster
> + _init apache1:/testvol /mnt/gluster
> + LOG_NONE=NONE
> + LOG_CRITICAL=CRITICAL
> + LOG_ERROR=ERROR
> + LOG_WARNING=WARNING
> + LOG_INFO=INFO
> + LOG_DEBUG=DEBUG
> + LOG_TRACE=TRACE
> + HOST_NAME_MAX=64
> + prefix=/usr
> + exec_prefix=/usr
> ++ echo /usr/sbin/glusterfs
> + cmd_line=/usr/sbin/glusterfs
> + export PATH
> ++ which getfattr
> + getfattr=
> + '[' 1 -ne 0 ']'
> + warn 'WARNING: getfattr not found, certain checks will be skipped..'
> + echo 'WARNING: getfattr not found, certain checks will be skipped..'
> WARNING: getfattr not found, certain checks will be skipped..
> + mounttab=/proc/mounts
> ++ uname -s
> + uname_s=Linux
> + case ${uname_s} in
> + getinode='stat -c %i'
> + getdev='stat -c %d'
> + lgetinode='stat -c %i -L'
> + lgetdev='stat -c %d -L'
> + UPDATEDBCONF=/etc/updatedb.conf
> + main apache1:/testvol /mnt/gluster
> + '[' xLinux = xLinux ']'
> + volfile_loc=apache1:/testvol
> + mount_point=/mnt/gluster
> + shift 2
> + getopts Vo:hn opt
> + '[' xLinux = xNetBSD ']'
> + '[' -r apache1:/testvol ']'
> ++ echo apache1:/testvol
> ++ sed -n 's/\([a-zA-Z0-9:.\-]*\):.*/\1/p'
> + server_ip=apache1
> ++ echo apache1:/testvol
> ++ sed -n 's/.*:\([^ ]*\).*/\1/p'
> + volume_str=/testvol
> + '[' -n /testvol ']'
> + volume_id=/testvol
> + volfile_loc=
> + '[' -z /testvol -o -z apache1 ']'
> ++ echo /mnt/gluster
> ++ grep '^\-o'
> + grep_ret=
> + '[' x '!=' x ']'
> + '[' -z /mnt/gluster -o '!' -d /mnt/gluster ']'
> + grep -q '[[:space:]+]/mnt/gluster[[:space:]+]fuse' /proc/mounts
> + case $volume_id in
> + check_recursive_mount /mnt/gluster
> + '[' /mnt/gluster = / ']'
> + mnt_dir=/mnt/gluster
> + '[' -n '' ']'
> + GLUSTERD_WORKDIR=/var/lib/glusterd
> + ls -L /var/lib/glusterd/vols/testvol/bricks/apache1:-brick
> /var/lib/glusterd/vols/testvol/bricks/apache2:-brick
> + '[' 0 -ne 0 ']'
> ++ grep '^path' /var/lib/glusterd/vols/testvol/bricks/apache1:-brick
> /var/lib/glusterd/vols/testvol/bricks/apache2:-brick
> ++ cut -d = -f 2
> + brick_path='/brick
> /brick'
> ++ stat -c %i -L /
> + root_inode=128
> ++ stat -c %d -L /
> + root_dev=64513
> ++ stat -c %i -L /mnt/gluster
> + mnt_inode=101729533
> ++ stat -c %d -L /mnt/gluster
> + mnt_dev=64513
> + for brick in '"$brick_path"'
> + ls /brick /brick
> + '[' 0 -ne 0 ']'
> + '[' -n '' ']'
> + continue
> + update_updatedb
> + test -f /etc/updatedb.conf
> + start_glusterfs
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -z '' ']'
> + '[' -n apache1 ']'
> ++ parse_volfile_servers apache1
> ++ local server_list=apache1
> ++ local servers=
> ++ local new_servers=
> +++ echo apache1
> +++ sed 's/,/ /g'
> ++ servers=apache1
> ++ for server in '${servers}'
> ++ is_valid_hostname apache1
> ++ local server=apache1
> +++ echo apache1
> +++ wc -c
> ++ length=8
> ++ '[' 8 -gt 64 ']'
> ++ '[' 0 -eq 1 ']'
> +++ echo ' apache1'
> ++ new_servers=' apache1'
> ++ echo apache1
> + servers=apache1
> + '[' -n apache1 ']'
> ++ echo apache1
> + for i in '$(echo ${servers})'
> ++ echo '/usr/sbin/glusterfs --volfile-server=apache1'
> + cmd_line='/usr/sbin/glusterfs --volfile-server=apache1'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n '' ']'
> + '[' -n /testvol ']'
> + '[' -n '' ']'
> ++ echo '/usr/sbin/glusterfs --volfile-server=apache1 --volfile-id=/testvol'
> + cmd_line='/usr/sbin/glusterfs --volfile-server=apache1
> --volfile-id=/testvol'
> + '[' -n '' ']'
> ++ echo '/usr/sbin/glusterfs --volfile-server=apache1
> --volfile-id=/testvol /mnt/gluster'
> + cmd_line='/usr/sbin/glusterfs --volfile-server=apache1
> --volfile-id=/testvol /mnt/gluster'
> + /usr/sbin/glusterfs --volfile-server=apache1 --volfile-id=/testvol
> /mnt/gluster
> + '[' 0 -ne 0 ']'
> ++ stat -c %i /mnt/gluster
> + inode=
> + '[' 1 -ne 0 ']'
> + warn 'Mount failed. Please check the log file for more details.'
> + echo 'Mount failed. Please check the log file for more details.'
> Mount failed. Please check the log file for more details.
> + umount /mnt/gluster
>
> [***@apache2 ~]# which mount.glusterfs
> /usr/sbin/mount.glusterfs
>
> [***@apache2 ~]# yum provides /usr/sbin/mount.glusterfs
> glusterfs-fuse-3.6.2-1.el7.x86_64 : Fuse client
> Repo : @glusterfs-epel
> Matched from:
> Filename : /usr/sbin/mount.glusterfs
> ---
>
Bartłomiej Syryjczyk
2015-01-27 07:43:22 UTC
Permalink
On 2015-01-27 at 08:36, Franco Broi wrote:
> Seems to mount and then umount it because the inode isn't 1, weird!
>
>
Why must it be 1? Are you sure it's the inode, not the exit code?
Can you check on your system?
---
[***@apache2 ~]# stat -c %i /brick/
68990719
[***@apache2 ~]# stat -c %i /mnt/gluster
101729533
[***@apache2 ~]# stat -c %i /tmp
100663425
[***@apache2 ~]# stat -c %i /root
67153025
---

--
Best regards,

*Bartłomiej Syryjczyk*
Franco Broi
2015-01-27 07:46:28 UTC
Permalink
[***@charlie4 ~]$ stat -c %i /data
1
[***@charlie4 ~]$ stat -c %i /data2
1

Can you check that running the mount manually actually worked, i.e. can
you list the files after mounting manually?

/usr/sbin/glusterfs --volfile-server=apache1 --volfile-id=/testvol /mnt/gluster
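[Editorial note: a FUSE filesystem's root directory reports inode 1, which is why the script treats any other inode as "not mounted". A minimal sketch of that test; is_mount_root is a made-up name, not part of mount.glusterfs:

```shell
#!/bin/sh
# The root of a mounted FUSE volume reports inode 1; an ordinary
# directory reports whatever inode the underlying disk gave it.
# A failed stat is treated as "not mounted".
is_mount_root() {
    inode=$(stat -c %i "$1" 2>/dev/null) || return 1
    [ "$inode" -eq 1 ]
}

is_mount_root /proc && echo "/proc looks like a mount root"
```

procfs happens to use the same inode-1 convention for its root, so /proc serves as a positive example even without a gluster mount.]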

On Tue, 2015-01-27 at 08:43 +0100, Bartłomiej Syryjczyk wrote:
> On 2015-01-27 at 08:36, Franco Broi wrote:
> > Seems to mount and then umount it because the inode isn't 1, weird!
> >
> >
> Why must it be 1? Are you sure it's the inode, not the exit code?
> Can you check on your system?
> ---
> [***@apache2 ~]# stat -c %i /brick/
> 68990719
> [***@apache2 ~]# stat -c %i /mnt/gluster
> 101729533
> [***@apache2 ~]# stat -c %i /tmp
> 100663425
> [***@apache2 ~]# stat -c %i /root
> 67153025
> ---
>
Bartłomiej Syryjczyk
2015-01-27 07:52:30 UTC
Permalink
On 2015-01-27 at 08:46, Franco Broi wrote:
> [***@charlie4 ~]$ stat -c %i /data
> 1
> [***@charlie4 ~]$ stat -c %i /data2
> 1
>
> Can you check that running the mount manually actually worked, i.e. can
> you list the files after mounting manually?
>
> /usr/sbin/glusterfs --volfile-server=apache1 --volfile-id=/testvol /mnt/gluster
Works fine
---
[***@apache2 ~]# /usr/sbin/glusterfs --volfile-server=apache1
--volfile-id=/testvol /mnt/gluster

[***@apache2 ~]# ls -l /brick /mnt/gluster
/brick:
total 1355800
-rw-r--r-- 2 root root 1152 Jan 26 08:04 passwd
-rw-r--r-- 2 root root 104857600 Jan 26 11:49 zero-2.dd
-rw-r--r-- 2 root root 104857600 Jan 26 11:50 zero-3.dd
-rw-r--r-- 2 root root 104857600 Jan 26 11:52 zero-4.dd
-rw-r--r-- 2 root root 1073741824 Jan 26 08:26 zero.dd

/mnt/gluster:
total 1355778
-rw-r--r-- 1 root root 1152 Jan 26 08:03 passwd
-rw-r--r-- 1 root root 104857600 Jan 26 11:50 zero-2.dd
-rw-r--r-- 1 root root 104857600 Jan 26 11:50 zero-3.dd
-rw-r--r-- 1 root root 104857600 Jan 26 11:52 zero-4.dd
-rw-r--r-- 1 root root 1073741824 Jan 26 08:25 zero.dd

[***@apache2 ~]# umount /mnt/gluster/

[***@apache2 ~]# ls -l /mnt/gluster
total 0
---

--
Best regards,

*Bartłomiej Syryjczyk*
Franco Broi
2015-01-27 08:00:40 UTC
Permalink
Your getinode isn't working...

+ '[' 0 -ne 0 ']'
++ stat -c %i /mnt/gluster
+ inode=
+ '[' 1 -ne 0 ']'

How old is your mount.glusterfs script?

On Tue, 2015-01-27 at 08:52 +0100, Bartłomiej Syryjczyk wrote:
> On 2015-01-27 at 08:46, Franco Broi wrote:
> > [***@charlie4 ~]$ stat -c %i /data
> > 1
> > [***@charlie4 ~]$ stat -c %i /data2
> > 1
> >
> > Can you check that running the mount manually actually worked, i.e. can
> > you list the files after mounting manually?
> >
> > /usr/sbin/glusterfs --volfile-server=apache1 --volfile-id=/testvol /mnt/gluster
> Works fine
> ---
> [***@apache2 ~]# /usr/sbin/glusterfs --volfile-server=apache1
> --volfile-id=/testvol /mnt/gluster
>
> [***@apache2 ~]# ls -l /brick /mnt/gluster
> /brick:
> total 1355800
> -rw-r--r-- 2 root root 1152 Jan 26 08:04 passwd
> -rw-r--r-- 2 root root 104857600 Jan 26 11:49 zero-2.dd
> -rw-r--r-- 2 root root 104857600 Jan 26 11:50 zero-3.dd
> -rw-r--r-- 2 root root 104857600 Jan 26 11:52 zero-4.dd
> -rw-r--r-- 2 root root 1073741824 Jan 26 08:26 zero.dd
>
> /mnt/gluster:
> total 1355778
> -rw-r--r-- 1 root root 1152 Jan 26 08:03 passwd
> -rw-r--r-- 1 root root 104857600 Jan 26 11:50 zero-2.dd
> -rw-r--r-- 1 root root 104857600 Jan 26 11:50 zero-3.dd
> -rw-r--r-- 1 root root 104857600 Jan 26 11:52 zero-4.dd
> -rw-r--r-- 1 root root 1073741824 Jan 26 08:25 zero.dd
>
> [***@apache2 ~]# umount /mnt/gluster/
>
> [***@apache2 ~]# ls -l /mnt/gluster
> total 0
> ---
>
Bartłomiej Syryjczyk
2015-01-27 08:05:03 UTC
Permalink
On 2015-01-27 at 09:00, Franco Broi wrote:
> Your getinode isn't working...
>
> + '[' 0 -ne 0 ']'
> ++ stat -c %i /mnt/gluster
> + inode=
> + '[' 1 -ne 0 ']'
>
> How old is your mount.glusterfs script?
It's fresh (I think). It's from the official repo:
---
[***@apache2 ~]# which mount.glusterfs
/usr/sbin/mount.glusterfs

[***@apache2 ~]# ls -l /usr/sbin/mount.glusterfs
-rwxr-xr-x 1 root root 16908 Jan 22 14:00 /usr/sbin/mount.glusterfs

[***@apache2 ~]# yum provides /usr/sbin/mount.glusterfs
glusterfs-fuse-3.6.2-1.el7.x86_64 : Fuse client
Repo : @glusterfs-epel
Matched from:
Filename : /usr/sbin/mount.glusterfs

[***@apache2 ~]# md5sum /usr/sbin/mount.glusterfs
a302f984367c93fd94ab3ad73386e66a /usr/sbin/mount.glusterfs
---

--
Best regards,

*Bartłomiej Syryjczyk*
Franco Broi
2015-01-27 08:08:09 UTC
Permalink
So what is the inode of your mounted gluster filesystem? And does
running 'mount' show it as being fuse.glusterfs?
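[Editorial note: the trace earlier shows mount.glusterfs itself doing a similar check with grep against /proc/mounts. The same test can be done a bit more strictly by matching the mount-point and fstype fields; is_glusterfs_mounted is a made-up helper, not from the script:

```shell
#!/bin/sh
# Succeed iff the given path appears in /proc/mounts as the mount
# point of a fuse.glusterfs filesystem. Fields in /proc/mounts are:
# device, mountpoint, fstype, options, dump, pass.
is_glusterfs_mounted() {
    awk -v mnt="$1" '$2 == mnt && $3 == "fuse.glusterfs" { found = 1 }
        END { exit !found }' /proc/mounts
}

is_glusterfs_mounted /mnt/gluster && echo "mounted" || echo "not mounted"
```

Note that mount points containing spaces are octal-escaped in /proc/mounts, so an exact string match only works for plain paths.]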

On Tue, 2015-01-27 at 09:05 +0100, Bartłomiej Syryjczyk wrote:
> On 2015-01-27 at 09:00, Franco Broi wrote:
> > Your getinode isn't working...
> >
> > + '[' 0 -ne 0 ']'
> > ++ stat -c %i /mnt/gluster
> > + inode=
> > + '[' 1 -ne 0 ']'
> >
> > How old is your mount.glusterfs script?
> It's fresh (I think). It's from the official repo:
> ---
> [***@apache2 ~]# which mount.glusterfs
> /usr/sbin/mount.glusterfs
>
> [***@apache2 ~]# ls -l /usr/sbin/mount.glusterfs
> -rwxr-xr-x 1 root root 16908 Jan 22 14:00 /usr/sbin/mount.glusterfs
>
> [***@apache2 ~]# yum provides /usr/sbin/mount.glusterfs
> glusterfs-fuse-3.6.2-1.el7.x86_64 : Fuse client
> Repo : @glusterfs-epel
> Matched from:
> Filename : /usr/sbin/mount.glusterfs
>
> [***@apache2 ~]# md5sum /usr/sbin/mount.glusterfs
> a302f984367c93fd94ab3ad73386e66a /usr/sbin/mount.glusterfs
> ---
>
Bartłomiej Syryjczyk
2015-01-27 08:12:37 UTC
Permalink
On 2015-01-27 at 09:08, Franco Broi wrote:
> So what is the inode of your mounted gluster filesystem? And does
> running 'mount' show it as being fuse.glusterfs?
---
[***@apache2 ~]# stat -c %i /mnt/gluster
101729533

[***@apache2 ~]# /usr/sbin/glusterfs --volfile-server=apache1
--volfile-id=/testvol /mnt/gluster

[***@apache2 ~]# stat -c %i /mnt/gluster
1

[***@apache2 ~]# mount|grep fuse
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
apache1:/testvol on /mnt/gluster type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
---

--
Best regards,

*Bartłomiej Syryjczyk*
Franco Broi
2015-01-27 08:20:26 UTC
Permalink
Well I'm stumped, just seems like the mount.glusterfs script isn't
working. I'm still running 3.5.1 and the getinode bit of my script looks
like this:

...
Linux)
getinode="stat -c %i $i"

...
inode=$( ${getinode} $mount_point 2>/dev/null);

# this is required if the stat returns error
if [ -z "$inode" ]; then
inode="0";
fi

if [ $inode -ne 1 ]; then
err=1;
fi



On Tue, 2015-01-27 at 09:12 +0100, Bartłomiej Syryjczyk wrote:
> On 2015-01-27 at 09:08, Franco Broi wrote:
> > So what is the inode of your mounted gluster filesystem? And does
> > running 'mount' show it as being fuse.glusterfs?
> ---
> [***@apache2 ~]# stat -c %i /mnt/gluster
> 101729533
>
> [***@apache2 ~]# /usr/sbin/glusterfs --volfile-server=apache1
> --volfile-id=/testvol /mnt/gluster
>
> [***@apache2 ~]# stat -c %i /mnt/gluster
> 1
>
> [***@apache2 ~]# mount|grep fuse
> fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
> apache1:/testvol on /mnt/gluster type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> ---
>
Franco Broi
2015-01-27 08:43:40 UTC
Permalink
Could this be a case of Oracle Linux being evil?

On Tue, 2015-01-27 at 16:20 +0800, Franco Broi wrote:
> Well I'm stumped, just seems like the mount.glusterfs script isn't
> working. I'm still running 3.5.1 and the getinode bit of my script looks
> like this:
>
> ...
> Linux)
> getinode="stat -c %i $i"
>
> ...
> inode=$( ${getinode} $mount_point 2>/dev/null);
>
> # this is required if the stat returns error
> if [ -z "$inode" ]; then
> inode="0";
> fi
>
> if [ $inode -ne 1 ]; then
> err=1;
> fi
>
>
>
> On Tue, 2015-01-27 at 09:12 +0100, Bartłomiej Syryjczyk wrote:
> > On 2015-01-27 at 09:08, Franco Broi wrote:
> > > So what is the inode of your mounted gluster filesystem? And does
> > > running 'mount' show it as being fuse.glusterfs?
> > ---
> > [***@apache2 ~]# stat -c %i /mnt/gluster
> > 101729533
> >
> > [***@apache2 ~]# /usr/sbin/glusterfs --volfile-server=apache1
> > --volfile-id=/testvol /mnt/gluster
> >
> > [***@apache2 ~]# stat -c %i /mnt/gluster
> > 1
> >
> > [***@apache2 ~]# mount|grep fuse
> > fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
> > apache1:/testvol on /mnt/gluster type fuse.glusterfs
> > (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
> > ---
> >
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-***@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
Bartłomiej Syryjczyk
2015-01-27 10:27:12 UTC
Permalink
On 2015-01-27 at 09:43, Franco Broi wrote:
> Could this be a case of Oracle Linux being evil?
Yes, I'm using OEL: 7.0 now, 6.6 earlier.

--
Best regards,

*Bartłomiej Syryjczyk*
Bartłomiej Syryjczyk
2015-01-27 08:47:23 UTC
Permalink
On 2015-01-27 at 09:20, Franco Broi wrote:
> Well I'm stumped, just seems like the mount.glusterfs script isn't
> working. I'm still running 3.5.1 and the getinode bit of my script looks
> like this:
>
> ...
> Linux)
> getinode="stat -c %i $i"
>
> ...
> inode=$( ${getinode} $mount_point 2>/dev/null);
>
> # this is required if the stat returns error
> if [ -z "$inode" ]; then
> inode="0";
> fi
>
> if [ $inode -ne 1 ]; then
> err=1;
> fi
My script is supposed to check the return code, not the inode; there is
a comment right there saying so. Or maybe I don't understand the
construct on line 298:
---
[...]
49 Linux)
50 getinode="stat -c %i"
[...]
298 inode=$( ${getinode} $mount_point 2>/dev/null);
299 # this is required if the stat returns error
300 if [ $? -ne 0 ]; then
301 warn "Mount failed. Please check the log file for more details."
302 umount $mount_point > /dev/null 2>&1;
303 exit 1;
304 fi
---

When I insert a command with a 0 exit code between lines 298 and 300, e.g.
echo $?;
it works.

With the script from the 3.6.1 package I had the same problem.
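[Editorial note: the construct on line 298 is valid shell. After an assignment of the form var=$(cmd), $? carries cmd's exit status, so the 3.6.x check really does test stat's return code. A stand-alone demonstration (the paths are arbitrary examples):

```shell
#!/bin/sh
# After var=$(cmd), $? is cmd's exit status, which is exactly what
# lines 298-300 of mount.glusterfs rely on.
inode=$(stat -c %i /no/such/path 2>/dev/null)
echo "failing stat exit status: $?"    # non-zero

inode=$(stat -c %i / 2>/dev/null)
echo "succeeding stat exit status: $?" # 0
```

So an empty inode plus a non-zero $?, as in the trace, means stat itself errored on the mount point, not that the script's logic is wrong.]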

--
Best regards,

*Bartłomiej Syryjczyk*
Franco Broi
2015-01-27 08:53:00 UTC
Permalink
What do you get if you do this?

bash-4.1# stat -c %i /mnt/gluster
1
-bash-4.1# echo $?
0


On Tue, 2015-01-27 at 09:47 +0100, Bartłomiej Syryjczyk wrote:
> On 2015-01-27 at 09:20, Franco Broi wrote:
> > Well I'm stumped, just seems like the mount.glusterfs script isn't
> > working. I'm still running 3.5.1 and the getinode bit of my script looks
> > like this:
> >
> > ...
> > Linux)
> > getinode="stat -c %i $i"
> >
> > ...
> > inode=$( ${getinode} $mount_point 2>/dev/null);
> >
> > # this is required if the stat returns error
> > if [ -z "$inode" ]; then
> > inode="0";
> > fi
> >
> > if [ $inode -ne 1 ]; then
> > err=1;
> > fi
> My script is supposed to check the return code, not the inode; there is
> a comment right there saying so. Or maybe I don't understand the
> construct on line 298:
> ---
> [...]
> 49 Linux)
> 50 getinode="stat -c %i"
> [...]
> 298 inode=$( ${getinode} $mount_point 2>/dev/null);
> 299 # this is required if the stat returns error
> 300 if [ $? -ne 0 ]; then
> 301 warn "Mount failed. Please check the log file for more details."
> 302 umount $mount_point > /dev/null 2>&1;
> 303 exit 1;
> 304 fi
> ---
>
> When I insert a command with a 0 exit code between lines 298 and 300, e.g.
> echo $?;
> it works.
>
> With the script from the 3.6.1 package I had the same problem.
>
Bartłomiej Syryjczyk
2015-01-27 10:29:19 UTC
Permalink
On 2015-01-27 at 09:53, Franco Broi wrote:
> What do you get if you do this?
>
> bash-4.1# stat -c %i /mnt/gluster
> 1
> -bash-4.1# echo $?
> 0
---
[***@apache2 ~]# /usr/sbin/glusterfs --volfile-server=apache1
--volfile-id=/testvol /mnt/gluster

[***@apache2 ~]# stat -c %i /mnt/gluster
1

[***@apache2 ~]# echo $?
0

[***@apache2 ~]# umount /mnt/gluster/

[***@apache2 ~]# stat -c %i /mnt/gluster
101729533

[***@apache2 ~]# echo $?
0
---

--
Best regards,

*Bartłomiej Syryjczyk*
Bartłomiej Syryjczyk
2015-01-27 13:09:12 UTC
Permalink
On 2015-01-27 at 09:47, Bartłomiej Syryjczyk wrote:
> On 2015-01-27 at 09:20, Franco Broi wrote:
>> Well I'm stumped, just seems like the mount.glusterfs script isn't
>> working. I'm still running 3.5.1 and the getinode bit of my script looks
>> like this:
>>
>> ...
>> Linux)
>> getinode="stat -c %i $i"
>>
>> ...
>> inode=$( ${getinode} $mount_point 2>/dev/null);
>>
>> # this is required if the stat returns error
>> if [ -z "$inode" ]; then
>> inode="0";
>> fi
>>
>> if [ $inode -ne 1 ]; then
>> err=1;
>> fi
> My script is supposed to check the return code, not the inode; there is
> a comment right there saying so. Or maybe I don't understand the
> construct on line 298:
> ---
> [...]
> 49 Linux)
> 50 getinode="stat -c %i"
> [...]
> 298 inode=$( ${getinode} $mount_point 2>/dev/null);
> 299 # this is required if the stat returns error
> 300 if [ $? -ne 0 ]; then
> 301 warn "Mount failed. Please check the log file for more details."
> 302 umount $mount_point > /dev/null 2>&1;
> 303 exit 1;
> 304 fi
> ---
>
> When I insert a command with a 0 exit code between lines 298 and 300, e.g.
> echo $?;
> it works.
>
> With the script from the 3.6.1 package I had the same problem.
OK, I removed the "2>/dev/null" part and saw:
stat: cannot stat ‘/mnt/gluster’: Resource temporarily unavailable

So I decided to add a sleep just before line 298 (the one with stat).
And it works! Is that normal?
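[Editorial note: a fixed sleep only papers over the timing. A more robust variant would retry the stat until the freshly mounted volume answers, or give up after a deadline; a sketch of that idea, not the upstream fix, with wait_for_mount being a made-up name:

```shell
#!/bin/sh
# Poll the mount point instead of sleeping a fixed time: retry stat
# until it succeeds (right after the glusterfs client daemonizes it
# can still fail with "Resource temporarily unavailable"), giving up
# after the requested number of attempts.
wait_for_mount() {
    mnt=$1
    attempts=${2:-25}
    while [ "$attempts" -gt 0 ]; do
        stat -c %i "$mnt" >/dev/null 2>&1 && return 0
        attempts=$((attempts - 1))
        sleep 0.2
    done
    return 1
}
```

Dropped in just before line 298, this would bound the wait instead of hard-coding a delay.]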

--
Best regards,

*Bartłomiej Syryjczyk*
Franco Broi
2015-01-27 23:45:14 UTC
Permalink
On Tue, 2015-01-27 at 14:09 +0100, Bartłomiej Syryjczyk wrote:

> OK, I removed the "2>/dev/null" part and saw:
> stat: cannot stat ‘/mnt/gluster’: Resource temporarily unavailable
>
> So I decided to add a sleep just before line 298 (the one with stat).
> And it works! Is that normal?
>

Glad you found the problem. I wouldn't have thought glusterfs should
return before the filesystem is properly mounted; you should file a bug.

https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS

Cheers,
Bartłomiej Syryjczyk
2015-01-28 06:33:50 UTC
Permalink
On 2015-01-28 at 00:45, Franco Broi wrote:
> Glad you found the problem. I wouldn't have thought glusterfs should
> return before the filesystem is properly mounted; you should file a bug.
>
> https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
Ugh! This is a known problem...
https://bugzilla.redhat.com/show_bug.cgi?id=1151696

--
Best regards,

*Bartłomiej Syryjczyk*