Discussion: [Gluster-users] glusterfs and pacemaker
Uwe Weiss
2011-07-15 08:36:49 UTC
Hello List,

I have a running cluster (two nodes) running openais, pacemaker and glusterfs.

Currently glusterfs is started and mounted manually.

Now I would like to integrate glusterfs into this pacemaker environment, but I can't find a glusterfs resource agent for pacemaker.

Does anyone have an idea how to integrate glusterfs into pacemaker?

My idea is that pacemaker starts and monitors the glusterfs mountpoints and migrates some resources to the remaining node if one or more mountpoints fail.

Thx in advance

ewuewu
Marcel Pennewiß
2011-07-15 11:12:02 UTC
Post by Uwe Weiss
Hello List,
Hi Uwe,
Post by Uwe Weiss
I have a running cluster (two nodes) running openais, pacemaker and glusterfs.
Same here ...
Post by Uwe Weiss
Now I would like to integrate glusterfs into this pacemaker environment, but I can't find a glusterfs resource agent for pacemaker.
We use the init script (lsb:glusterfs) to integrate the glusterfs daemons.
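Roughly like this in crm shell (resource names are just examples, and the exact init script name depends on your distribution):

  # run the glusterfs init script under cluster control on every node
  primitive p_glusterd lsb:glusterfs \
    op monitor interval="30s" timeout="30s"
  clone cl_glusterd p_glusterd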
Post by Uwe Weiss
My idea is that pacemaker starts and monitors the glusterfs mountpoints and migrates some resources to the remaining node if one or more mountpoints fail.
For the mountpoints, please have a look at the OCF Filesystem agent.
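Something along these lines should do (untested from my side, the values are placeholders you have to replace):

  # standard Filesystem RA parameters: device, directory, fstype
  primitive p_fs_gluster ocf:heartbeat:Filesystem \
    params device="<server>:/<volume>" directory="/your/mountpoint" fstype="glusterfs" \
    op monitor interval="20s" timeout="40s"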

Marcel
Marcel Pennewiß
2011-07-17 15:36:31 UTC
Post by Marcel Pennewiß
Post by Uwe Weiss
My idea is that pacemaker starts and monitors the glusterfs mountpoints and migrates some resources to the remaining node if one or more mountpoints fail.
For the mountpoints, please have a look at the OCF Filesystem agent.
Uwe informed me (via PM) that this didn't work - we had not used it ourselves until now. After some investigation you'll see that ocf::Filesystem does not detect/work with glusterfs shares :(

A few changes are necessary to add basic support for glusterfs.
@Uwe: Please have a look at [1] and try to patch your "Filesystem" OCF script (which may be located in /usr/lib/ocf/resource.d/heartbeat).

[1] http://subversion.fem.tu-ilmenau.de/repository/fem-overlay/trunk/sys-cluster/resource-agents/files/filesystem-glusterfs-support.patch
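The rough idea (simplified sketch only, not the exact patch): the agent has to treat a glusterfs "device" like the other network filesystems (host:/volume instead of a block device), e.g. by listing glusterfs where the block-device and fsck checks are skipped:

  # inside the Filesystem RA (sketch): network filesystems have no
  # block device to validate, so glusterfs needs to appear here too
  case "$FSTYPE" in
        nfs|nfs4|smbfs|cifs|none|glusterfs)
                : ;;
  esac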

best regards
Marcel
Uwe Weiss
2011-07-18 10:10:36 UTC
Hello Marcel,

Thank you very much for the patch. Great job.

It worked on the first shot. Mounting and migration of the fs work. Up to now I could not test a hard reset of a cluster node, because a colleague is currently using the cluster.

I applied the following parameters:

Fstype: glusterfs
Mountdir: /virtfs
Glustervolume: 192.168.50.1:/gl_vol1
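
In crm shell that corresponds to something like this (the resource name and the monitor values are just what I chose):

  primitive p_fs_virtfs ocf:heartbeat:Filesystem \
    params device="192.168.50.1:/gl_vol1" directory="/virtfs" fstype="glusterfs" \
    op monitor interval="20s" timeout="40s"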

Maybe you can answer a question for my better understanding?

My second node is 192.168.50.2, but in the Filesystem RA I have referenced 192.168.50.1 (see above). During my first test node1 was up and running, but what happens if node1 is completely gone and the address is unreachable?

Thx
Uwe


Uwe Weiss
weiss edv-consulting
Lattenkamp 14
22299 Hamburg
Phone: +49 40 51323431
Fax: +49 40 51323437
eMail: ***@netz-objekte.de

Marcel Pennewiß
2011-07-18 11:14:41 UTC
Post by Uwe Weiss
My second node is 192.168.50.2, but in the Filesystem RA I have referenced 192.168.50.1 (see above). During my first test node1 was up and running, but what happens if node1 is completely gone and the address is unreachable?
We're using a replicated setup and both nodes share an IPv4/IPv6 address (via pacemaker), which is used for accessing/mounting the glusterfs share and the nfs share (from the backup server).
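
The shared address itself is just an ordinary cluster IP resource, roughly like this (address and name are examples, not our real ones):

  primitive p_ip_gluster ocf:heartbeat:IPaddr2 \
    params ip="192.168.50.10" cidr_netmask="24" \
    op monitor interval="10s"
  # for the IPv6 part an additional ocf:heartbeat:IPv6addr resource can be used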

Marcel
samuel
2011-07-18 11:26:00 UTC
I don't know from which version on, but if you use the native client for mounting the volumes, the IP only needs to be reachable at mount time. After that, the native client transparently handles node failures.
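
In other words, once a mount like

  # the server address is only used to fetch the volfile at mount time
  mount -t glusterfs 192.168.50.1:/gl_vol1 /virtfs

has succeeded, the client talks to all bricks directly, so the address used for mounting no longer matters.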

Best regards,
Samuel.
Post by Marcel Pennewiß
Post by Uwe Weiss
My second node is 192.168.50.2, but in the Filesystem RA I have referenced 192.168.50.1 (see above). During my first test node1 was up and running, but what happens if node1 is completely gone and the address is unreachable?
We're using a replicated setup and both nodes share an IPv4/IPv6 address (via pacemaker), which is used for accessing/mounting the glusterfs share and the nfs share (from the backup server).
Marcel
Marcel Pennewiß
2011-07-18 12:00:17 UTC
Post by samuel
I don't know from which version on, but if you use the native client for mounting the volumes, the IP only needs to be reachable at mount time. After that, the native client transparently handles node failures.
ACK, that's why we use this shared IP (e.g. for backup purposes via nfs). AFAIR glusterfs retrieves the volfile (via the shared IP) and then connects to the nodes directly.
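
If you don't want to depend on the shared IP at mount time, the native client also knows a fallback volfile server mount option (the exact option name differs between glusterfs versions), e.g. something like:

  mount -t glusterfs -o backupvolfile-server=192.168.50.2 192.168.50.1:/gl_vol1 /virtfs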

Marcel
Marcel Pennewiß
2011-08-02 09:56:45 UTC
Post by Uwe Weiss
Hello Marcel,
Hi,
Post by Uwe Weiss
Thank you very much for the patch. Great job.
The patch was accepted upstream, so it will be included in one of the next releases of the resource-agents.

Marcel
