Discussion:
[Gluster-users] Kernel NFS on GlusterFS
Ben Mason
2018-03-07 19:50:12 UTC
Hello,

I'm designing a 2-node, HA NAS that must support NFS. I had planned on
using GlusterFS native NFS until I saw that it is being deprecated. Then, I
was going to use GlusterFS + NFS-Ganesha until I saw that the Ganesha HA
support ended after 3.10 and its replacement is still a WIP. So, I landed
on GlusterFS + kernel NFS + corosync & pacemaker, which seems to work quite
well. Are there any performance issues or other concerns with using
GlusterFS as a replication layer and kernel NFS on top of that?

Thanks!
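For reference, the corosync/pacemaker layer described above can be sketched roughly as follows. This is only a sketch: the hostnames, volume name "tank", and VIP are hypothetical, and it assumes each node mounts the Gluster volume locally via FUSE and re-exports it with kernel NFS.

```shell
# On each node, the Gluster volume is mounted locally and exported:
#   /etc/fstab:   localhost:/tank  /export/tank  glusterfs  defaults,_netdev  0 0
#   /etc/exports: /export/tank  *(rw,sync,no_root_squash)

# Floating IP that NFS clients mount from (address is hypothetical):
pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=192.168.1.100 cidr_netmask=24

# Kernel NFS server, kept on the same node as the VIP so only one
# node serves clients at a time:
pcs resource create nfs_server systemd:nfs-server
pcs constraint colocation add nfs_server with nfs_vip INFINITY
pcs constraint order nfs_vip then nfs_server
```

On failover, pacemaker moves the VIP and nfs-server together; Gluster itself keeps the two nodes' bricks in sync underneath.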
Jim Kinney
2018-03-07 22:47:12 UTC
Gluster does the sync part better than corosync. It's not an
active/passive failover system; it's more all-active. Gluster handles
the recovery once all nodes are back online.
That requires the client toolchain to understand that a write goes to
all storage devices, not just the active one.
3.10 is a long-term-support release. Upgrading to 3.12 or 4 is not a
significant issue once a replacement for NFS-Ganesha stabilizes.
Kernel NFS doesn't understand "write to two IP addresses"; that's what
NFS-Ganesha does. The gluster-fuse client works but is slower than most
people like. I use the fuse process in my setup at work and will be
changing to NFS-Ganesha as part of the upgrade to 3.10.
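As a concrete illustration of the FUSE path described above (hostnames and volume name are hypothetical): the client fetches the volume layout from one server but then writes to every replica brick itself, so no floating IP is needed for the data path.

```shell
# FUSE mount of a hypothetical replica volume "tank".
# backup-volfile-servers only covers fetching the volume file at mount
# time; after that the client talks to all bricks directly.
mount -t glusterfs gluster1:/tank /mnt/tank \
      -o backup-volfile-servers=gluster2
```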
Post by Ben Mason
[snip]
_______________________________________________
Gluster-users mailing list
http://lists.gluster.org/mailman/listinfo/gluster-users
--
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you
gain at one end you lose at the other. It's like feeding a dog on his
own tail. It won't fatten the dog.
- Speech 11/23/1900 Mark Twain

http://heretothereideas.blogspot.com/
Ondrej Valousek
2018-03-08 07:57:08 UTC
You say that accessing Gluster via NFS is actually faster than the native (FUSE) client?
Still, I would like to know why we can't use a kernel NFS server on the data bricks. I understand we can't use it on the MDS, as it can't support pNFS.

Ondrej

From: gluster-users-***@gluster.org [mailto:gluster-users-***@gluster.org] On Behalf Of Jim Kinney
Sent: Wednesday, March 07, 2018 11:47 PM
To: gluster-***@gluster.org
Subject: Re: [Gluster-users] Kernel NFS on GlusterFS

[snip]

Joe Julian
2018-03-08 18:06:51 UTC
[snip].
The gluster-fuse client works but is slower than most people like. I
use the fuse process in my setup at work. ...
That depends on the use case and configuration. With client-side caching
and cache invalidation enabled, a good number of the performance
complaints can be addressed in a similar (better) way to how NFS makes
things fast.
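For what it's worth, the client-side caching and cache invalidation mentioned above are typically enabled with the upcall/md-cache volume options, along these lines (volume name "tank" is hypothetical; timeouts are illustrative, not recommendations):

```shell
# Server notifies clients when cached metadata becomes stale (upcall):
gluster volume set tank features.cache-invalidation on
gluster volume set tank features.cache-invalidation-timeout 600

# Client-side metadata caching (md-cache) that consumes those upcalls:
gluster volume set tank performance.stat-prefetch on
gluster volume set tank performance.cache-invalidation on
gluster volume set tank performance.md-cache-timeout 600
gluster volume set tank network.inode-lru-limit 50000
```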
Post by Ben Mason
[snip]
Joe Julian
2018-03-08 18:03:26 UTC
There has been a deadlock problem in the past where the knfs module and
the fuse module each needed more memory to satisfy a file operation
(fop), and neither could acquire it due to competing locks. This caused
an infinite wait. I'm not sure whether anything was ever done in the
kernel to remedy that.
Post by Ben Mason
[snip]