Discussion:
[Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0
Amar Tumballi
2018-07-19 06:56:35 UTC
Permalink
Hi all,

Over the last 12 years of Gluster we have developed many features, and we continue to support most of them today. Along the way, however, we have found better ways of doing things, and some of these features are no longer actively maintained.

We are now planning to clean up some of these ‘unsupported’ features and mark them as ‘sunset’ (i.e., to be removed from the codebase entirely in subsequent releases) in the next release, v5.0. The release notes will describe how to migrate smoothly to supported configurations. If you are using any of these features, do let us know, so that we can help you with the migration. We are also happy to guide new developers who want to work on components that are not actively maintained by the current set of developers.

*List of features hitting sunset:*

*‘cluster/stripe’ translator:*

This translator was developed very early in the evolution of GlusterFS, and it addressed one of the most common questions about distributed filesystems: “What happens if one of my files is bigger than the available brick? Say I have a 2 TB hard drive exported through glusterfs, and my file is 3 TB.” While it served that purpose, failure scenarios were very hard to handle, and the feature never gave our users a really good experience. Over time, Gluster solved the same problem with its ‘Shard’ feature, which handles it in a much better way on the existing, well-supported stack. Hence the proposal for deprecation.

If you are using this feature, do write to us, as it needs a proper migration from the existing volume to a fully supported volume type before you upgrade.

*‘storage/bd’ translator:*

This feature entered the codebase five years ago with this patch <http://review.gluster.org/4809> [1]. The plan was to use a block device directly as a brick, which would make disk-image storage much easier to handle in glusterfs. As the feature is not receiving contributions, and we are not seeing any user traction on it, we would like to propose it for deprecation.

If you are using the feature, plan to move to a supported gluster volume configuration, and have your setup ‘supported’ before upgrading to your new gluster version.

*‘RDMA’ transport support:*

Gluster started supporting RDMA while ib-verbs was still new, and the very high-end infrastructure of that time used InfiniBand. Engineers worked with Mellanox to bring the technology into GlusterFS for faster data migration and data copy. Current kernels achieve very good speed with the IPoIB module alone, and the experts in this area no longer have the bandwidth to maintain the feature, so we recommend migrating your volumes to a TCP (IP-based) network.

If you are successfully using the RDMA transport, do get in touch with us so we can prioritize a migration plan for your volume. The plan is to work on this after the release, so that by version 6.0 we will have cleaner transport code which only needs to support one type.

*‘Tiering’ feature*

Gluster’s tiering feature was intended to let you keep your ‘hot’ data in a different location than your cold data, so one can get better performance. While we saw some users for the feature, it needs much more attention to become completely bug-free, and at the moment we have no active maintainers for it; hence we suggest removing its ‘supported’ tag.

If you are willing to take it up and maintain it, do let us know; we are happy to assist you. If you are already using the tiering feature, make sure to run gluster volume tier detach on all the bricks before upgrading to the next release. We also recommend using features like dm-cache on your LVM setup to get the best performance from the bricks.

*‘Quota’*

This is a call-out for the ‘Quota’ feature, to let you all know that it will enter a ‘no new development’ state. While this feature is actively used by many people, the challenges in its accounting mechanisms have made it hard to achieve good performance, and the number of extended-attribute get/set operations it performs is far from ideal. Hence we recommend that users move towards setting quotas on the backend bricks directly (i.e., XFS project quotas), or use different volumes for different directories, etc.

As the feature won’t be deprecated immediately, it does not need a migration plan when you upgrade to a newer version; but if you are a new user, we would not recommend enabling quota. By the release dates, we will publish a guide to the best alternatives to gluster’s current quota feature. Note that if you want to contribute to the feature, we have an open issue on project-quota-based accounting <https://github.com/gluster/glusterfs/issues/184> [2]. We are happy to get contributions, and to help with a newer approach to Quota.

------------------------------

This is our initial set of features proposed to move out of ‘fully supported’ status. As we work on making the user and developer experience of the project much better with a well-maintained codebase, we may propose a few more features to move out of support, so keep watching this space.

[1] - http://review.gluster.org/4809
[2] - https://github.com/gluster/glusterfs/issues/184

Regards,
Vijay, Shyam, Amar
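[Editor's note: the stripe-to-shard migration suggested above might look roughly like the sketch below. This is not an official procedure; all volume, host, and brick names are placeholders, and the shard block size should be tuned for your workload.]

```shell
# Create a new, fully supported volume (replica 3 used here as an
# example) and enable sharding on it. Names are placeholders.
gluster volume create newvol replica 3 \
    host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1
gluster volume set newvol features.shard on
gluster volume set newvol features.shard-block-size 64MB
gluster volume start newvol

# Mount both the old striped volume and the new volume, then copy
# the data across (rsync preserves permissions and xattrs).
mount -t glusterfs host1:/oldstripevol /mnt/old
mount -t glusterfs host1:/newvol /mnt/new
rsync -aHAX /mnt/old/ /mnt/new/
```

Once the copy is verified, clients can be repointed at the new volume and the striped volume retired.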
Jim Kinney
2018-07-19 12:36:38 UTC
Permalink
Too bad RDMA will be abandoned. It's the perfect transport for inter-node processing and data sync.

I currently use RDMA on a computational cluster between nodes and gluster storage. The older IB cards will support 10G IP and 40G IB. I've had some success with connectivity but am still faltering with fuse performance. As soon as some retired gear is reconnected I'll have a test bed for HA NFS over RDMA to computational cluster and 10G IP to non-cluster systems.

But it looks like Gluster 6 is a ways away, so maybe I'll get more hardware or time to pitch in some code after grokking enough IB.

Thanks for the heads up and all the work to date.
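[Editor's note: for anyone in Jim's position, checking whether a volume uses RDMA and switching it to TCP might look like the sketch below. Volume names are placeholders; changing `config.transport` requires the volume to be stopped, and clients must remount afterwards.]

```shell
# Show the transport type of an existing volume
gluster volume info myvol | grep -i transport

# Switch a volume from rdma (or tcp,rdma) to plain tcp.
# Stop the volume first; restart it when done.
gluster volume stop myvol
gluster volume set myvol config.transport tcp
gluster volume start myvol
```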
--
Sent from my Android device with K-9 Mail. All tyopes are thumb related and reflect authenticity.
Amar Tumballi
2018-07-19 12:43:02 UTC
Permalink
Post by Jim Kinney
Too bad the RDMA will be abandoned. It's the perfect transport for
intranode processing and data sync.
I currently use RDMA on a computational cluster between nodes and gluster
storage. The older IB cards will support 10G IP and 40G IB. I've had some
success with connectivity but am still faltering with fuse performance. As
soon as some retired gear is reconnected I'll have a test bed for HA NFS
over RDMA to computational cluster and 10G IP to non-cluster systems.
But it looks like Gluster 6 is a ways away so maybe I'll get more hardware
or time to pitch in some code after groking enough IB.
We are happy to continue making releases with RDMA for some more time if
there are users. The "proposal" is to make sure we give enough of a
heads-up that the experts in that area don't have cycles to make any more
enhancements to the feature.
Post by Jim Kinney
Thanks for the heads up and all the work to date.
Glad to hear back from you! It makes us realize there are things we
haven't touched in some time, but which people are still using.

Thanks,
Amar
--
Amar Tumballi (amarts)
mabi
2018-07-19 13:16:41 UTC
Permalink
Hi Amar,

Just wanted to say that I think the quota feature in GlusterFS is really useful. In my case I use it on one volume where I host many cloud installations (mostly files) for different people, and each of these needs a different quota set on a specific directory. GlusterFS quota lets me manage that nicely, which would not be possible in the application directly. It would be a real overhead, for example, to have one volume per installation just to set the maximum size like that.

I hope that this feature can continue to exist.

Best regards,
M.
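[Editor's note: for context, the per-directory quota workflow mabi describes is roughly the following; the volume and directory names are examples only.]

```shell
# Enable quota on the volume, then set per-directory limits
gluster volume quota cloudvol enable
gluster volume quota cloudvol limit-usage /customer-a 100GB
gluster volume quota cloudvol limit-usage /customer-b 500GB

# Inspect current usage against the configured limits
gluster volume quota cloudvol list
```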

Amar Tumballi
2018-07-20 07:05:42 UTC
Permalink
Post by mabi
Hi Amar,
Just wanted to say that I think the quota feature in GlusterFS is really
useful. In my case I use it on one volume where I have many cloud
installations (mostly files) for different people and all these need to
have a different quota set on a specific directory. The GlusterFS quota
allows me nicely to manage that which would not be possible in the
application directly. It would really be an overhead for me to for example
to have one volume per installation just because of setting the max size
like that.
I hope that this feature can continue to exist.
Thanks for the feedback. We will consider this use-case.
--
Amar Tumballi (amarts)
Gudrun Mareike Amedick
2018-07-23 14:51:07 UTC
Permalink
Hi,

we're planning a dispersed volume with at least 50 project directories. Each of those has its own quota, ranging between 0.1 TB and 200 TB. Comparing XFS
project quotas across several servers and bricks to make sure their totals match the desired values doesn't really sound practical. It would probably be
possible to create and maintain 50 or more volumes, but that doesn't seem desirable either: the quotas aren't fixed, and resizing a volume is
not as trivial as changing a quota.

Quota was in the past, and still is, a very convenient way to solve this.

But what is the recommended way to handle such a setup once quota is deprecated?

Kind regards

Gudrun
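[Editor's note: the brick-side alternative recommended in the announcement, XFS project quotas, is applied per brick, which is precisely what makes it awkward for a dispersed volume: each brick only holds a fraction of the logical data. A per-brick setup might look like the sketch below; the device paths and project id are placeholders.]

```shell
# On each brick server: register an XFS project for the directory
# backing the brick, then set a hard limit on that project.
# Project id 42 and the paths are examples only.
echo "42:/bricks/b1/projectA" >> /etc/projects
echo "projectA:42" >> /etc/projid
xfs_quota -x -c 'project -s projectA' /bricks/b1
xfs_quota -x -c 'limit -p bhard=20t projectA' /bricks/b1

# On a dispersed volume the per-brick limit is only a share of the
# logical quota, so totals must be reconciled across all bricks.
xfs_quota -x -c 'report -p' /bricks/b1
```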
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
Amar Tumballi
2018-07-23 16:21:32 UTC
Permalink
Thanks for the feedback. Helps us to prioritize. Will get back on this.

-Amar
--
Amar Tumballi (amarts)
Davide Obbi
2018-08-22 14:16:22 UTC
Permalink
Hi Amar,

we are also going to start using GlusterFS with quotas for home folders.
Quota is one of our main requirements, and I'd like to add a +1 to keeping
the quota feature; as already said, maintaining quotas for each brick at the
XFS level does not seem practical.

thanks
--
Amar Tumballi (amarts)
--
Davide Obbi
System Administrator

Booking.com B.V.
Vijzelstraat 66-80 Amsterdam 1017HL Netherlands
Direct +31207031558
https://www.booking.com/
The world's #1 accommodation site
43 languages, 198+ offices worldwide, 120,000+ global destinations,
1,550,000+ room nights booked every day
No booking fees, best price always guaranteed
Subsidiary of Booking Holdings Inc. (NASDAQ: BKNG)