Discussion:
[Gluster-users] Announcing Gluster for Container Storage (GCS)
Vijay Bellur
2018-07-25 13:38:00 UTC
Hi all,

We would like to let you know that some of us have started focusing on an
initiative called ‘Gluster for Container Storage’ (GCS for short). As of
now, one can already use Gluster as storage for containers by making use of
the various projects available in the GitHub repositories associated with
gluster <https://github.com/gluster> & Heketi
<https://github.com/heketi/heketi>. The goal of the GCS initiative is to
provide easier integration of these projects so that they can be consumed
together as designed. We are primarily focused on integration with
Kubernetes (k8s) through this initiative.

Key projects for GCS include:
Glusterd2 (GD2)

Repo: https://github.com/gluster/glusterd2

The challenge with the current management layer of Gluster (glusterd) is
that it was not designed for a service-oriented architecture. Heketi
overcame this limitation and made Gluster consumable in k8s by providing
all the hooks necessary for supporting Persistent Volume Claims.

Glusterd2 provides a service-oriented architecture for volume & cluster
management. GD2 also intends to natively provide many of the Heketi
functionalities needed by Kubernetes. Hence, we are working on merging
Heketi with GD2; you can follow this work in the issues associated with
the GD2 GitHub repository.
gluster-block

Repo: https://github.com/gluster/gluster-block

This project exposes files in a Gluster volume as block devices.
gluster-block enables support for ReadWriteOnce (RWO) PVCs and the
corresponding workloads in Kubernetes, using Gluster as the underlying
storage technology.

gluster-block is intended to be consumed by stateful RWO applications like
databases, and by k8s infrastructure services like logging and metrics. It
is preferred over file-based Persistent Volumes in k8s for
stateful/transactional workloads as it provides better performance &
consistency guarantees.
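To make the RWO usage concrete, a claim for block-backed storage might look
like the following sketch; the StorageClass name "glusterfs-block" is an
assumption for illustration, not an official name from these projects.

```yaml
# Hedged sketch: the StorageClass name "glusterfs-block" is hypothetical.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce            # a block device attaches to one node at a time
  storageClassName: glusterfs-block
  resources:
    requests:
      storage: 10Gi
```

A database pod would then reference db-data in its volumes section, and the
backing file in the Gluster volume would be exported to that single node as
a block device.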
anthill / operator

Repo: https://github.com/gluster/anthill

This project aims to add an operator for Gluster in Kubernetes. Since it
is relatively new, there are areas where you can contribute to make the
operator experience better (please refer to the list of issues). This
project intends to make the whole Gluster experience in k8s much smoother
through automatic management of operational tasks like installation,
rolling upgrades, etc.
gluster-csi-driver

Repo: http://github.com/gluster/gluster-csi-driver

This project will provide CSI (Container Storage Interface) compliant
drivers for GlusterFS & gluster-block in k8s.
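In k8s, a CSI driver is typically consumed through a StorageClass whose
provisioner field names the driver. A hedged sketch follows; the
provisioner string shown is an assumption, so check the repository for the
name the driver actually registers.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-csi
# The provisioner value must match the name the CSI driver registers with
# kubelet; "org.gluster.glusterfs" is an assumption for this sketch.
provisioner: org.gluster.glusterfs
reclaimPolicy: Delete
```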
gluster-kubernetes

Repo: https://github.com/gluster/gluster-kubernetes

This project is intended to provide all the required installation and
management steps for getting gluster up and running in k8s.
GlusterFS

Repo: https://github.com/gluster/glusterfs

GlusterFS is the main and core repository of Gluster. To support storage
in the container world, we don’t need all the features of Gluster. Hence,
we will focus on the stack that is absolutely required in k8s. This allows
us to plan and execute tests well, and also to provide users with a setup
that works without too many options to tweak.

Note that default glusterfs volumes will continue to work as they do now,
but the translator stack used in GCS will be much leaner and geared to
work optimally in k8s.
Monitoring
Repo: https://github.com/gluster/gluster-prometheus

As the k8s ecosystem provides its own native monitoring mechanisms, we
intend this project to be the home for the required monitoring plugins.
The scope of this project is currently a work in progress, and we welcome
your input to shape it.

More details on this can be found at:
https://lists.gluster.org/pipermail/gluster-users/2018-July/034435.html

Gluster-Containers

Repo: https://github.com/gluster/gluster-containers

This repository provides container specs / Dockerfiles that can be used
with a container runtime like CRI-O & Docker.

Note that this is not an exhaustive or final list of projects involved
with GCS. We will continue to update the project list depending on the new
requirements and priorities that we discover on this journey.

We welcome you to join this journey by looking up the repositories and
contributing to them. As always, we are happy to hear your thoughts about
this initiative, and please stay tuned as we provide periodic updates
about GCS here!

Regards,

Vijay

(on behalf of Gluster maintainers @ Red Hat)
Michael Adam
2018-08-23 19:58:03 UTC
Post by Vijay Bellur
Hi all,
Hi Vijay,

Thanks for announcing this to the public and making everyone
more aware of Gluster's focus on container storage!

I would like to add an additional perspective, giving some background
about the history and origins:

Integrating Gluster with Kubernetes to provide persistent storage for
containerized applications is not new. We have been working on this for
more than two years now, and it is used by many community users and many
customers (of Red Hat) in production.

The original software stack used Heketi
(https://github.com/heketi/heketi) as a high-level service interface for
Gluster, to facilitate easy self-service provisioning of volumes in
Kubernetes. Heketi implemented some ideas that were originally part of the
glusterd2 plans in a separate, much more narrowly scoped project, which
got us started with these efforts in the first place, and it also went
beyond those original ideas. These features are now being merged into
glusterd2, which will in the future replace Heketi in the container
storage stack.

We were also working on Kubernetes itself, writing the provisioners for
various forms of Gluster volumes both in Kubernetes proper
(https://github.com/kubernetes/kubernetes) and in the external-storage
repo (https://github.com/kubernetes-incubator/external-storage).
Those provisioners will eventually be replaced by the aforementioned CSI
drivers. The expertise from the original Kubernetes development is now
flowing into the CSI drivers.

The gluster-containers repository had already been created and used for
this original container-storage effort.

The mentioned https://github.com/gluster/gluster-kubernetes repository was
not only the place for storing the deployment artefacts and tools; it was
actually intended to be the upstream home of the gluster-container-storage
project.

In this view, I see the GCS project announced here as a GCS version 2. The
original first version, even though never officially announced this widely
in a formal introduction like this one, and never given a formal release
or version number (let me call it version one), was the software stack
described above, homed in the gluster-kubernetes repository. If you look
at this project (and Heketi), you will see that it has a nice level of
popularity.

I think we should make use of this traction instead of ignoring the
legacy, and turn gluster-kubernetes into the home of GCS (v2). In my view,
GCS (v2) will be about:

* replacing some of the components with newer ones:
  - glusterd2 instead of the Heketi and glusterd1 combo
  - CSI drivers (the new standard) instead of native
    kubernetes plugins
* adding the operator feature
  (even though we are currently also working on an operator
  for the current stack with Heketi and traditional Gluster,
  since this will become important in production before
  this v2 is ready).

These are my 2 cents on this topic.
I hope someone finds them useful.

I am very excited to (finally) see the broader gluster
community getting more aligned behind the idea of bringing
our great SDS system into the space of containers! :-)

Cheers - Michael
Joe Julian
2018-08-23 20:54:29 UTC
Personally, I'd like to see the glusterd service replaced by a k8s-native controller (named "kluster").

I'm hoping to use this vacation I'm currently on to write up a design doc.
Michael Adam
2018-08-24 15:24:32 UTC
Post by Joe Julian
Personally, I'd like to see the glusterd service replaced by a k8s native controller (named "kluster").
If you are exclusively interested in Gluster for Kubernetes storage, this
might seem like the right approach. But I think it is much too narrow.
Standalone, non-k8s deployments are still important and will be for some
time.

So what we have always tried to achieve (this is my personal, very firm
credo, and I think several of the other Gluster developers are on the same
page) is to keep any business logic of *how* to manage bricks, create
volumes, do a mount, grow and shrink volumes and clusters, etc., close to
the core Gluster project, so that these features are usable irrespective
of whether Gluster is used in Kubernetes or not.

The Kubernetes components just need to make use of these, and so they can
stay nicely small, too:

* The provisioners and CSI drivers mainly do API translation
  between k8s and Gluster (Heketi in the old style) and are
  rather trivial.

* The operator would implement the logic of "when" and "why"
  to invoke the Gluster operations, but should IMHO not
  bother about the "how".

What can not be implemented with that nice separation
of responsibilities?


Thinking about this a bit more, I actually feel more and more that it
would be wrong to put all of Gluster into k8s even if we were only
interested in k8s. And I'm really curious how you want to do that: I think
you would have to rewrite more parts of how Gluster actually works.
Currently glusterd manages (spawns) the other Gluster processes. Clients
first connect to glusterd to get the volfile for mounting, and then they
maintain a connection to glusterd throughout the whole lifetime of the
mount, etc.

I am really interested to hear your thoughts about the above!


Cheers - Michael
Joe Julian
2018-08-24 16:25:59 UTC
To be clear, I'm not saying we should throw away glusterd and do Gluster
only for Kubernetes and nothing else. That would be silly.

On k8s, a native controller would still need to use some of what glusterd2
does, as libraries; however, things like spawning processes would be
delegated to the scheduler. glusterfsd, glustershd, gsyncd, etc. would
just be pods in the cluster (probably with affinities set for storage
localization). This allows better resource and fault management, better
logging, and better monitoring.

Through Kubernetes custom resource definitions (CRDs), volumes would be
declarative, and the controller would be responsible for converging the
declaration and the actual state. I admit this goes against what some
developers in the Gluster community have strong feelings about, but the
industry has been moving away from human-managed resources and toward
declarative state engines for good reason: it scales, is less prone to
error, and allows for simpler interfaces.
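As a sketch of what such a declarative volume could look like, assuming an
invented CRD group and schema (none of this exists in the repositories
discussed above):

```yaml
# Hypothetical CRD instance; the group, kind, and fields are invented
# purely to illustrate the declarative model.
apiVersion: kluster.example.com/v1alpha1
kind: GlusterVolume
metadata:
  name: myvol
spec:
  replicaCount: 3        # desired replication level
  capacity: 10Gi         # desired volume size
status:
  phase: Ready           # written back by the controller as it converges
```

The controller's reconcile loop would compare spec against the observed
cluster state and create, expand, or heal bricks until the two match.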

Volume definitions (the volfiles, not the CRD) could be stored in
ConfigMaps or Secrets. The client (both glusterfsd and glusterfs) could be
made k8s-aware and retrieve these directly; as an easier first step, the
ConfigMap/Secret could be mounted into the pod and the client could load
its volfile from a file (the client would need to be altered to reload the
graph when the file changes).
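That "easier first step" could look roughly like this, with the volfile
held in a ConfigMap and mounted into the client pod; all names (ConfigMap,
image, paths) are hypothetical:

```yaml
# Hypothetical sketch: ConfigMap "myvol-volfile" holds the generated
# volfile, and the client pod mounts it and points glusterfs at the file.
apiVersion: v1
kind: Pod
metadata:
  name: gluster-client
spec:
  containers:
    - name: client
      image: gluster/glusterfs-client       # image name assumed
      command: ["glusterfs"]
      args: ["--volfile=/etc/gluster/myvol.vol", "/mnt/myvol"]
      volumeMounts:
        - name: volfile
          mountPath: /etc/gluster
  volumes:
    - name: volfile
      configMap:
        name: myvol-volfile
```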

As an aside, the "maintained" connection to glusterd only holds as long as
glusterd always lives at the same IP address. There's a long-standing bug
where the client will never try to find another glusterd if the one it
first connected to goes away.

There are still a lot of questions that I don't have answers to. I think
this could be done in a way that is complementary to glusterd and does not
create a bunch of duplicated work. Most importantly, I think this is
something that could get community buy-in and would fill a need in
Kubernetes that is not well supported at this time.
