Discussion:
[Gluster-users] Ceph or Gluster for implementing big NAS
Premysl Kouril
2018-11-12 11:51:21 UTC
Hi,

We are planning to build a NAS solution which will be used primarily via NFS
and CIFS, with workloads ranging from various archival applications to more
“real-time processing”. The NAS will not be used as block storage for
virtual machines, so the access really will always be file oriented.

We are considering primarily two designs and I’d like to kindly ask for any
thoughts, views, insights, experiences.

Both designs utilize “distributed storage software at some level”. Both
designs would be built from commodity servers and should scale as we grow.
Both designs involve virtualization for instantiating “access virtual
machines” which will serve the NFS and CIFS protocols - so in this sense
the access layer is decoupled from the data layer itself.

The first design is based on a distributed filesystem like Gluster or CephFS.
We would deploy this software on the commodity servers, mount the resultant
filesystem on the “access virtual machines”, and those VMs would serve the
mounted filesystem via NFS/CIFS.
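
For illustration, with Gluster the access VM would look roughly like this
(volume name, host name and paths are made up):

  # mount the Gluster volume with the native FUSE client
  mount -t glusterfs gs1:/nasvol /export/nasvol

  # /etc/exports - re-export over kernel NFS (FUSE mounts need an explicit fsid)
  /export/nasvol *(rw,no_subtree_check,fsid=100)

  # smb.conf - re-export the same tree over CIFS
  [nasvol]
      path = /export/nasvol
      read only = no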

The second design is based on distributed block storage using Ceph. We would
build distributed block storage on those commodity servers and then, via
virtualization (like OpenStack Cinder), allocate the block storage into the
access VM. Inside the access VM we would deploy ZFS, which would aggregate
the block storage into a single filesystem. This filesystem would then be
served via NFS/CIFS from the very same VM.
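
Again for illustration only (sizes, names and device paths are made up):

  # allocate Ceph-backed block storage via Cinder and attach it to the access VM
  openstack volume create --size 4096 nas-disk-1
  openstack server add volume access-vm nas-disk-1

  # inside the access VM: aggregate the attached block devices into one ZFS pool
  zpool create tank /dev/vdb /dev/vdc /dev/vdd
  zfs create -o sharenfs=on tank/archive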

Any advice and insights are highly appreciated.

Cheers,

Prema
Alex Crow
2018-11-12 12:22:47 UTC
Post by Premysl Kouril
[...]
For just NAS, I'd suggest looking at some of the other Distributed File
System projects such as MooseFS, LizardFS, BeeGFS (open source),
weka.io, Exablox (proprietary), etc. They are perhaps more suited to
general-purpose, unstructured NAS use with a mix of file sizes and
workloads. GlusterFS would work, but we found it only gave good enough
performance on large files (>10MB) and was too slow with directories
containing more than a thousand or so files.
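
A crude way to see the directory-metadata cost for yourself on any mount
point (purely illustrative):

  # create a few thousand small files, then time a directory listing
  mkdir manyfiles && cd manyfiles
  for i in $(seq 1 5000); do touch f$i; done
  time ls -l > /dev/null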

Cheers

Alex


Vlad Kopylov
2018-11-12 19:22:18 UTC
The good thing about Gluster is that you have files as files. Whatever
happens, good old file access is still there - if you need a backup, or to
rebuild volumes, every replica brick has your files.
Contrast that with object storage (Ceph's BlueStore and the like) with
separate metadata: if the metadata gets lost or mixed up, you will be
recovering your data with a magnifying glass...
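
For illustration (brick path is made up), the files sit on every replica
brick as plain files, so an ad-hoc backup is just a copy - read from bricks
only, never write to them directly:

  # the brick directory mirrors the volume's file tree
  ls /data/brick1/nasvol/
  # pull a backup straight off the brick, skipping Gluster's metadata dir
  rsync -a --exclude=.glusterfs /data/brick1/nasvol/ /backup/nasvol/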

If you go with the monster-VM approach, the hypervisor uses gfapi, which is
a little faster than Ceph in all simple tests. In really distributed
environments (multiple buildings or datacenters), Ceph's read performance
will kill the cluster.
Ceph's CPU and memory consumption will surprise you compared to Gluster as
well.
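
For example (host and volume names made up), qemu can talk to a Gluster
volume directly over libgfapi instead of going through a FUSE mount:

  # create the monster VM's disk image directly on the volume via libgfapi
  qemu-img create -f qcow2 gluster://gs1/nasvol/monster-vm.qcow2 2T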

For a local file NAS (everything sitting in one room), something like BeeGFS
or LizardFS would be the best option.

v
Post by Premysl Kouril
[...]
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users