Discussion: [Gluster-users] Questions on adding brick to gluster
Pat Haley
2018-07-29 19:15:53 UTC
Hi All,

We are adding a new brick (91TB) to our existing gluster volume (328
TB). The new brick is on a new physical server and we want to make sure
that we are doing this correctly (the existing volume had 2 bricks on a
single physical server).  Both are running glusterfs 3.7.11. The steps
we've followed are

* Set up glusterfs and created the brick on the new server mseas-data3
* Ran peer probe commands to make sure that mseas-data2 and
mseas-data3 are in a Trusted Storage Pool
* Added brick3 (91 TB, lives on mseas-data3) to data-volume on
mseas-data2 (328 TB)
* Ran the "gluster volume rebalance data-volume fix-layout start"
command (fix-layout so that data is NOT migrated during the process;
we did not want to do that yet). Still running as of this email. The
full command sequence is sketched just below.
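
For reference, a minimal sketch of the commands the steps above
correspond to (the brick path is the one shown in the volume info
further down; run on mseas-data2 or any existing pool member):

    # add the new server to the trusted storage pool
    gluster peer probe mseas-data3
    gluster peer status    # confirm mseas-data3 shows as connected

    # attach the new 91 TB brick to the existing distribute volume
    gluster volume add-brick data-volume mseas-data3:/export/sda/brick3

    # spread the directory layout over the new brick without moving data
    gluster volume rebalance data-volume fix-layout start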

The first question we have is at what point should the additional 91 TB
be visible to the client servers? Currently when we run "df -h" on any
client we still only see the original 328 TB. Does the additional space
only appear to the clients after the rebalance using fix-layout finishes?
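
A quick way to compare what the clients and the servers see (the
client mount point /gdata here is only a placeholder for the real one):

    # on a client: total size reported for the gluster mount
    df -h /gdata

    # on a server: per-brick size and free space as gluster sees it
    gluster volume status data-volume detail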

The follow-up question is how long should the rebalance using fix-layout
take?
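
Progress can be watched while it runs with the rebalance status
command; the log path below is an assumption (it normally follows the
volume name and may vary by version):

    # on mseas-data2: per-node progress of the fix-layout run
    gluster volume rebalance data-volume status

    # server-side rebalance log
    tail -f /var/log/glusterfs/data-volume-rebalance.log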

Some additional information

# gluster volume info

Volume Name: data-volume
Type: Distribute
Volume ID: c162161e-2a2d-4dac-b015-f31fd89ceb18
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: mseas-data2:/mnt/brick1
Brick2: mseas-data2:/mnt/brick2
Brick3: mseas-data3:/export/sda/brick3
Options Reconfigured:
diagnostics.client-log-level: ERROR
network.inode-lru-limit: 50000
performance.md-cache-timeout: 60
performance.open-behind: off
disperse.eager-lock: off
auth.allow: *
server.allow-insecure: on
nfs.exports-auth-enable: on
diagnostics.brick-sys-log-level: WARNING
performance.readdir-ahead: on
nfs.disable: on
nfs.export-volumes: off

Thanks
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley Email: ***@mit.edu
Center for Ocean Engineering Phone: (617) 253-6824
Dept. of Mechanical Engineering Fax: (617) 253-8125
MIT, Room 5-213 http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA 02139-4301
Nithya Balachandran
2018-07-30 16:43:19 UTC
Hi,
Post by Pat Haley
Hi All,
We are adding a new brick (91TB) to our existing gluster volume (328 TB).
The new brick is on a new physical server and we want to make sure that we
are doing this correctly (the existing volume had 2 bricks on a single
physical server). Both are running glusterfs 3.7.11. The steps we've
followed are
- Setup glusterfs and created brick on new server mseas-data3
- Peer probe commands in order to make sure that mseas-data2 and
mseas-data3 are in a Trusted Storage Pool
- Added brick3 (lives on mseas-data3, is 91TB) to data-volume on
mseas-data2 (328TB)
- Ran the "gluster volume rebalance data-volume fix-layout start"
command (fix-layout so that data is NOT migrated during the process;
we did not want to do that yet). Still running as of this email.
The first question we have is at what point should the additional 91 TB be
visible to the client servers? Currently when we run "df -h" on any client
we still only see the original 328 TB. Does the additional space only
appear to the clients after the rebalance using fix-layout finishes?
No, this should be available immediately. Do you see any error messages in
the client log files when running df? Does running the command from a fresh
mount succeed?
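
For example, something along these lines (mount point, log name and
volfile server are placeholders):

    # on a client: look for recent errors in the mount (client) log;
    # the log name is derived from the mount point
    tail -n 100 /var/log/glusterfs/gdata.log

    # try a fresh mount and check whether the new size shows up
    mkdir -p /mnt/gluster-test
    mount -t glusterfs mseas-data2:/data-volume /mnt/gluster-test
    df -h /mnt/gluster-test
    umount /mnt/gluster-test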
Post by Pat Haley
The follow-up question is how long should the rebalance using fix-layout
take?
This is dependent on the number of directories on the volume.
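
Fix-layout has to visit every directory, so counting directories from
a client mount gives a rough sense of the scale (the mount point is a
placeholder):

    # number of directories fix-layout will have to process
    find /gdata -type d | wc -l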

Regards,
Nithya
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users