Discussion:
[Gluster-users] one node gluster fs, volumes come up "N/A"
Jarosław Prokopowski
2018-10-16 13:31:45 UTC
Hello,
I'm new here. I tried to google the answer but was not successful.
To give it a try I configured a one-node GlusterFS server with 2 volumes on
CentOS 7.5.1804. After a server reboot the volumes start but are offline. Is
there a way to fix that so they are online after reboot?


# gluster volume status
Status of volume: os_images
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server.example.com:/opt/os_images/brick  N/A       N/A        N       N/A

Task Status of Volume os_images
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: ovirtvol
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server.example.com:/opt/ovirt/brick      N/A       N/A        N       N/A

Task Status of Volume ovirtvol
------------------------------------------------------------------------------
There are no active volume tasks


After running:
gluster volume start <volume_name> force
the volumes become Online.

# gluster volume start os_images force
volume start: os_images: success
# gluster volume start ovirtvol force
volume start: ovirtvol: success
# gluster volume status
Status of volume: os_images
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server.example.com:/opt/os_images/brick  49154     0          Y       4354

Task Status of Volume os_images
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: ovirtvol
Gluster process                                TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server.example.com:/opt/ovirt/brick      49155     0          Y       4846

Task Status of Volume ovirtvol
------------------------------------------------------------------------------
There are no active volume tasks


Here are details of the volumes:

# gluster volume status ovirtvol detail
Status of volume: ovirtvol
------------------------------------------------------------------------------
Brick : Brick server.example.com:/opt/ovirt/brick
TCP Port : 49155
RDMA Port : 0
Online : Y
Pid : 4846
File System : xfs
Device : /dev/mapper/centos-home
Mount Options : rw,seclabel,relatime,attr2,inode64,noquota
Inode Size : 512
Disk Space Free : 405.6GB
Total Disk Space : 421.7GB
Inode Count : 221216768
Free Inodes : 221215567

# gluster volume info ovirtvol

Volume Name: ovirtvol
Type: Distribute
Volume ID: 82b93589-0197-4ed5-a996-ffdda8d661d1
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: server.example.com:/opt/ovirt/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36



# gluster volume status os_images detail
Status of volume: os_images
------------------------------------------------------------------------------
Brick : Brick server.example.com:/opt/os_images/brick
TCP Port : 49154
RDMA Port : 0
Online : Y
Pid : 4354
File System : xfs
Device : /dev/mapper/centos-home
Mount Options : rw,seclabel,relatime,attr2,inode64,noquota
Inode Size : 512
Disk Space Free : 405.6GB
Total Disk Space : 421.7GB
Inode Count : 221216768
Free Inodes : 221215567

# gluster volume info os_images

Volume Name: os_images
Type: Distribute
Volume ID: 85b5c5e6-def6-4df3-a3ab-fcd17f105713
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: server.example.com:/opt/os_images/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36


Regards!
Jarek
Vijay Bellur
2018-10-17 04:30:26 UTC
Post by Jarosław Prokopowski
Hello,
I'm new here. I tried to google the answer but was not successful.
To give it a try I configured one node Gluster FS with 2 volumes on CentOS
7.5.1804. After server reboot the volumes start but are offline. Is there a
way to fix that so they are online after reboot?
Upon server reboot, glusterd has the responsibility of restarting bricks
for started volumes. Checking the brick and glusterd log files can help in
determining why the bricks did not start up.
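For reference, on a stock CentOS install the relevant logs usually live under
/var/log/glusterfs (a sketch; the brick log file name is derived from the brick
path, so adjust it to your own layout):

```shell
# glusterd's own log:
less /var/log/glusterfs/glusterd.log

# Per-brick logs; the file name is the brick path with '/' replaced by '-':
less /var/log/glusterfs/bricks/opt-os_images-brick.log
less /var/log/glusterfs/bricks/opt-ovirt-brick.log

# Error-level lines are tagged " E ", which narrows the search down quickly:
grep ' E ' /var/log/glusterfs/glusterd.log | tail -n 20
```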

HTH,
Vijay
Jarosław Prokopowski
2018-10-17 14:52:55 UTC
Yes,

It looks like network.service fails and glusterd.service is set up to
start after network.service. In my case networking is set up by
NetworkManager.service, which starts later.
There is no dependency on NetworkManager.service in glusterd.service:

# cat /usr/lib/systemd/system/glusterd.service
[Unit]
Description=GlusterFS, a clustered file-system server
Requires=rpcbind.service
After=network.target rpcbind.service
Before=network-online.target

I will investigate why network.service fails and maybe try adding
NetworkManager.service to the "After" list.
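One way to add such an ordering without editing the packaged unit file is a
systemd drop-in (a sketch, not something from this thread; the drop-in file
name is arbitrary, and using network-online.target here is an assumption
about this setup):

```
# /etc/systemd/system/glusterd.service.d/wait-for-network.conf
[Unit]
Wants=network-online.target
After=network-online.target
```

Note that for network-online.target to actually wait for NetworkManager, the
waiter service generally has to be enabled too:
systemctl enable NetworkManager-wait-online.service && systemctl daemon-reload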

Thanks
Jarek