Discussion: [Gluster-users] Can't stop or delete volume
Gerald Brandt
2012-01-05 14:10:33 UTC
Hi,

I can't stop or delete a replica volume:

# gluster volume info

Volume Name: sync1
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: thinkpad:/gluster/export
Brick2: quad:/raid/gluster/export

# gluster volume stop sync1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Volume sync1 does not exist

# gluster volume delete sync1
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
Volume sync1 has been started.Volume needs to be stopped before deletion.


Any ideas? I had to re-install the peer's OS, and now that peer has no knowledge of the other system (or the volume).

Gerald
G***@toyota-europe.com
2012-01-05 14:24:47 UTC
Hi Gerald,

The volume and peer information is stored in /etc/glusterd by default,
which I guess was wiped when you reinstalled.

This should help restore the correct settings:
http://europe.gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server


Alternatively I believe you can delete some files in /etc/glusterd
if you don't want to restore the volume.
Just not sure which ones :)
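
If it helps, the layout under /etc/glusterd on a 3.2 install is roughly
this (I'm going from memory here, so please verify on your box before
deleting anything):

/etc/glusterd/glusterd.info       <- this node's own UUID
/etc/glusterd/peers/<peer-uuid>   <- one file per peer this node knows about
/etc/glusterd/vols/<volname>/     <- one directory per volume (sync1 in your case)

so, with glusterd stopped, removing vols/sync1 and the stale entries under
peers/ is probably what you'd want if you just intend to forget the volume.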

Best regards,

Gabriel
Gerald Brandt
2012-01-05 16:27:27 UTC
Thanks for the pointer.

Step 1 fails to return a UUID. If I run 'gluster peer status' on the working server, I get:

Number of Peers: 1

Hostname: quad
Uuid: 03eaca98-ac4f-4b61-9b30-7e1b40b01d9b
State: Peer in Cluster (Connected)

I then go to step 2 with the shown UUID.
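
For reference, step 2 on quad amounted to roughly this, with glusterd not
yet running (I'm paraphrasing the wiki page, so the exact glusterd.info
format may be slightly off):

# echo UUID=03eaca98-ac4f-4b61-9b30-7e1b40b01d9b > /etc/glusterd/glusterd.info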

I start glusterd on the failed server and peer probe the working server:

***@quad:~# /etc/init.d/glusterd start
* Starting glusterd service glusterd [ OK ]
***@quad:~# gluster peer probe thinkpad
thinkpad is already part of another cluster
***@quad:~# gluster peer status
No peers present
***@quad:~#

Stuck again.

Gerald

Gerald Brandt
2012-01-05 17:05:24 UTC
Hi,

I just ended up rebuilding my bricks (by removing glusterfs completely, including config files, and reinstalling from scratch).
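
For the record, the rebuild was roughly this on both machines (the package
names below assume a Debian/Ubuntu-style install, and this throws away the
old volume definition entirely):

# /etc/init.d/glusterd stop
# apt-get purge glusterfs-server glusterfs-client
# rm -rf /etc/glusterd
# apt-get install glusterfs-server

and then, from thinkpad:

# gluster peer probe quad
# gluster volume create sync1 replica 2 transport tcp thinkpad:/gluster/export quad:/raid/gluster/export
# gluster volume start sync1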

Gerald

