Discussion:
GlusterFS 3.6.2: migrating data with the remove-brick command
Sander Zijlstra
2015-04-14 10:53:41 UTC
LS,

I’m planning to decommission a few servers from my cluster, so to confirm:

Since version 3.6 the remove brick command migrates the data away from the brick being removed, right?
When I have replicated bricks (replica 2), I also need to do "remove brick <volume> replica 2 brick1 brick2 …", right?

Last but not least, is there any way to tell how long a “remove brick” will take while it’s moving the data? I have dual 10 Gb Ethernet between the cluster members, and the brick storage is a RAID-6 set which can read 400-600 MB/s without any problems.

Met vriendelijke groet / kind regards,

Sander Zijlstra

| Linux Engineer | SURFsara | Science Park 140 | 1098XG Amsterdam | T +31 (0)6 43 99 12 47 | ***@surfsara.nl | www.surfsara.nl |

Regular day off on friday
Jiri Hoogeveen
2015-04-14 12:11:00 UTC
Hi Sander,
Post by Sander Zijlstra
Since version 3.6 the remove brick command migrates the data away from the brick being removed, right?
It should :)
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/pdf/Administration_Guide/Red_Hat_Storage-3-Administration_Guide-en-US.pdf (page 100) is, I think, a good start.
I think it is the most complete documentation.
Post by Sander Zijlstra
When I have replicated bricks (replica 2), I also need to do "remove brick <volume> replica 2 brick1 brick2 …", right?
Yes, you need to remove both replicas at the same time.
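For example, the flow would look something like this (the volume and brick names here are made up; adjust to your own layout — on a distributed-replicated volume you remove a whole replica pair while keeping the replica count):

```shell
# Hypothetical volume "myvol" with a replica pair on srv3/srv4 to decommission.
# 1) Start migrating data off both bricks of the pair:
gluster volume remove-brick myvol replica 2 \
  srv3:/data/brick1 srv4:/data/brick1 start

# 2) Poll until the status column shows "completed":
gluster volume remove-brick myvol replica 2 \
  srv3:/data/brick1 srv4:/data/brick1 status

# 3) Only then make the removal permanent:
gluster volume remove-brick myvol replica 2 \
  srv3:/data/brick1 srv4:/data/brick1 commit
```

Committing before the status shows "completed" can leave files behind that were not migrated yet, so always check status first.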
Post by Sander Zijlstra
Last but not least, is there any way to tell how long a “remove brick” will take when it’s moving the data? I have dual 10GB ethernet between the cluster members and the brick storage is a RAID-6 set which can read 400-600MB/sec without any problems.
It depends on the size of the disk, the number of files, and the type of files. Network speed is less of an issue than the I/O on the disks/bricks.
To migrate data from one disk to another (much like self-healing), GlusterFS will scan all files on the disk, which can cause high I/O on that disk.

Because you also had some performance issues when you added bricks, I would expect the same issue with remove-brick. So do this at night if possible.
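You can at least do a back-of-envelope lower bound from the amount of data on the bricks and a conservative effective rate. The numbers below are made up for illustration; the file crawl usually keeps the real rate well under the raw disk or network speed:

```shell
# Back-of-envelope estimate only: assume 4 TB of data on the brick pair and
# an effective migration rate of ~100 MB/s (crawl overhead, small files, and
# concurrent client I/O usually keep it far below the 400-600 MB/s raw rate).
DATA_GB=4000
RATE_MB_S=100

# Total seconds = (data in MB) / (rate in MB/s)
TOTAL_S=$(( DATA_GB * 1024 / RATE_MB_S ))

# Report in whole hours (integer division)
echo "$(( TOTAL_S / 3600 )) hours"
```

With these assumed numbers that comes out to roughly 11 hours, which is why an overnight window is a good idea.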


Grtz, Jiri
_______________________________________________
Gluster-users mailing list
http://www.gluster.org/mailman/listinfo/gluster-users
Sander Zijlstra
2015-04-14 12:14:00 UTC
Jiri,

thanks for the information; I just commented on a question about op-version.

I upgraded all systems to 3.6.2; does this mean they will all use the correct op-version and not revert to old-style behaviour?

Met vriendelijke groet / kind regards,

Sander Zijlstra

Sander Zijlstra
2015-04-14 13:15:30 UTC
Jiri,

thanks, I totally missed the op-version part, as it’s not mentioned in the upgrade instructions at the link you sent. Actually, I did read that link, but because I do not use quota I didn’t run that script either.

Can I update the op-version while the volume is online and currently doing a rebalance, or shall I stop the rebalance, set the new op-version, and then start the rebalance again?

Many thanks for all the input.

Met vriendelijke groet / kind regards,

Sander Zijlstra

Post by Jiri Hoogeveen
Hi Sander,
If I take a look at http://www.gluster.org/community/documentation/index.php/OperatingVersions
then operating-version=2 is for glusterfs version 3.4, so I guess you will still be using the old style.
I think this is useful: http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6
And don’t forget to upgrade the clients as well.
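A sketch of how that is usually checked and bumped — the file path and the exact op-version value (30600 for the 3.6 line) are assumptions taken from the upgrade docs, so verify them against your distro before running:

```shell
# Each server records its current cluster op-version in glusterd.info
# (path may differ per distribution):
grep operating-version /var/lib/glusterd/glusterd.info

# Once ALL servers and clients run 3.6.x, raise it cluster-wide.
# 30600 is assumed to be the op-version matching the 3.6 release line:
gluster volume set all cluster.op-version 30600
```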
Grtz, Jiri
Jiri Hoogeveen
2015-04-16 06:58:42 UTC
Hi Sander,

Sorry for not getting back to you.

I guess that when you don’t use quota, you do not need to run the scripts.

I do not have any experience with changing the op-version on a running GlusterFS cluster, but looking at some threads, it should be possible. I think, though, only when all clients are the same version as the server.

And good luck this weekend.

Grtz, Jiri
Sander Zijlstra
2015-04-16 12:48:34 UTC
Jiri,

I updated the op-version online yesterday without any problems, so I hope to migrate my old bricks to the new ones tomorrow night without hassle, using the remove-brick command once all new bricks are added.

My new bricks are smaller than the current ones but greater in number, so I couldn’t use replace-brick in any case.


Thanks for the support.

Met vriendelijke groet / kind regards,

Sander Zijlstra
