Discussion:
[Gluster-users] TCP port usage
Melkor Lord
2015-03-19 08:10:54 UTC
Permalink
Hi,

To my understanding, Gluster starts volume TCP services at port 49152 and
then increments the port number with every new volume, right? I have a
3-replica test environment with only one volume, "TEST".

This is what I expected:

server0 : glusterd 24007 + glusterfsd 49152
server1 : glusterd 24007 + glusterfsd 49152
server2 : glusterd 24007 + glusterfsd 49152

and this is what I really get:

server0 : glusterd 24007 + glusterfsd 49156
server1 : glusterd 24007 + glusterfsd 49152
server2 : glusterd 24007 + glusterfsd 49155

I launch all my commands (gluster volume start/stop/status/info) from
server1.
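For what it's worth, the expectation above can be written down as a toy model. This is only a sketch of the assumption (each server counting up from 49152 on its own), not Gluster's actual allocator:

```python
# Toy model of the naive expectation: each server starts its own
# brick-port counter at 49152 and increments it per local brick.
# This is NOT what glusterd actually does, as the thread shows.
BASE_PORT = 49152

def expected_ports(bricks_per_server):
    """Return {server: [port, ...]} under the naive per-server model."""
    return {
        server: [BASE_PORT + i for i in range(count)]
        for server, count in bricks_per_server.items()
    }

# One volume, one brick per server, 3-way replica:
print(expected_ports({"server0": 1, "server1": 1, "server2": 1}))
# Every server would expose 49152 -- which is not what was observed.
```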

I checked the configuration files under
/var/lib/glusterd/vols/TEST/bricks and the TCP port is "hardcoded" for
some servers but not for others.

server0 :
server0/listen-port=49156
server1/listen-port=0
server2/listen-port=0

server1 :
server0/listen-port=0
server1/listen-port=49155
server2/listen-port=0

server2 :
server0/listen-port=0
server1/listen-port=0
server2/listen-port=49152

What's the cause of this?
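As a side note, those per-brick values can be gathered with a few lines of script. A minimal sketch, assuming the brick files under that directory carry the "listen-port=" lines shown above (run it on each server):

```python
from pathlib import Path

def brick_listen_ports(bricks_dir):
    """Map brick-file name -> listen-port parsed from glusterd brick files."""
    ports = {}
    for f in Path(bricks_dir).iterdir():
        for line in f.read_text().splitlines():
            if line.startswith("listen-port="):
                ports[f.name] = int(line.split("=", 1)[1])
    return ports

# e.g. brick_listen_ports("/var/lib/glusterd/vols/TEST/bricks")
# Only the bricks local to this server show a non-zero port.
```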

The "brick" log on each server records the port it was assigned:

On "server0" :
I [graph.c:269:gf_add_cmdline_options] 0-TEST-server: adding option
'listen-port' for
volume 'TEST-server' with value '49156'

On "server1" :
I [graph.c:269:gf_add_cmdline_options] 0-TEST-server: adding option
'listen-port' for
volume 'TEST-server' with value '49155'

On "server2" :
I [graph.c:269:gf_add_cmdline_options] 0-TEST-server: adding option
'listen-port' for
volume 'TEST-server' with value '49152'

I stopped the volume, edited each brick config file on each server to set
the port to "0", and even changed "listen-port" to
"transport.socket.listen-port" (see below). Starting the volume again
reset everything to the way it was before my changes, including
"listen-port", so I guess these "brick" files are regenerated each time
the volume starts.

Did I misunderstand the port assignment, or is there something off
with my test setup?

By the way, they all complain about the same thing:

W [options.c:898:xl_opt_validate] 0-TEST-server: option 'listen-port' is
deprecated, preferred is 'transport.socket.listen-port', continuing with
correction
--
Unix _IS_ user friendly, it's just selective about who its friends are.
JF Le Fillâtre
2015-03-19 08:37:45 UTC
Permalink
Hello,

I see the same things on my nodes:

stor104: /var/lib/glusterd/vols/live/bricks/stor104:-zfs-brick0-brick:listen-port=49170
stor104: /var/lib/glusterd/vols/live/bricks/stor104:-zfs-brick1-brick:listen-port=49171
stor104: /var/lib/glusterd/vols/live/bricks/stor104:-zfs-brick2-brick:listen-port=49172
stor104: /var/lib/glusterd/vols/live/bricks/stor106:-zfs-brick0-brick:listen-port=0
stor104: /var/lib/glusterd/vols/live/bricks/stor106:-zfs-brick1-brick:listen-port=0
stor104: /var/lib/glusterd/vols/live/bricks/stor106:-zfs-brick2-brick:listen-port=0

stor106: /var/lib/glusterd/vols/live/bricks/stor104:-zfs-brick0-brick:listen-port=0
stor106: /var/lib/glusterd/vols/live/bricks/stor104:-zfs-brick1-brick:listen-port=0
stor106: /var/lib/glusterd/vols/live/bricks/stor104:-zfs-brick2-brick:listen-port=0
stor106: /var/lib/glusterd/vols/live/bricks/stor106:-zfs-brick0-brick:listen-port=49162
stor106: /var/lib/glusterd/vols/live/bricks/stor106:-zfs-brick1-brick:listen-port=49163
stor106: /var/lib/glusterd/vols/live/bricks/stor106:-zfs-brick2-brick:listen-port=49164

So yes, on a given server you only see the ports for the bricks of that server. From that I deduce that the glusterd daemon listening on port 24007 provides the local brick port numbers to all the other machines (servers and clients).

And it seems that brick port numbers are unique across the whole pool, rather than incrementing from the same starting number on each machine.
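That pool-wide behaviour can be pictured as one shared counter for the whole trusted pool, rather than one counter per server. A simplified sketch of the idea only, not glusterd's actual code:

```python
import itertools

class PoolPortAllocator:
    """One counter for the whole trusted pool: every brick, on any
    server, gets the next port, so numbers are unique pool-wide but
    non-contiguous on any single server."""
    def __init__(self, base=49152):
        self._next = itertools.count(base)

    def allocate(self, server, brick):
        # server/brick are only labels here; the counter is shared.
        return next(self._next)

alloc = PoolPortAllocator()
# Bricks started in pool order, interleaved across servers:
print(alloc.allocate("stor104", "brick0"))  # 49152
print(alloc.allocate("stor106", "brick0"))  # 49153
print(alloc.allocate("stor104", "brick1"))  # 49154
```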

Compare that with:
http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting

It states:
"One TCP port for each brick in a volume. So, for example, if you have 4 bricks in a volume, port [...] 49152 - 49155 from GlusterFS 3.4 & later."

It seems that the starting port isn't set in stone, and that the uniqueness of the port numbers takes precedence over a linear port number sequence on a given server.
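Under that documented scheme the expected range is simply the base port through base port plus brick count minus one. A one-liner restating the quoted arithmetic:

```python
def documented_range(num_bricks, base=49152):
    """Port range the troubleshooting page describes: one TCP port
    per brick, starting at 49152 (GlusterFS 3.4 and later)."""
    return (base, base + num_bricks - 1)

print(documented_range(4))  # (49152, 49155), matching the quoted example
```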

Thanks,
JF
Post by Melkor Lord
[...]
_______________________________________________
Gluster-users mailing list
http://www.gluster.org/mailman/listinfo/gluster-users
--
Jean-François Le Fillâtre
-------------------------------
HPC Systems Administrator
LCSB - University of Luxembourg
-------------------------------
PGP KeyID 0x134657C6
JF Le Fillâtre
2015-03-19 09:20:55 UTC
Permalink
Well, I don't have a linear sequence either. My original thought was that
some bricks had been added and removed while the port-number counter kept
advancing; that would explain the holes.

But I created my volume in a single command with all the bricks at once,
so that can't be the whole story.

Time to go check the sources.

[time passes...]

I don't have the logs from the time that volume was created. Can you
check in yours whether you see anything like this:

"base-port override: ..."

It's an info-level message, so you may not have it either.
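If it helps, here is a small helper to scan the glusterd logs for that message; the default log path is an assumption, so adjust it to your setup:

```python
import glob

def find_base_port_overrides(pattern="/var/log/glusterfs/*.log"):
    """Collect log lines that mention a base-port override."""
    hits = []
    for path in glob.glob(pattern):
        with open(path, errors="replace") as fh:
            hits.extend(line.rstrip("\n") for line in fh
                        if "base-port override" in line)
    return hits
```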

Thanks!
JF
On Thu, Mar 19, 2015 at 9:37 AM, JF Le Fillâtre wrote:
[...]
All right, so my obvious question: what happened, in my case, to ports
49153 and 49154?
I have 1 volume with 3 bricks; starting at 49152, I should have *52, *53
and *54 in a perfect world :-)
--
Unix _IS_ user friendly, it's just selective about who its friends are.
--
Jean-François Le Fillâtre
-------------------------------
HPC Systems Administrator
LCSB - University of Luxembourg
-------------------------------
PGP KeyID 0x134657C6