Discussion: [Gluster-users] posix_handle_hard [file exists]
Jose V. Carrión
2018-10-01 10:36:54 UTC
Hi,

I have a gluster 3.12.6-1 installation with 2 configured volumes.

Several times a day, some bricks report the lines below:

[2018-09-30 20:36:27.348015] W [MSGID: 113096]
[posix-handle.c:770:posix_handle_hard] 0-volumedisk0-posix: link
/mnt/glusterfs/vol0/brick1/6349/20180921/20180921.h5 ->
/mnt/glusterfs/vol0/brick1/.glusterfs/3b/1c/3b1c5fe1-b141-4687-8eaf-2c28f9505277 failed
[File exists]
[2018-09-30 20:36:27.383957] E [MSGID: 113020] [posix.c:3162:posix_create]
0-volumedisk0-posix: setting gfid on
/mnt/glusterfs/vol0/brick1/6349/20180921/20180921.h5 failed

I can access both /mnt/glusterfs/vol0/brick1/6349/20180921/20180921.h5 and
/mnt/glusterfs/vol0/brick1/.glusterfs/3b/1c/3b1c5fe1-b141-4687-8eaf-2c28f9505277;
both are hard links of the same file.
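For reference, the hard-link relationship can be confirmed on the brick by
comparing inode numbers; both paths should report the same inode and a link
count of at least 2:

    $ ls -li /mnt/glusterfs/vol0/brick1/6349/20180921/20180921.h5 \
             /mnt/glusterfs/vol0/brick1/.glusterfs/3b/1c/3b1c5fe1-b141-4687-8eaf-2c28f9505277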

What is the meaning of the error lines?

Thanks in advance.

Cheers.
Jorick Astrego
2018-10-31 11:22:02 UTC
Hi,

I have similar issues with oVirt 4.2 on a glusterfs-3.8.15 cluster.
This was a new volume; I first created a thin-provisioned disk, then
tried to create a preallocated disk, but it hangs after 4 MB. The only
issue I can find in the logs so far is the [File exists] errors related
to sharding.


The message "W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125365
->
/data/hdd2/brick1/.glusterfs/16/a1/16a18a01-4f77-4c37-923d-9f0bc59f5cc7 failed
[File exists]" repeated 2 times between [2018-10-31 10:46:33.810987]
and [2018-10-31 10:46:33.810988]
[2018-10-31 10:46:33.970949] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125366
->
/data/hdd2/brick1/.glusterfs/90/85/9085ea11-4089-4d10-8848-fa2d518fd86d failed
[File exists]
[2018-10-31 10:46:33.970950] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125366
->
/data/hdd2/brick1/.glusterfs/90/85/9085ea11-4089-4d10-8848-fa2d518fd86d failed
[File exists]
[2018-10-31 10:46:35.601064] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125369
->
/data/hdd2/brick1/.glusterfs/9b/eb/9bebaaac-f460-496f-b30d-aabe77bffbc8 failed
[File exists]
[2018-10-31 10:46:35.601065] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125369
->
/data/hdd2/brick1/.glusterfs/9b/eb/9bebaaac-f460-496f-b30d-aabe77bffbc8 failed
[File exists]
[2018-10-31 10:46:36.040564] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125370
->
/data/hdd2/brick1/.glusterfs/30/93/3093fdb6-e62c-48b8-90e7-d4d72036fb69 failed
[File exists]
[2018-10-31 10:46:36.040565] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125370
->
/data/hdd2/brick1/.glusterfs/30/93/3093fdb6-e62c-48b8-90e7-d4d72036fb69 failed
[File exists]
[2018-10-31 10:46:36.319247] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125372
->
/data/hdd2/brick1/.glusterfs/c3/c2/c3c272f5-50af-4e82-94bb-b76eaa7a9a39 failed
[File exists]
[2018-10-31 10:46:36.319250] W [MSGID: 113096]
[posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125372
->
/data/hdd2/brick1/.glusterfs/c3/c2/c3c272f5-50af-4e82-94bb-b76eaa7a9a39 failed
[File exists]
[2018-10-31 10:46:36.319309] E [MSGID: 113020]
[posix.c:1407:posix_mknod] 0-hdd2-posix: setting gfid on
/data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125372
failed

   
-rw-rw----. 2 root root 4194304 Oct 31 11:46 /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125366
-rw-rw----. 2 root root 4194304 Oct 31 11:46 /data/hdd2/brick1/.glusterfs/9b/eb/9bebaaac-f460-496f-b30d-aabe77bffbc8
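For what it's worth, the gfid that ties a shard to its handle under
.glusterfs can be read directly off the shard file on the brick; the handle
path is derived from the first two byte pairs of that gfid:

    $ getfattr -n trusted.gfid -e hex \
        /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125366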
With kind regards,

Jorick Astrego

Netbulae Virtualization Experts

----------------

Tel: 053 20 30 270 ***@netbulae.eu Staalsteden 4-3A KvK 08198180
Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01

----------------
Krutika Dhananjay
2018-10-31 12:10:37 UTC
These log messages represent a transient state; they are harmless and can
be ignored. This happens when a lookup and an mknod to create shards happen
in parallel.
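As a plain-filesystem illustration (hypothetical /tmp paths, not Gluster
internals): when two racing callers try to create the same hard link, the
loser simply gets EEXIST, which is what the warning reflects:

    $ touch /tmp/data
    $ ln /tmp/data /tmp/gfid-handle    # the first attempt creates the link
    $ ln /tmp/data /tmp/gfid-handle    # a racing second attempt finds it already there
    ln: failed to create hard link '/tmp/gfid-handle': File exists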

Regarding the preallocated disk creation issue, could you check whether
there are any errors/warnings in the FUSE mount logs? These are named
after the hyphenated mount point followed by ".log" and are found under
/var/log/glusterfs.
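For example, on an oVirt hypervisor the storage-domain mount logs usually
match the pattern below (an assumption based on the log name quoted later
in this thread); grepping for the W/E severity letters narrows the output
to warnings and errors:

    $ grep -E '\] (W|E) \[' /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log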

-Krutika
Jorick Astrego
2018-11-05 14:53:41 UTC
Hi Krutika,

Thanks for the info.

After a long time the preallocated disk was created properly. It was a
1 TB disk on an HDD pool, so some delay was expected.
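For a rough sense of scale: assuming around 100 MB/s of sustained
sequential write to the HDD bricks (an assumption, not a measured figure),
streaming 1 TB of zeroes through the mount already takes close to three
hours:

    $ echo '1024 * 1024 / 100 / 3600' | bc -l   # MB in 1 TiB / (MB/s) / (s per hour), roughly 2.9 hours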

But it took a bit longer than expected; the storage had no other virtual
disks on it. Is there something I can tweak or check for this?

Regards, Jorick
Jorick Astrego
2018-11-05 15:00:44 UTC
I see a lot of DHT warnings in
rhev-data-center-mnt-glusterSD-192.168.99.14:_hdd2.log:

[2018-10-21 01:24:01.413126] I
[glusterfsd-mgmt.c:1600:mgmt_getspec_cbk] 0-glusterfs: No change in
volfile, continuing
[2018-11-01 12:48:32.537621] I [MSGID: 109066]
[dht-rename.c:1569:dht_rename] 0-hdd2-dht: renaming
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/b48f6dcc-8fbc-4eb2-bb8b-7a3e03e72899/73655cd8-adfc-404a-8ef7-7bbaee9d43d0.meta.new
(hash=hdd2-replicate-0/cache=hdd2-replicate-0) =>
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/b48f6dcc-8fbc-4eb2-bb8b-7a3e03e72899/73655cd8-adfc-404a-8ef7-7bbaee9d43d0.meta
(hash=hdd2-replicate-0/cache=<nul>)
[2018-11-01 13:31:17.726431] I [MSGID: 109066]
[dht-rename.c:1569:dht_rename] 0-hdd2-dht: renaming
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/f43f4dbd-14ca-49fb-98da-cb32c26b05b7/16ac297a-14e1-43d3-a4d9-f2b8d183c1e1.meta.new
(hash=hdd2-replicate-0/cache=hdd2-replicate-0) =>
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/f43f4dbd-14ca-49fb-98da-cb32c26b05b7/16ac297a-14e1-43d3-a4d9-f2b8d183c1e1.meta
(hash=hdd2-replicate-0/cache=hdd2-replicate-0)
[2018-11-01 13:31:18.316010] I [MSGID: 109066]
[dht-rename.c:1569:dht_rename] 0-hdd2-dht: renaming
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/72bcb410-b20e-45f2-a269-a73e14c550cf/b0cbc7df-2761-4b74-8ca2-ee311fd57bd3.meta.new
(hash=hdd2-replicate-0/cache=hdd2-replicate-0) =>
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/72bcb410-b20e-45f2-a269-a73e14c550cf/b0cbc7df-2761-4b74-8ca2-ee311fd57bd3.meta
(hash=hdd2-replicate-0/cache=hdd2-replicate-0)
[2018-11-01 13:31:18.208882] I [MSGID: 109066]
[dht-rename.c:1569:dht_rename] 0-hdd2-dht: renaming
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/f43f4dbd-14ca-49fb-98da-cb32c26b05b7/16ac297a-14e1-43d3-a4d9-f2b8d183c1e1.meta.new
(hash=hdd2-replicate-0/cache=hdd2-replicate-0) =>
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/f43f4dbd-14ca-49fb-98da-cb32c26b05b7/16ac297a-14e1-43d3-a4d9-f2b8d183c1e1.meta
(hash=hdd2-replicate-0/cache=hdd2-replicate-0)
[2018-11-01 13:31:19.461991] I [MSGID: 109066]
[dht-rename.c:1569:dht_rename] 0-hdd2-dht: renaming
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/72bcb410-b20e-45f2-a269-a73e14c550cf/b0cbc7df-2761-4b74-8ca2-ee311fd57bd3.meta.new
(hash=hdd2-replicate-0/cache=hdd2-replicate-0) =>
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/72bcb410-b20e-45f2-a269-a73e14c550cf/b0cbc7df-2761-4b74-8ca2-ee311fd57bd3.meta
(hash=hdd2-replicate-0/cache=hdd2-replicate-0)
[2018-11-02 13:31:46.567693] I [MSGID: 109066]
[dht-rename.c:1569:dht_rename] 0-hdd2-dht: renaming
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/f43f4dbd-14ca-49fb-98da-cb32c26b05b7/16ac297a-14e1-43d3-a4d9-f2b8d183c1e1.meta.new
(hash=hdd2-replicate-0/cache=hdd2-replicate-0) =>
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/f43f4dbd-14ca-49fb-98da-cb32c26b05b7/16ac297a-14e1-43d3-a4d9-f2b8d183c1e1.meta
(hash=hdd2-replicate-0/cache=hdd2-replicate-0)
[2018-11-02 13:31:47.591958] I [MSGID: 109066]
[dht-rename.c:1569:dht_rename] 0-hdd2-dht: renaming
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/72bcb410-b20e-45f2-a269-a73e14c550cf/b0cbc7df-2761-4b74-8ca2-ee311fd57bd3.meta.new
(hash=hdd2-replicate-0/cache=hdd2-replicate-0) =>
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/72bcb410-b20e-45f2-a269-a73e14c550cf/b0cbc7df-2761-4b74-8ca2-ee311fd57bd3.meta
(hash=hdd2-replicate-0/cache=hdd2-replicate-0)
[2018-11-02 13:31:47.365589] I [MSGID: 109066]
[dht-rename.c:1569:dht_rename] 0-hdd2-dht: renaming
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/f43f4dbd-14ca-49fb-98da-cb32c26b05b7/16ac297a-14e1-43d3-a4d9-f2b8d183c1e1.meta.new
(hash=hdd2-replicate-0/cache=hdd2-replicate-0) =>
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/f43f4dbd-14ca-49fb-98da-cb32c26b05b7/16ac297a-14e1-43d3-a4d9-f2b8d183c1e1.meta
(hash=hdd2-replicate-0/cache=hdd2-replicate-0)
[2018-11-02 13:31:48.095968] I [MSGID: 109066]
[dht-rename.c:1569:dht_rename] 0-hdd2-dht: renaming
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/72bcb410-b20e-45f2-a269-a73e14c550cf/b0cbc7df-2761-4b74-8ca2-ee311fd57bd3.meta.new
(hash=hdd2-replicate-0/cache=hdd2-replicate-0) =>
/5a75dd72-2ab6-457d-b357-6d14b3bc2c3e/images/72bcb410-b20e-45f2-a269-a73e14c550cf/b0cbc7df-2761-4b74-8ca2-ee311fd57bd3.meta
(hash=hdd2-replicate-0/cache=hdd2-replicate-0)
Krutika Dhananjay
2018-11-06 05:33:32 UTC
The rename log messages are informational and can be ignored.
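The severity letter right after the timestamp tells them apart (I for
informational, W for warning, E for error), so the informational noise can
be filtered out when scanning the mount log, for example:

    $ grep -vE '\] I \[' /var/log/glusterfs/rhev-data-center-mnt-glusterSD-192.168.99.14:_hdd2.log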

-Krutika
Krutika Dhananjay
2018-11-06 05:33:00 UTC
I think this is because preallocation works by sending a lot of writes.
In newer versions of oVirt, this has been changed to use fallocate for
faster allocation.
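For comparison, a minimal sketch of the difference on the FUSE mount
(hypothetical image path, and assuming the volume supports the fallocate
FOP): the older behaviour roughly amounts to streaming the image as writes,
while fallocate reserves the space in a single call:

    # older behaviour, roughly: stream ~1 TB of zero writes through the mount
    $ dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/192.168.99.14:_hdd2/test.img bs=1M count=1048576
    # fallocate-based preallocation: reserve the space in one call
    $ fallocate -l 1T /rhev/data-center/mnt/glusterSD/192.168.99.14:_hdd2/test.img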

Adding Sahina and Gobinda to help with the oVirt version number that has
this fix.

-Krutika