Avati,
Do you know how long it will take to have that packaged into the
glusterfs-fuse-3.4.1-x.rpm on gluster.org/downloads for RHEL 6?
Thanks,
Khoi
Message: 8
Date: Thu, 12 Dec 2013 13:38:18 -0800
From: Anand Avati <***@gluster.org>
To: Maik Kulbe <***@linux-web-development.de>
Cc: "gluster-***@gluster.org" <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Structure needs cleaning on some files
Message-ID:
<CAFboF2zNOFFbuM9_ayrw6Wv+DdXVOr+D=9Az0cxNx+***@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"
Looks like your issue was fixed by patch http://review.gluster.org/4989/
in the master branch. Backporting this to release-3.4 now.
Thanks!
Avati
From: gluster-users-***@gluster.org
To: gluster-***@gluster.org
Date: 12/13/2013 05:58 AM
Subject: Gluster-users Digest, Vol 68, Issue 14
Sent by: gluster-users-***@gluster.org
Send Gluster-users mailing list submissions to
gluster-***@gluster.org
To subscribe or unsubscribe via the World Wide Web, visit
http://supercolony.gluster.org/mailman/listinfo/gluster-users
or, via email, send a message with subject or body 'help' to
gluster-users-***@gluster.org
You can reach the person managing the list at
gluster-users-***@gluster.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Gluster-users digest..."
Today's Topics:
1. Re: Structure needs cleaning on some files (Johan Huysmans)
2. Re: Structure needs cleaning on some files (Johan Huysmans)
3. Re: Gluster Community Weekly Meeting (Vijay Bellur)
4. Re: Gluster Community Weekly Meeting (James)
5. Re: Gluster Community Weekly Meeting (Vijay Bellur)
6. Re: Structure needs cleaning on some files (Maik Kulbe)
7. Re: Structure needs cleaning on some files (Anand Avati)
8. Re: Structure needs cleaning on some files (Anand Avati)
9. Gerrit doesn't use HTTPS (James)
10. gluster fails under heavy array job load (harry mangalam)
11. qemu remote insecure connections (Joe Topjian)
12. Documentation hackathon for 3.5 (Vijay Bellur)
13. Re: gluster fails under heavy array job load (Anand Avati)
14. Re: Gluster Community Weekly Meeting (Niels de Vos)
----------------------------------------------------------------------
Message: 1
Date: Thu, 12 Dec 2013 14:40:37 +0100
From: Johan Huysmans <***@inuits.be>
To: "gluster-***@gluster.org" <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Structure needs cleaning on some files
Message-ID: <***@inuits.be>
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
I created a bug for this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1041109
gr.
Johan
Post by Johan Huysmans
Hi All,
It seems I can easily reproduce the problem:
* on node 1, create a file (touch, cat, ...)
* on node 2, take the md5sum of the file directly (md5sum /path/to/file)
* on node 1, move the file to another name (mv file file1)
* on node 2, take the md5sum of the file directly (md5sum /path/to/file);
this still works although the file is not really there any more
* on node 1, change the file content
* on node 2, take the md5sum of the file directly (md5sum /path/to/file);
this still works and shows a changed md5sum
This is really strange behaviour.
Is this normal? Can this be altered with a setting?
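For anyone who wants to drive that sequence by hand, a minimal sketch of
the same steps as shell commands (assuming the volume is FUSE-mounted at
/mnt/gluster on both nodes; all paths are placeholders):

  # on node 1: create the file on the gluster mount
  touch /mnt/gluster/file
  # on node 2: hash it once so the client caches the lookup
  md5sum /mnt/gluster/file
  # on node 1: rename it away
  mv /mnt/gluster/file /mnt/gluster/file1
  # on node 2: expected ENOENT, but the cached entry still answers
  md5sum /mnt/gluster/file
  # on node 1: modify the renamed file
  echo more >> /mnt/gluster/file1
  # on node 2: still answers for the old path, now with a new checksum
  md5sum /mnt/gluster/file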
Thanks for any info,
gr.
Johan
Post by Johan Huysmans
I could reproduce this problem while my mount point is running
in debug mode.
logfile is attached.
gr.
Johan Huysmans
Post by Johan Huysmans
Hi All,
md5sum: /path/to/file.xml: Structure needs cleaning
[2013-12-10 08:07:32.256910] W
remote operation failed: No such file or directory
[2013-12-10 08:07:32.257436] W
remote operation failed: No such file or directory
[2013-12-10 08:07:32.259356] W [fuse-bridge.c:705:fuse_attr_cbk]
0-glusterfs-fuse: 8230: STAT() /path/to/file.xml => -1 (Structure
needs cleaning)
We are using gluster 3.4.1-3 on CentOS6.
Our servers are 64-bit, our clients 32-bit (we are already using
--enable-ino32 on the mountpoint)
Volume Name: testvolume
Type: Replicate
Volume ID: ca9c2f87-5d5b-4439-ac32-b7c138916df7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: SRV-1:/gluster/brick1
Brick2: SRV-2:/gluster/brick2
performance.force-readdirp: on
performance.stat-prefetch: off
network.ping-timeout: 5
We have 2 client nodes that both have a fuse.glusterfs mountpoint.
On 1 client node we have an application which writes files.
On the other client node we have an application which reads these
files.
Post by Johan Huysmans
On the node where the files are written we don't see any problem,
and can read that file without problems.
On the other node we have problems (error messages above) reading
that file.
The problem occurs when we perform an md5sum on the exact file; when
we perform an md5sum on all files in that directory there is no problem.
How can we solve this problem? It is quite annoying.
The problem occurs after some time (can be days); an umount and
mount of the mountpoint solves it for some days.
Once it occurs (and we don't remount), it occurs every time.
I hope someone can help me with this problem.
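(In the meantime, the remount workaround above can be scripted; a sketch,
with the server, volume, and mount point as placeholders:)

  # drop and re-create the FUSE mount to flush the stale cache
  umount /mnt/gluster
  mount -t glusterfs SRV-1:/testvolume /mnt/gluster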
Thanks,
Johan Huysmans
------------------------------
Message: 2
Date: Thu, 12 Dec 2013 14:51:35 +0100
From: Johan Huysmans <***@inuits.be>
To: "gluster-***@gluster.org" <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Structure needs cleaning on some files
Message-ID: <***@inuits.be>
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
I created a bug for this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1041109
gr.
Johan
Post by Johan Huysmans
Hi All,
It seems I can easily reproduce the problem:
* on node 1, create a file (touch, cat, ...)
* on node 2, take the md5sum of the file directly (md5sum /path/to/file)
* on node 1, move the file to another name (mv file file1)
* on node 2, take the md5sum of the file directly (md5sum /path/to/file);
this still works although the file is not really there any more
* on node 1, change the file content
* on node 2, take the md5sum of the file directly (md5sum /path/to/file);
this still works and shows a changed md5sum
This is really strange behaviour.
Is this normal? Can this be altered with a setting?
Thanks for any info,
gr.
Johan
Post by Johan Huysmans
I could reproduce this problem while my mount point is running
in debug mode.
logfile is attached.
gr.
Johan Huysmans
Post by Johan Huysmans
Hi All,
md5sum: /path/to/file.xml: Structure needs cleaning
[2013-12-10 08:07:32.256910] W
remote operation failed: No such file or directory
[2013-12-10 08:07:32.257436] W
remote operation failed: No such file or directory
[2013-12-10 08:07:32.259356] W [fuse-bridge.c:705:fuse_attr_cbk]
0-glusterfs-fuse: 8230: STAT() /path/to/file.xml => -1 (Structure
needs cleaning)
We are using gluster 3.4.1-3 on CentOS6.
Our servers are 64-bit, our clients 32-bit (we are already using
--enable-ino32 on the mountpoint)
Volume Name: testvolume
Type: Replicate
Volume ID: ca9c2f87-5d5b-4439-ac32-b7c138916df7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: SRV-1:/gluster/brick1
Brick2: SRV-2:/gluster/brick2
performance.force-readdirp: on
performance.stat-prefetch: off
network.ping-timeout: 5
We have 2 client nodes that both have a fuse.glusterfs mountpoint.
On 1 client node we have an application which writes files.
On the other client node we have an application which reads these
files.
Post by Johan Huysmans
On the node where the files are written we don't see any problem,
and can read that file without problems.
On the other node we have problems (error messages above) reading
that file.
The problem occurs when we perform an md5sum on the exact file; when
we perform an md5sum on all files in that directory there is no problem.
How can we solve this problem? It is quite annoying.
The problem occurs after some time (can be days); an umount and
mount of the mountpoint solves it for some days.
Once it occurs (and we don't remount), it occurs every time.
I hope someone can help me with this problem.
Thanks,
Johan Huysmans
------------------------------
Message: 3
Date: Fri, 13 Dec 2013 00:13:30 +0530
From: Vijay Bellur <***@redhat.com>
To: James <***@gmail.com>
Cc: "gluster-***@gluster.org" <gluster-***@gluster.org>, Gluster
Devel <gluster-***@nongnu.org>, Niels de Vos
<***@redhat.com>
Subject: Re: [Gluster-users] Gluster Community Weekly Meeting
Message-ID: <***@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
No problem. It would be really good to have everybody in the meeting,
but if you cannot make it, comments are definitely welcome :).
Post by James
1) About the pre-packaged VM comments. I've gotten Vagrant working on
Fedora. I'm using this to rapidly spin up and test GlusterFS.
https://ttboj.wordpress.com/2013/12/09/vagrant-on-fedora-with-libvirt/
In the coming week or so, I'll be publishing the Vagrant file for my
GlusterFS setup, but if you really want it now I can send you an early
version. This obviously integrates with Puppet-Gluster, but whether
you use that or not is optional. I think this is the best way to test
GlusterFS. If someone gives me hosting, I could publish "pre-built"
images very easily. Let me know what you think.
Niels - do you have any thoughts here?
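(Until James publishes his Vagrantfile, a rough sketch of the workflow
from the blog post above, on Fedora; the box name is a placeholder and
the vagrant-libvirt plugin is assumed:)

  # install the libvirt provider and spin up a scratch VM
  vagrant plugin install vagrant-libvirt
  vagrant init some-fedora-box   # placeholder box name
  vagrant up --provider=libvirt
  vagrant ssh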
Post by James
2) I never heard back about any action items from 2 weeks ago. I think
someone was going to connect me with a way to get access to some VMs
for testing stuff!
I see that there is an ongoing offline thread now. I think that should
result in you getting those VMs.
Post by James
3) Hagarth: RE: typos, I have at least one spell-check patch against
3.4.1. I sent it to the list before, but someone told me to enroll in the
Jenkins thing, which wasn't worth it for a small patch. Let me know if
you want it.
There are more typos now. I ran a cursory check with misspell-check [1]
and found quite a few. Having that cleaned up on master and release-3.5
would be great. Since the number is larger, I am sure the patch would be
non-trivial, and having it routed through gerrit would be great. If you
need a how-to on getting started with gerrit, it is available at [2].
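(For anyone reproducing that cursory check, a sketch of one way to run it
over the tree, assuming the misspellings script from [1] installs via pip
and accepts a file list on stdin with '-f -' as its README describes;
verify the exact invocation against the project docs:)

  pip install misspellings
  cd glusterfs
  # feed every tracked file to the checker
  git ls-files | misspellings -f -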
Post by James
4a) Someone mentioned documentation. Please feel free to merge in
https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md
(markdown format). I have gone to great lengths to format this so that
it displays properly in GitHub markdown and standard (pandoc)
markdown. This way it works on GitHub, and can also be rendered to a
PDF:
https://github.com/purpleidea/puppet-gluster/raw/master/puppet-gluster-documentation.pdf
You can use the file as a template!
Again having this in gerrit would be useful for merging the puppet
documentation.
Post by James
4b) I think the documentation should be kept in the same repo as
GlusterFS. This way, when you submit a feature branch, it can also
come with documentation. Lots of people work this way. It helps you
get minimal docs in there, and/or at least some example code or a few
sentences. Also, looking at the docs, you can see which commits came
with them.
I am with you on this one. After we are done with the planned
documentation hackathon, let us open a new thread on this to get more
opinions.
-Vijay
[1] https://github.com/lyda/misspell-check
[2]
http://www.gluster.org/community/documentation/index.php/Development_Work_Flow
------------------------------
Message: 4
Date: Thu, 12 Dec 2013 13:48:31 -0500
From: James <***@gmail.com>
To: Vijay Bellur <***@redhat.com>
Cc: "gluster-***@gluster.org" <gluster-***@gluster.org>, Gluster
Devel <gluster-***@nongnu.org>
Subject: Re: [Gluster-users] Gluster Community Weekly Meeting
Message-ID:
<CADCaTgqJrJ6uTyGiti+q0SpXxMjE+m-***@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Post by Vijay Bellur
Post by James
4a) Someone mentioned documentation. Please feel free to merge in
https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md
(markdown format). I have gone to great lengths to format this so that
it displays properly in GitHub markdown and standard (pandoc)
markdown. This way it works on GitHub, and can also be rendered to a
PDF:
https://github.com/purpleidea/puppet-gluster/raw/master/puppet-gluster-documentation.pdf
You can use the file as a template!
Again having this in gerrit would be useful for merging the puppet
documentation.
Okay, I'll try to look into Gerrit and maybe submit a fake patch for
testing.
When and where (in the tree) would be a good time to submit a doc
patch? It's probably best to wait until after your docs hackathon,
right?
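(A sketch of what that test submission might look like, assuming the
rfc.sh helper shipped in the glusterfs tree and the workflow described at
[2] above; the branch name, file path, and commit message are
illustrative, with the doc path following Vijay's suggestion in the next
message:)

  git checkout -b doc-puppet-guide
  mkdir -p doc/deploy-guide/markdown/en-US
  cp DOCUMENTATION.md doc/deploy-guide/markdown/en-US/puppet-gluster.md
  git add doc/deploy-guide/markdown/en-US/puppet-gluster.md
  git commit -s -m "doc: add puppet-gluster deploy guide"
  ./rfc.sh   # pushes the change to gerrit for review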
------------------------------
Message: 5
Date: Fri, 13 Dec 2013 00:30:09 +0530
From: Vijay Bellur <***@redhat.com>
To: James <***@gmail.com>
Cc: "gluster-***@gluster.org" <gluster-***@gluster.org>, Gluster
Devel <gluster-***@nongnu.org>
Subject: Re: [Gluster-users] Gluster Community Weekly Meeting
Message-ID: <***@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Post by James
Post by Vijay Bellur
Post by James
https://github.com/purpleidea/puppet-gluster/blob/master/DOCUMENTATION.md
(markdown format). I have gone to great lengths to format this so that
it displays properly in GitHub markdown and standard (pandoc)
markdown. This way it works on GitHub, and can also be rendered to a
PDF:
https://github.com/purpleidea/puppet-gluster/raw/master/puppet-gluster-documentation.pdf
You can use the file as a template!
Again having this in gerrit would be useful for merging the puppet
documentation.
Okay, I'll try to look into Gerrit and maybe submit a fake patch for
testing.
When and where (in the tree) would be a good time to submit a doc
patch? It's probably best to wait until after your docs hackathon,
right?
Just added a page in preparation for the documentation hackathon:
http://www.gluster.org/community/documentation/index.php/Submitting_Documentation_Patches
I think the puppet guide can be under a new hierarchy located at
doc/deploy-guide/markdown/en-US/. You can certainly submit the puppet
doc patch as part of the hackathon.
-Vijay
------------------------------
Message: 6
Date: Thu, 12 Dec 2013 21:46:12 +0100
From: "Maik Kulbe" <***@linux-web-development.de>
To: "Johan Huysmans" <***@inuits.be>,
"gluster-***@gluster.org" <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Structure needs cleaning on some files
Message-ID:
<***@linux-web-development.de>
Content-Type: text/plain; charset="utf-8"; Format="flowed"
How do you mount your client? FUSE? I had similar problems when playing
around with the timeout options for the FUSE mount. If they are too high,
they cache the metadata for too long. When you move the file, the inode
should stay the same, and on the second node the path should stay in cache
for a while, so it still knows the inode for that moved file's old path and
thus can act on the file without knowing its path.
The problems kick in when you delete a file and recreate it - the cache
tries to access the old inode, which was deleted, thus throwing errors. If
I recall correctly, the "structure needs cleaning" is one of two error
messages I got, depending on which of the timeout mount options was set to
a higher value.
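(To experiment with those timeouts, a sketch of a mount that effectively
disables the metadata caching in question; server, volume, and mount point
are placeholders, and attribute-timeout/entry-timeout are the standard
glusterfs FUSE mount options, in seconds:)

  # mount with FUSE attribute/entry caching turned off
  mount -t glusterfs -o attribute-timeout=0,entry-timeout=0 \
      SRV-1:/testvolume /mnt/gluster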
-----Original Mail-----
From: Johan Huysmans [***@inuits.be]
Sent: 12.12.13 - 14:51:35
To: gluster-***@gluster.org [gluster-***@gluster.org]
Subject: Re: [Gluster-users] Structure needs cleaning on some files
Post by Johan Huysmans
I created a bug for this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1041109
gr.
Johan
Hi All,
It seems I can easily reproduce the problem:
* on node 1, create a file (touch, cat, ...)
* on node 2, take the md5sum of the file directly (md5sum /path/to/file)
* on node 1, move the file to another name (mv file file1)
* on node 2, take the md5sum of the file directly (md5sum /path/to/file);
this still works although the file is not really there any more
* on node 1, change the file content
* on node 2, take the md5sum of the file directly (md5sum /path/to/file);
this still works and shows a changed md5sum
This is really strange behaviour.
Is this normal? Can this be altered with a setting?
Thanks for any info,
gr.
Johan
I could reproduce this problem while my mount point is running in
debug mode.
logfile is attached.
gr.
Johan Huysmans
Hi All,
md5sum: /path/to/file.xml: Structure needs cleaning
[2013-12-10 08:07:32.256910] W
remote operation failed: No such file or directory
[2013-12-10 08:07:32.257436] W
remote operation failed: No such file or directory
[2013-12-10 08:07:32.259356] W [fuse-bridge.c:705:fuse_attr_cbk]
0-glusterfs-fuse: 8230: STAT() /path/to/file.xml => -1 (Structure
needs cleaning)
We are using gluster 3.4.1-3 on CentOS6.
Our servers are 64-bit, our clients 32-bit (we are already using
--enable-ino32 on the mountpoint)
Volume Name: testvolume
Type: Replicate
Volume ID: ca9c2f87-5d5b-4439-ac32-b7c138916df7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: SRV-1:/gluster/brick1
Brick2: SRV-2:/gluster/brick2
performance.force-readdirp: on
performance.stat-prefetch: off
network.ping-timeout: 5
We have 2 client nodes that both have a fuse.glusterfs mountpoint.
On 1 client node we have an application which writes files.
On the other client node we have an application which reads these
files.
On the node where the files are written we don't see any problem,
and can read that file without problems.
On the other node we have problems (error messages above) reading
that file.
The problem occurs when we perform an md5sum on the exact file; when
we perform an md5sum on all files in that directory there is no problem.
How can we solve this problem? It is quite annoying.
The problem occurs after some time (can be days); an umount and
mount of the mountpoint solves it for some days.
Once it occurs (and we don't remount), it occurs every time.
I hope someone can help me with this problem.
Thanks,
Johan Huysmans
------------------------------
Message: 7
Date: Thu, 12 Dec 2013 13:26:56 -0800
From: Anand Avati <***@gluster.org>
To: Maik Kulbe <***@linux-web-development.de>
Cc: "gluster-***@gluster.org" <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Structure needs cleaning on some files
Message-ID:
<CAFboF2x1CraXbYSokGt1jhOhBCny+9LRPzASt-***@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"
I have the same question. Do you have an excessively high --entry-timeout
parameter on your FUSE mount? In any case, a "Structure needs cleaning"
error should not surface up to FUSE, and that is still a bug.
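(For reference, when the client is started directly rather than via
mount.glusterfs, that parameter appears on the glusterfs command line;
a sketch with placeholder server, volume, and mount point names:)

  # FUSE client with the dentry cache timeout pinned to zero
  glusterfs --volfile-server=SRV-1 --volfile-id=testvolume \
      --entry-timeout=0 /mnt/gluster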
On Thu, Dec 12, 2013 at 12:46 PM, Maik Kulbe wrote:
Post by Maik Kulbe
How do you mount your client? FUSE? I had similar problems when playing
around with the timeout options for the FUSE mount. If they are too high,
they cache the metadata for too long. When you move the file, the inode
should stay the same, and on the second node the path should stay in cache
for a while, so it still knows the inode for that moved file's old path and
thus can act on the file without knowing its path.
The problems kick in when you delete a file and recreate it - the cache
tries to access the old inode, which was deleted, thus throwing errors. If
I recall correctly, the "structure needs cleaning" is one of two error
messages I got, depending on which of the timeout mount options was set to
a higher value.
-----Original Mail-----
Sent: 12.12.13 - 14:51:35
Subject: Re: [Gluster-users] Structure needs cleaning on some files
Post by Johan Huysmans
I created a bug for this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1041109
gr.
Johan
Hi All,
It seems I can easily reproduce the problem:
* on node 1, create a file (touch, cat, ...)
* on node 2, take the md5sum of the file directly (md5sum /path/to/file)
* on node 1, move the file to another name (mv file file1)
* on node 2, take the md5sum of the file directly (md5sum /path/to/file);
this still works although the file is not really there any more
* on node 1, change the file content
* on node 2, take the md5sum of the file directly (md5sum /path/to/file);
this still works and shows a changed md5sum
This is really strange behaviour.
Is this normal? Can this be altered with a setting?
Thanks for any info,
gr.
Johan
I could reproduce this problem while my mount point is running in
debug mode.
logfile is attached.
gr.
Johan Huysmans
Hi All,
md5sum: /path/to/file.xml: Structure needs cleaning
[2013-12-10 08:07:32.256910] W
remote operation failed: No such file or directory
[2013-12-10 08:07:32.257436] W
remote operation failed: No such file or directory
[2013-12-10 08:07:32.259356] W [fuse-bridge.c:705:fuse_attr_cbk]
0-glusterfs-fuse: 8230: STAT() /path/to/file.xml => -1 (Structure
needs cleaning)
We are using gluster 3.4.1-3 on CentOS6.
Our servers are 64-bit, our clients 32-bit (we are already using
--enable-ino32 on the mountpoint)
Volume Name: testvolume
Type: Replicate
Volume ID: ca9c2f87-5d5b-4439-ac32-b7c138916df7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: SRV-1:/gluster/brick1
Brick2: SRV-2:/gluster/brick2
performance.force-readdirp: on
performance.stat-prefetch: off
network.ping-timeout: 5
We have 2 client nodes that both have a fuse.glusterfs mountpoint.
On 1 client node we have an application which writes files.
On the other client node we have an application which reads these
files.
On the node where the files are written we don't see any problem,
and can read that file without problems.
On the other node we have problems (error messages above) reading
that file.
The problem occurs when we perform an md5sum on the exact file; when
we perform an md5sum on all files in that directory there is no problem.
How can we solve this problem? It is quite annoying.
The problem occurs after some time (can be days); an umount and
mount of the mountpoint solves it for some days.
Once it occurs (and we don't remount), it occurs every time.
I hope someone can help me with this problem.
Thanks,
Johan Huysmans
------------------------------
Message: 8
Date: Thu, 12 Dec 2013 13:38:18 -0800
From: Anand Avati <***@gluster.org>
To: Maik Kulbe <***@linux-web-development.de>
Cc: "gluster-***@gluster.org" <gluster-***@gluster.org>
Subject: Re: [Gluster-users] Structure needs cleaning on some files
Message-ID:
<CAFboF2zNOFFbuM9_ayrw6Wv+DdXVOr+D=9Az0cxNx+***@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"
Looks like your issue was fixed by patch http://review.gluster.org/4989/
in the master branch. Backporting this to release-3.4 now.
Thanks!
Avati
Post by Anand Avati
I have the same question. Do you have an excessively high --entry-timeout
parameter on your FUSE mount? In any case, a "Structure needs cleaning"
error should not surface up to FUSE, and that is still a bug.
On Thu, Dec 12, 2013 at 12:46 PM, Maik Kulbe wrote:
Post by Maik Kulbe
How do you mount your client? FUSE? I had similar problems when playing
around with the timeout options for the FUSE mount. If they are too high,
they cache the metadata for too long. When you move the file, the inode
should stay the same, and on the second node the path should stay in cache
for a while, so it still knows the inode for that moved file's old path and
thus can act on the file without knowing its path.
The problems kick in when you delete a file and recreate it - the cache
tries to access the old inode, which was deleted, thus throwing errors. If
I recall correctly, the "structure needs cleaning" is one of two error
messages I got, depending on which of the timeout mount options was set to
a higher value.
-----Original Mail-----
Sent: 12.12.13 - 14:51:35
Subject: Re: [Gluster-users] Structure needs cleaning on some files
Post by Johan Huysmans
I created a bug for this issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1041109
gr.
Johan
Hi All,
It seems I can easily reproduce the problem:
* on node 1, create a file (touch, cat, ...)
* on node 2, take the md5sum of the file directly (md5sum /path/to/file)
* on node 1, move the file to another name (mv file file1)
* on node 2, take the md5sum of the file directly (md5sum /path/to/file);
this still works although the file is not really there any more
* on node 1, change the file content
* on node 2, take the md5sum of the file directly (md5sum /path/to/file);
this still works and shows a changed md5sum
This is really strange behaviour.
Is this normal? Can this be altered with a setting?
Thanks for any info,
gr.
Johan
I could reproduce this problem while my mount point is running in
debug mode.
logfile is attached.
gr.
Johan Huysmans
Hi All,
md5sum: /path/to/file.xml: Structure needs cleaning
[2013-12-10 08:07:32.256910] W
remote operation failed: No such file or directory
[2013-12-10 08:07:32.257436] W
remote operation failed: No such file or directory
[2013-12-10 08:07:32.259356] W [fuse-bridge.c:705:fuse_attr_cbk]
0-glusterfs-fuse: 8230: STAT() /path/to/file.xml => -1 (Structure
needs cleaning)
We are using gluster 3.4.1-3 on CentOS6.
Our servers are 64-bit, our clients 32-bit (we are already using
--enable-ino32 on the mountpoint)
Volume Name: testvolume
Type: Replicate
Volume ID: ca9c2f87-5d5b-4439-ac32-b7c138916df7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: SRV-1:/gluster/brick1
Brick2: SRV-2:/gluster/brick2
performance.force-readdirp: on
performance.stat-prefetch: off
network.ping-timeout: 5
We have 2 client nodes that both have a fuse.glusterfs mountpoint.
On 1 client node we have an application which writes files.
On the other client node we have an application which reads these
files.
On the node where the files are written we don't see any problem,
and can read that file without problems.
On the other node we have problems (error messages above) reading
that file.
The problem occurs when we perform an md5sum on the exact file; when
we perform an md5sum on all files in that directory there is no problem.
How can we solve this problem? It is quite annoying.
The problem occurs after some time (can be days); an umount and
mount of the mountpoint solves it for some days.
Once it occurs (and we don't remount), it occurs every time.
I hope someone can help me with this problem.
Thanks,
Johan Huysmans
------------------------------
Message: 9
Date: Thu, 12 Dec 2013 17:35:02 -0500
From: James <***@gmail.com>
To: "gluster-***@gluster.org" <gluster-***@gluster.org>, Gluster
Devel <gluster-***@nongnu.org>
Subject: [Gluster-users] Gerrit doesn't use HTTPS
Message-ID:
<CADCaTgrmcoJuNL4=***@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
I just noticed that the Gluster Gerrit [1] doesn't use HTTPS!
Can this be fixed ASAP?
Cheers,
James
[1] http://review.gluster.org/
------------------------------
Message: 10
Date: Thu, 12 Dec 2013 17:03:12 -0800
From: harry mangalam <***@uci.edu>
To: "gluster-***@gluster.org" <gluster-***@gluster.org>
Subject: [Gluster-users] gluster fails under heavy array job load
Message-ID: <***@stunted>
Content-Type: text/plain; charset="us-ascii"
Hi All,
(Gluster Volume Details at bottom)
I've posted some of this previously, but even after various upgrades,
attempted fixes, etc, it remains a problem.
Short version: Our gluster fs (~340TB) provides scratch space for a
~5000-core academic compute cluster.
Much of our load is streaming IO, doing a lot of genomics work, and that
is the load under which we saw this latest failure.
Under heavy batch load, especially array jobs, where there might be
several 64-core nodes doing I/O on the 4 servers / 8 bricks, we often get
job failures that have the following profile:
Client POV:
Here is a sampling of the client logs (/var/log/glusterfs/gl.log) for all
compute nodes that indicated interaction with the user's files:
<http://pastie.org/8548781>
Here are some client Info logs that seem fairly serious:
<http://pastie.org/8548785>
The errors that referenced this user were gathered from all the nodes that
were running his code (in compute*) and agglomerated with:
cut -f2,3 -d']' compute* | cut -f1 -dP | sort | uniq -c | sort -gr
and placed here to show the profile of errors that his run generated:
<http://pastie.org/8548796>
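(For readers decoding that one-liner, the same pipeline spelled out with
comments; a sketch, where compute* are the per-node log excerpts mentioned
above:)

  cut -f2,3 -d']' compute* |   # keep the fields after the timestamp bracket
    cut -f1 -dP |              # truncate each line at its first 'P'
    sort | uniq -c |           # count identical error signatures
    sort -gr                   # most frequent signatures first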
so 71 of them were:
W [client-rpc-fops.c:2624:client3_3_lookup_cbk] 0-gl-client-7: remote
operation failed: Transport endpoint is not connected.
etc
We've seen this before and previously discounted it because it seemed to
have been related to the problem of spurious NFS-related bugs, but now I'm
wondering whether it's a real problem.
Also the 'remote operation failed: Stale file handle.' warnings.
There were no Errors logged per se, though some of the W's looked fairly
nasty, like the 'dht_layout_dir_mismatch'