Discussion:
[Gluster-users] Deleted file sometimes remains in .glusterfs/unlink
David Spisla
2018-11-19 14:48:03 UTC
Permalink
Hello Gluster Community,

sometimes it happens that a file accessed via FUSE or SMB remains in
.glusterfs/unlink after it is deleted. The command 'df -hT' still reports
the volume capacity from before the file was deleted. Another observation
is that after waiting a whole night the file is removed completely and the
capacity is reported correctly. Is this behaviour "works as designed"?

The issue was mentioned here already:
https://lists.gluster.org/pipermail/gluster-devel/2016-July/049952.html

and there seems to be a fix. But unfortunately the issue still occurs, and
the only workaround is to restart the brick processes or to wait for some
hours.

Regards
David Spisla
Ravishankar N
2018-11-20 06:32:55 UTC
Permalink
Post by David Spisla
Hello Gluster Community,
sometimes it happens that a file accessed via FUSE or SMB remains
in .glusterfs/unlink after it is deleted. The command 'df -hT' still
reports the volume capacity from before the file was deleted. Another
observation is that after waiting a whole night the file is removed
completely and the capacity is reported correctly. Is this behaviour
"works as designed"?
Is this a replicate volume? Files end up in .glusterfs/unlink post
deletion only if there is still an fd open on the file. Perhaps there
was an ongoing data self-heal, or another application had not yet closed
the file descriptor?
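This is ordinary POSIX unlink semantics and can be demonstrated locally in a
few lines of Python (an illustrative sketch, nothing Gluster-specific; the
brick's .glusterfs/unlink directory is simply how such open-but-deleted
inodes stay reachable on disk):

```python
import os
import tempfile

# Create a file, keep an fd open on it, then delete its name. The inode
# and its disk space survive until the last file descriptor is closed --
# 'df' does not shrink until then.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 1048576)   # 1 MiB of data
os.unlink(path)                # the name is gone...
size = os.fstat(fd).st_size    # ...but the data is still fully there
print(size)                    # 1048576
os.close(fd)                   # only now is the space actually reclaimed
```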
Which version of gluster are you using and what is the volume info?
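You could also check whether a brick process still holds such an fd by
inspecting /proc (a diagnostic sketch for Linux; the process name and grep
pattern are assumptions, adjust them to your setup):

```shell
# For every brick process, list open fds pointing into .glusterfs/unlink.
# An empty result means no brick is holding a deleted-but-open file.
for pid in $(pgrep -f glusterfsd); do
  ls -l "/proc/$pid/fd" 2>/dev/null | grep '\.glusterfs/unlink' || true
done
```

If lsof is installed, something like `lsof +D <brickpath>/.glusterfs/unlink`
should show the same per process.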
-Ravi
Post by David Spisla
https://lists.gluster.org/pipermail/gluster-devel/2016-July/049952.html
and there seems to be a fix. But unfortunately the issue still occurs,
and the only workaround is to restart the brick processes or to wait
for some hours.
Regards
David Spisla
_______________________________________________
Gluster-users mailing list
https://lists.gluster.org/mailman/listinfo/gluster-users
David Spisla
2018-11-20 10:03:57 UTC
Permalink
Hello Ravi,



I am using Gluster v4.1.5 with a replica 4 volume. This is the info:



Volume Name: testv1
Type: Replicate
Volume ID: a5b2d650-4e93-4334-94bb-3105acb112d1
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: fs-davids-c1-n1:/gluster/brick1/glusterbrick
Brick2: fs-davids-c1-n2:/gluster/brick1/glusterbrick
Brick3: fs-davids-c1-n3:/gluster/brick1/glusterbrick
Brick4: fs-davids-c1-n4:/gluster/brick1/glusterbrick
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
user.smb: disable
features.read-only: off
features.worm: off
features.worm-file-level: on
features.retention-mode: enterprise
features.default-retention-period: 120
network.ping-timeout: 10
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.nl-cache: on
performance.nl-cache-timeout: 600
client.event-threads: 32
server.event-threads: 32
cluster.lookup-optimize: on
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
performance.cache-samba-metadata: on
performance.cache-ima-xattrs: on
performance.io-thread-count: 64
cluster.use-compound-fops: on
performance.cache-size: 512MB
performance.cache-refresh-timeout: 10
performance.read-ahead: off
performance.write-behind-window-size: 4MB
performance.write-behind: on
storage.build-pgfid: on
auth.ssl-allow: *
client.ssl: on
server.ssl: on
features.utime: on
storage.ctime: on
features.bitrot: on
features.scrub: Active
features.scrub-freq: daily
cluster.enable-shared-storage: enable

Regards

David

On Tue, 20 Nov 2018 at 07:33, Ravishankar N wrote:
Post by David Spisla
Hello Gluster Community,
sometimes it happens that a file accessed via FUSE or SMB remains in
.glusterfs/unlink after it is deleted. The command 'df -hT' still reports
the volume capacity from before the file was deleted. Another observation
is that after waiting a whole night the file is removed completely and the
capacity is reported correctly. Is this behaviour "works as designed"?
Is this a replicate volume? Files end up in .glusterfs/unlink post
deletion only if there is still an fd open on the file. Perhaps there was
an ongoing data self-heal, or another application had not yet closed the
file descriptor?
Which version of gluster are you using and what is the volume info?
-Ravi
https://lists.gluster.org/pipermail/gluster-devel/2016-July/049952.html
and there seems to be a fix. But unfortunately the issue still occurs, and
the only workaround is to restart the brick processes or to wait for some
hours.
Regards
David Spisla