Discussion:
[Gluster-users] deletion of files in gluster directories
hsafe
2018-07-04 13:57:47 UTC
Hi all,

I have a rather simple question: there are dirs that contain a lot
of small files in a 2x replica set, accessed natively on the clients.
Because of the number of files in those dirs, listing the dir contents
from the clients fails.

If the dirs are moved or deleted natively, or from the servers' view of
the dirs (directly on the bricks), how does glusterfs converge, or
"heal" if you can call it that, so that the dirs end up emptied or moved?

I am running Glusterfs-server and Glusterfs-client version 3.10.12.
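A minimal way to watch that convergence from the CLI, if I am reading
the tooling right (gv1 is the volume shown below):

gluster volume heal gv1 info                     # entries still pending heal, per brick
gluster volume heal gv1 statistics heal-count    # just the per-brick counts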

To add more detail: we learned the hard way that our app is shipping
too many small files into dirs, accumulating daily, which are served
by an nginx.

Here is a little more info:

# gluster volume info

Volume Name: gv1
Type: Replicate
Volume ID: f1c955a1-7a92-4b1b-acb5-8b72b41aaace
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: IMG-01:/images/storage/brick1
Brick2: IMG-02:/images/storage/brick1
Options Reconfigured:
nfs.disable: true
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
server.statedump-path: /tmp
performance.readdir-ahead: on
# gluster volume status
Status of volume: gv1
Gluster process                             TCP Port  RDMA Port Online  Pid
------------------------------------------------------------------------------
Brick IMG-01:/images/storage/brick1         49152     0 Y       3577
Brick IMG-02:/images/storage/brick1         49152     0 Y       21699
Self-heal Daemon on localhost               N/A       N/A Y       24813
Self-heal Daemon on IMG-01                  N/A       N/A Y       3560

Task Status of Volume gv1
------------------------------------------------------------------------------
There are no active volume tasks
Vlad Kopylov
2018-07-05 02:06:49 UTC
If you delete those files directly from the bricks, self-heal will start
restoring them from the other bricks.
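A rough sketch of both routes, assuming a FUSE mount at /mnt/gv1 and a
directory old-dir (both hypothetical names; brick paths as in the volume
info above):

# safe but slow: remove through a client mount so both replicas stay in sync
mount -t glusterfs IMG-01:/gv1 /mnt/gv1
rm -rf /mnt/gv1/old-dir

# fast but manual: remove on EVERY brick, including each file's .glusterfs
# hard link, or self-heal will restore the files from the surviving copies;
# the link lives at .glusterfs/<gfid[0:2]>/<gfid[2:4]>/<full-gfid>
getfattr -n trusted.gfid -e hex /images/storage/brick1/old-dir/some-file
rm /images/storage/brick1/old-dir/some-file
rm /images/storage/brick1/.glusterfs/aa/bb/aabbccdd-...   # gfid from getfattr above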
I have a similar issue with email storage, which uses the maildir format
with millions of small files.

Doing the delete on the server takes days.

Sometimes it is worth recreating the volume: wiping .glusterfs on the
bricks, deleting the files on the bricks, creating the volume again, and
repopulating .glusterfs by querying the attrs.
https://lists.gluster.org/pipermail/gluster-users/2018-July/034310.html
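If you go that route, the .glusterfs tree can be repopulated by a lookup
crawl from a fresh FUSE mount; a rough sketch, assuming a mount at
/mnt/gv1 (hypothetical path):

mount -t glusterfs IMG-01:/gv1 /mnt/gv1
# stat every path once: each lookup makes gluster assign a gfid and
# recreate the .glusterfs entry for the data already on the bricks
find /mnt/gv1 -noleaf -print0 | xargs -0 stat > /dev/null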