Dear everyone,
during a very risky process of rebuilding my OSDs to relocate their
WAL/DBs onto NVMe SSD partitions, I created a situation where two of the
three OSDs backing a jerasure 2+1 pool were destroyed before the rebuild
of their respective PGs onto the newly created OSDs had completed.
I am now left with an incomplete PG that looks like this:
[WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive, 1 pg incomplete
    pg 1.90 is incomplete, acting [2,3,5] (reducing pool jerasure21 min_size from 2 may help; search ceph.com/docs for 'incomplete')
PG    OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES  OMAP_BYTES*  OMAP_KEYS*  LOG  STATE       SINCE  VERSION  REPORTED       UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP
1.90  0        0         0          0        0      0            0           0    incomplete  47m    0'0      223998:104494  [2,3,5]p2  [2,3,5]p2  2020-04-01T13:22:08.983764+0300  2020-03-29T18:36:21.037198+0300
This has corrupted some (or maybe all) of my iSCSI RBD targets, and I
would now like to determine which of them will have to be dropped and
recreated so that they no longer refer to the incomplete objects. Is there
a known recipe or procedure for doing that? I would also like to know
whether anything special needs to be done to the incomplete PG itself to
make it complete and usable again.
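In case a manual mapping helps, below is a rough, untested sketch of how the affected images might be identified: every RBD data object is named <block_name_prefix>.<object number>, and "ceph osd map <pool> <object>" reports which PG an object maps to without reading any data, so it should be safe even while the PG is incomplete. The pool name below is a placeholder; if the images keep their data in a separate erasure-coded data pool, use that pool for the lookups.

#!/usr/bin/env python3
# Hedged sketch: find which RBD images have data objects mapping into PG 1.90.
# Assumptions (not from the original mail): the pool is called "rbd-pool" and
# the rbd/ceph CLI tools are available on this host. Iterating every object of
# every image is slow for large images; this is only meant as a starting point.
import json, subprocess

POOL = "rbd-pool"     # assumption: replace with the pool backing the iSCSI images
TARGET_PG = "1.90"    # the incomplete PG from "ceph health detail"

def run(cmd):
    return subprocess.check_output(cmd, text=True)

for img in run(["rbd", "-p", POOL, "ls"]).split():
    info = json.loads(run(["rbd", "-p", POOL, "info", img, "--format", "json"]))
    prefix = info["block_name_prefix"]          # e.g. rbd_data.123456789abc
    objects = (info["size"] + info["object_size"] - 1) // info["object_size"]
    for n in range(objects):
        objname = "%s.%016x" % (prefix, n)
        # "ceph osd map" only consults the OSDMap/CRUSH, it does not touch the PG.
        out = run(["ceph", "osd", "map", POOL, objname])
        if "(%s)" % TARGET_PG in out:
            print("%s has object %s in PG %s" % (img, objname, TARGET_PG))
            break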
Thanks in advance for your help.
Hi
I have an RGW access-denied problem that I can't get anywhere with...
* Bucket mybucket owned by user "c"
* Bucket policy grants s3:listBucket on mybucket, and s3:putObject &
s3:deleteObject on mybucket/* to user "j", and s3:getObject to * (I
even granted s3:* on mybucket/* to "j" with no effect)
* User "j" can create objects in mybucket, and can delete individual
objects (using DELETE)
* User "j" get 403 when trying to do a multi-object-delete (POST
/mybucket/?delete with a list of 4 object keys)
The code is a Java servlet running in WildFly, loading its credentials from
the default ~/.aws/credentials file, and it enables path-style access. If I
change the credentials in that file to those of the bucket owner "c", it works...
What's different about permissioning for multi-object-delete?
The log file shows that access was granted, but further down there is a
suspicious "Permissions for user not found" (I don't know whether that is
expected or not).
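For what it's worth, the same request can be reproduced outside the servlet with a few lines of boto3, which at least separates SDK behaviour from RGW permissions; the endpoint, credentials and object keys below are placeholders rather than values from this setup:

#!/usr/bin/env python3
# Hedged sketch: reproduce the failing multi-object delete against RGW with boto3,
# using path-style addressing like the Java servlet does.
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",        # assumption: adjust to the RGW endpoint
    aws_access_key_id="J_ACCESS_KEY",                   # user "j" credentials (placeholders)
    aws_secret_access_key="J_SECRET_KEY",
    config=Config(s3={"addressing_style": "path"}),
)

# Individual DELETEs succeed for user "j"...
s3.delete_object(Bucket="mybucket", Key="obj1")

# ...while the multi-object delete (POST /mybucket/?delete) returns 403.
resp = s3.delete_objects(
    Bucket="mybucket",
    Delete={"Objects": [{"Key": k} for k in ["obj2", "obj3", "obj4", "obj5"]]},
)
print(resp.get("Deleted"), resp.get("Errors"))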
Thanks, Chris
-------
Extract from RGW log with debugging at level 20:
2020-07-11T17:55:54.038+0100 7f45adad7700 20 req 15 0.004000002s s3:multi_object_delete rgw::auth::s3::LocalEngine granted access
2020-07-11T17:55:54.038+0100 7f45adad7700 20 req 15 0.004000002s s3:multi_object_delete rgw::auth::s3::AWSAuthStrategy granted access
2020-07-11T17:55:54.038+0100 7f45adad7700 2 req 15 0.004000002s s3:multi_object_delete normalizing buckets and tenants
2020-07-11T17:55:54.038+0100 7f45adad7700 10 s->object=<NULL> s->bucket=mybucket
2020-07-11T17:55:54.038+0100 7f45adad7700 2 req 15 0.004000002s s3:multi_object_delete init permissions
2020-07-11T17:55:54.038+0100 7f45adad7700 20 get_system_obj_state: rctx=0x7f45adacc288 obj=default.rgw.meta:root:mybucket state=0x5628b912e9a0 s->prefetch_data=0
2020-07-11T17:55:54.038+0100 7f45adad7700 10 cache get: name=default.rgw.meta+root+mybucket : hit (requested=0x16, cached=0x17)
2020-07-11T17:55:54.038+0100 7f45adad7700 20 get_system_obj_state: s->obj_tag was set empty
2020-07-11T17:55:54.038+0100 7f45adad7700 10 cache get: name=default.rgw.meta+root+mybucket : hit (requested=0x11, cached=0x17)
2020-07-11T17:55:54.038+0100 7f45adad7700 15 decode_policy Read AccessControlPolicy<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>c</ID><DisplayName>C</DisplayName></Owner><AccessControlList><Grant><Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>c</ID><DisplayName>C</DisplayName></Grantee><Permission>FULL_CONTROL</Permission></Grant></AccessControlList></AccessControlPolicy>
2020-07-11T17:55:54.038+0100 7f45adad7700 20 get_system_obj_state: rctx=0x7f45adacc668 obj=default.rgw.meta:users.uid:j state=0x5628b912e9a0 s->prefetch_data=0
2020-07-11T17:55:54.038+0100 7f45adad7700 10 cache get: name=default.rgw.meta+users.uid+j : hit (requested=0x6, cached=0x17)
2020-07-11T17:55:54.038+0100 7f45adad7700 20 get_system_obj_state: s->obj_tag was set empty
2020-07-11T17:55:54.038+0100 7f45adad7700 20 Read xattr: user.rgw.idtag
2020-07-11T17:55:54.038+0100 7f45adad7700 10 cache get: name=default.rgw.meta+users.uid+j : hit (requested=0x3, cached=0x17)
2020-07-11T17:55:54.038+0100 7f45adad7700 2 req 15 0.004000002s s3:multi_object_delete recalculating target
2020-07-11T17:55:54.038+0100 7f45adad7700 2 req 15 0.004000002s s3:multi_object_delete reading permissions
2020-07-11T17:55:54.038+0100 7f45adad7700 2 req 15 0.004000002s s3:multi_object_delete init op
2020-07-11T17:55:54.038+0100 7f45adad7700 2 req 15 0.004000002s s3:multi_object_delete verifying op mask
2020-07-11T17:55:54.038+0100 7f45adad7700 20 req 15 0.004000002s s3:multi_object_delete required_mask= 4 user.op_mask=7
2020-07-11T17:55:54.038+0100 7f45adad7700 2 req 15 0.004000002s s3:multi_object_delete verifying op permissions
2020-07-11T17:55:54.038+0100 7f45adad7700 20 req 15 0.004000002s s3:multi_object_delete -- Getting permissions begin with perm_mask=50
2020-07-11T17:55:54.038+0100 7f45adad7700 5 req 15 0.004000002s s3:multi_object_delete Searching permissions for identity=rgw::auth::SysReqApplier -> rgw::auth::LocalApplier(acct_user=j, acct_name=J, subuser=, perm_mask=15, is_admin=0) mask=50
2020-07-11T17:55:54.038+0100 7f45adad7700 5 Searching permissions for uid=j
2020-07-11T17:55:54.038+0100 7f45adad7700 5 Permissions for user not found
2020-07-11T17:55:54.038+0100 7f45adad7700 5 Searching permissions for group=1 mask=50
2020-07-11T17:55:54.038+0100 7f45adad7700 5 Permissions for group not found
2020-07-11T17:55:54.038+0100 7f45adad7700 5 Searching permissions for group=2 mask=50
2020-07-11T17:55:54.038+0100 7f45adad7700 5 Permissions for group not found
2020-07-11T17:55:54.038+0100 7f45adad7700 5 req 15 0.004000002s s3:multi_object_delete -- Getting permissions done for identity=rgw::auth::SysReqApplier -> rgw::auth::LocalApplier(acct_user=j, acct_name=J, subuser=, perm_mask=15, is_admin=0), owner=c, perm=0
2020-07-11T17:55:54.038+0100 7f45adad7700 10 req 15 0.004000002s s3:multi_object_delete identity=rgw::auth::SysReqApplier -> rgw::auth::LocalApplier(acct_user=j, acct_name=J, subuser=, perm_mask=15, is_admin=0) requested perm (type)=2, policy perm=0, user_perm_mask=2, acl perm=0
2020-07-11T17:55:54.038+0100 7f45adad7700 1 op->ERRORHANDLER: err_no=-13 new_err_no=-13
2020-07-11T17:55:54.038+0100 7f45adad7700 2 req 15 0.004000002s s3:multi_object_delete op status=0
2020-07-11T17:55:54.038+0100 7f45adad7700 2 req 15 0.004000002s s3:multi_object_delete http status=403
2020-07-11T17:55:54.038+0100 7f45adad7700 1 ====== req done req=0x7f45adaced50 op status=0 http_status=403 latency=0.004000002s ======
2020-07-11T17:55:54.038+0100 7f45adad7700 20 process_request() returned -13
2020-07-11T17:55:54.038+0100 7f45adad7700 1 civetweb: 0x5628b9424000: 192.168.80.135 - - [11/Jul/2020:17:55:54 +0100] "POST /mybucket/?delete HTTP/1.1" 403 464 - aws-sdk-java/1.11.820 Linux/5.7.7-200.fc32.x86_64 OpenJDK_64-Bit_Server_VM/14.0.1+7 java/14.0.1 vendor/Red_Hat,_Inc.
Hi,
I executed the "ceph -n osd.0 --show-config" command, but it replied with
the following error message:
[errno 2] RADOS object not found (error connecting to the cluster)
Could someone point me in the right direction as to what the problem could be?
Thanks. Regards.
ceph version 15.2.4
I copied the client.admin key to the hosts, and here is my ceph.conf file:
# minimal ceph.conf for 4372945a-b43d-11ea-b1b7-49709def22d4
[global]
fsid = 4372945a-b43d-11ea-b1b7-49709def22d4
mon_host = 192.168.1.10,192.168.1.11,192.168.1.12
mon_initial_members = 192.168.1.10,192.168.1.11
public network = 192.168.1.0/24
cluster network = 10.10.10.0/24
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
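One thing worth checking: "-n osd.0" makes the ceph CLI authenticate as osd.0, so it needs that OSD's own keyring on the host where the OSD runs, not client.admin. As a quick sanity check that the copied admin key and this ceph.conf can reach the monitors at all, here is a small sketch using the python3-rados binding (the keyring path is an assumption; point it at wherever the client.admin key was actually copied):

#!/usr/bin/env python3
# Hedged sketch: verify that this host can connect to the cluster with the
# admin key before debugging "ceph -n osd.0 --show-config" further.
import rados

cluster = rados.Rados(
    conffile="/etc/ceph/ceph.conf",
    name="client.admin",
    conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"},  # assumed location
)
try:
    cluster.connect(timeout=5)
    print("connected, fsid =", cluster.get_fsid())
finally:
    cluster.shutdown()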
It does appear that long file names are a real problem with filestore. We have a cluster where 99% of the objects have names longer than the limit (220+ characters?), such that filestore truncates the file name (as seen below with "_<sha-sum>_0_long") and stores the full object name in the object's xattrs. During boot the OSD goes out to lunch for increasing amounts of time depending on how many objects on disk meet this criterion (with roughly 2.4 million such objects, the OSD takes over an hour to boot). I plan on testing the same scenario with BlueStore to see whether it is also susceptible to these boot/read issues.
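A rough way to gauge how many objects on a given OSD fall into this long-name category (a quick sketch; the default filestore data path for osd.1 is an assumption, adjust for other OSDs):

#!/usr/bin/env python3
# Hedged sketch: count on-disk object files carrying the truncated "_long"
# name suffix described above on one filestore OSD.
import os

osd_data = "/var/lib/ceph/osd/ceph-1/current"   # assumption: default filestore path for osd.1
count = 0
for root, dirs, files in os.walk(osd_data):
    count += sum(1 for f in files if f.endswith("_long"))
print("objects with truncated (long) file names:", count)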
Eric
-----Original Message-----
From: Eric Smith <Eric.Smith(a)vecima.com>
Sent: Friday, July 10, 2020 1:46 PM
To: ceph-users(a)ceph.io
Subject: [ceph-users] Re: Luminous 12.2.12 - filestore OSDs take an hour to boot
For what it's worth - all of our objects are generating LONG named object files like so...
\uABCD\ucontent.\srecording\swzdchd\u\utnda-trg-1008007-wzdchd-216203706303281120-230932949-1593482400-159348660000000001\swzdchd\u\utpc2-tp1-1008007-wzdchd-216203706303281120-230932949-1593482400-159348660000000001\u\uwzdchd3._0bfd7c716b839cb7b3ad_0_long
Does this matter? AFAICT it sees this as a long file name and has to look up the object name in the xattrs? Is that bad?
-----Original Message-----
From: Eric Smith <Eric.Smith(a)vecima.com>
Sent: Friday, July 10, 2020 6:59 AM
To: ceph-users(a)ceph.io
Subject: [ceph-users] Luminous 12.2.12 - filestore OSDs take an hour to boot
I have a cluster running Luminous 12.2.12 with filestore, and it takes my OSDs somewhere around an hour to start (they do start successfully, eventually). I have the following log entries that seem to show the OSD process descending into the PG directories on disk and building an object list of some sort:
2020-07-09 18:29:28.017207 7f3b680afd80 20 osd.1 137390 clearing temps in 8.14ads3_head pgid 8.14ads3
2020-07-09 18:29:28.017211 7f3b680afd80 20 filestore(/var/lib/ceph/osd/ceph-1) collection_list(5012): pool is 8 shard is 3 pgid 8.14ads3
2020-07-09 18:29:28.017213 7f3b680afd80 10 filestore(/var/lib/ceph/osd/ceph-1) collection_list(5020): first checking temp pool
2020-07-09 18:29:28.017215 7f3b680afd80 20 filestore(/var/lib/ceph/osd/ceph-1) collection_list(5012): pool is -10 shard is 3 pgid 8.14ads3
2020-07-09 18:29:28.017221 7f3b680afd80 20 _collection_list_partial start:GHMIN end:GHMAX-64 ls.size 0
2020-07-09 18:29:28.017263 7f3b680afd80 20 filestore(/var/lib/ceph/osd/ceph-1) objects: []
2020-07-09 18:29:28.017268 7f3b680afd80 10 filestore(/var/lib/ceph/osd/ceph-1) collection_list(5028): fall through to non-temp collection, start 3#-1:00000000::::0#
2020-07-09 18:29:28.017272 7f3b680afd80 20 _collection_list_partial start:3#-1:00000000::::0# end:GHMAX-64 ls.size 0
2020-07-09 18:29:28.038124 7f3b680afd80 20 list_by_hash_bitwise prefix D
2020-07-09 18:29:28.058679 7f3b680afd80 20 list_by_hash_bitwise prefix DA
2020-07-09 18:29:28.069432 7f3b680afd80 20 list_by_hash_bitwise prefix DA4
2020-07-09 18:29:29.789598 7f3b51a87700 20 filestore(/var/lib/ceph/osd/ceph-1) sync_entry(4010): woke after 5.000074
2020-07-09 18:29:29.789634 7f3b51a87700 10 journal commit_start max_applied_seq 53085082, open_ops 0
2020-07-09 18:29:29.789639 7f3b51a87700 10 journal commit_start blocked, all open_ops have completed
2020-07-09 18:29:29.789641 7f3b51a87700 10 journal commit_start nothing to do
2020-07-09 18:29:29.789663 7f3b51a87700 20 filestore(/var/lib/ceph/osd/ceph-1) sync_entry(3994): waiting for max_interval 5.000000
2020-07-09 18:29:34.789815 7f3b51a87700 20 filestore(/var/lib/ceph/osd/ceph-1) sync_entry(4010): woke after 5.000109
2020-07-09 18:29:34.789898 7f3b51a87700 10 journal commit_start max_applied_seq 53085082, open_ops 0
2020-07-09 18:29:34.789902 7f3b51a87700 10 journal commit_start blocked, all open_ops have completed
2020-07-09 18:29:34.789906 7f3b51a87700 10 journal commit_start nothing to do
2020-07-09 18:29:34.789939 7f3b51a87700 20 filestore(/var/lib/ceph/osd/ceph-1) sync_entry(3994): waiting for max_interval 5.000000
2020-07-09 18:29:38.651689 7f3b680afd80 20 list_by_hash_bitwise prefix DA41
2020-07-09 18:29:39.790069 7f3b51a87700 20 filestore(/var/lib/ceph/osd/ceph-1) sync_entry(4010): woke after 5.000128
2020-07-09 18:29:39.790090 7f3b51a87700 10 journal commit_start max_applied_seq 53085082, open_ops 0
2020-07-09 18:29:39.790092 7f3b51a87700 10 journal commit_start blocked, all open_ops have completed
2020-07-09 18:29:39.790093 7f3b51a87700 10 journal commit_start nothing to do
2020-07-09 18:29:39.790102 7f3b51a87700 20 filestore(/var/lib/ceph/osd/ceph-1) sync_entry(3994): waiting for max_interval 5.000000
2020-07-09 18:29:44.790200 7f3b51a87700 20 filestore(/var/lib/ceph/osd/ceph-1) sync_entry(4010): woke after 5.000095
2020-07-09 18:29:44.790256 7f3b51a87700 10 journal commit_start max_applied_seq 53085082, open_ops 0
2020-07-09 18:29:44.790265 7f3b51a87700 10 journal commit_start blocked, all open_ops have completed
2020-07-09 18:29:44.790268 7f3b51a87700 10 journal commit_start nothing to do
2020-07-09 18:29:44.790286 7f3b51a87700 20 filestore(/var/lib/ceph/osd/ceph-1) sync_entry(3994): waiting for max_interval 5.000000
2020-07-09 18:29:49.790353 7f3b51a87700 20 filestore(/var/lib/ceph/osd/ceph-1) sync_entry(4010): woke after 5.000066
2020-07-09 18:29:49.790374 7f3b51a87700 10 journal commit_start max_applied_seq 53085082, open_ops 0
2020-07-09 18:29:49.790376 7f3b51a87700 10 journal commit_start blocked, all open_ops have completed
2020-07-09 18:29:49.790378 7f3b51a87700 10 journal commit_start nothing to do
2020-07-09 18:29:49.790387 7f3b51a87700 20 filestore(/var/lib/ceph/osd/ceph-1) sync_entry(3994): waiting for max_interval 5.000000
2020-07-09 18:29:50.564479 7f3b680afd80 20 list_by_hash_bitwise prefix DA410000
2020-07-09 18:29:50.564501 7f3b680afd80 20 list_by_hash_bitwise prefix DA410000 ob 3#8:b5280000::::head#
2020-07-09 18:29:50.564508 7f3b680afd80 20 list_by_hash_bitwise prefix DA41002A
Any idea what's going on here? I can run a find over every file on the filesystem in under 12 minutes, so I'm not sure what's taking so long.
Hi,
Using the repository suggested for Ubuntu 18 (
https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/sta…
), podman 2.0.2~1 is installed. However, when attempting to use cephadm to
bootstrap a cluster, we see an error when it tries to start the mon container:
"Error: invalid config provided: AppArmorProfile and privileged are mutually exclusive options"
From the bit of reading we've done, this looks to be a Podman v2
compatibility issue, and it appears to break Ceph 15.2.4.
Has anybody else run into this or been able to work around it? We'll have to
downgrade podman, but unfortunately that repository does not keep previous
versions.
Thanks!
Apologies if this is a resend, as my external email address changed and I had to update my account.
Hello all, a strange question: I am working with several RBD export-diffs and need them in some other format usable by something else, such as a plain "raw" image. The obvious answer is to rbd import-diff the files and then rbd export to a file, but is there a way to get there directly, without requiring a supplementary Ceph cluster?
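One possible route without a second cluster, sketched here from the documented "rbd diff v1" stream format (doc/dev/rbd-diff.rst) rather than anything battle-tested: parse the export-diff records yourself and write the data extents into a sparse raw file. Applying a full export-diff (one taken without --from-snap) first gives the base image, and later diffs can then be applied on top of the same file. Verify the result against a real "rbd import-diff"/"rbd export" round trip before trusting it.

#!/usr/bin/env python3
# Hedged sketch: apply one or more "rbd export-diff" files to a sparse raw image,
# based on the rbd diff v1 stream format. Usage: apply_rbd_diff.py out.raw d1 [d2 ...]
import struct, sys

def read_le64(f):
    return struct.unpack("<Q", f.read(8))[0]

def apply_diff(diff_path, raw_path):
    with open(diff_path, "rb") as src, open(raw_path, "r+b") as dst:
        if src.read(len(b"rbd diff v1\n")) != b"rbd diff v1\n":
            raise ValueError("%s is not an rbd diff v1 stream" % diff_path)
        while True:
            tag = src.read(1)
            if tag in (b"e", b""):              # end record (or EOF)
                break
            elif tag in (b"f", b"t"):           # from/to snapshot name: skip it
                (namelen,) = struct.unpack("<I", src.read(4))
                src.read(namelen)
            elif tag == b"s":                   # image size
                dst.truncate(read_le64(src))
            elif tag == b"w":                   # updated data extent
                offset, length = read_le64(src), read_le64(src)
                dst.seek(offset)
                dst.write(src.read(length))
            elif tag == b"z":                   # zeroed extent (kept simple, not chunked)
                offset, length = read_le64(src), read_le64(src)
                dst.seek(offset)
                dst.write(b"\0" * length)
            else:
                raise ValueError("unknown record tag %r" % tag)

if __name__ == "__main__":
    raw, diffs = sys.argv[1], sys.argv[2:]
    open(raw, "ab").close()                     # make sure the output file exists
    for d in diffs:
        apply_diff(d, raw)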
Thanks,
--
Kenneth Van Alstyne
Systems Architect
M: 228.547.8045
15052 Conference Center Dr, Chantilly, VA 20151
perspecta