On 30/10/2020 05:20, Ing. Luis Felipe Domínguez Vega wrote:
> Great, and thanks. I fixed all the unknown PGs with the command; now
> the incomplete, down, etc. are left.
Start with a query:
$ ceph pg <pgid> query
That will tell you why it's down and incomplete.
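
If you don't have the PG IDs at hand, something along these lines will
list them first (just a sketch; the exact output and columns vary a bit
between releases):

$ ceph health detail | grep -E 'incomplete|down|unknown|stale'
$ ceph pg dump_stuck inactive
$ ceph pg 2.1f query      # 2.1f is only an example, use one of the reported IDs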
The force-create-pg has probably corrupted and destroyed data in your
cluster.
PGs should recover themselves if all OSDs are back. If not, then
something is very wrong and you need to find the root cause.
PGs not becoming clean is only the result of an underlying problem.
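
Also verify that the OSDs reported down actually came back before
touching the PGs. Roughly, assuming systemd-managed (non-containerized)
OSDs, adapt the unit name to your deployment:

$ ceph osd stat
$ ceph osd tree down
$ systemctl restart ceph-osd@<id>   # on the host of the OSD that is down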
Wido
> On 2020-10-29 23:57, 胡 玮文 wrote:
>> Hi,
>>
>> I have not tried, but maybe this will help with the unknown PGs, if
>> you don't care about data loss.
>>
>>
>> ceph osd force-create-pg <pgid>
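>>
>> A rough, untested sketch to run that over every PG still reported as
>> unknown (it recreates them empty, so any data they held is gone; recent
>> releases also want --yes-i-really-mean-it):
>>
>> for pgid in $(ceph pg dump pgs_brief 2>/dev/null | awk '$2 == "unknown" {print $1}'); do
>>     ceph osd force-create-pg "$pgid" --yes-i-really-mean-it
>> done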
>>
>>
>> On 2020-10-30, at 10:46, Ing. Luis Felipe Domínguez Vega
>> <luis.dominguez(a)desoft.cu> wrote:
>>
>> Hi:
>>
>> I have this ceph status:
>> -----------------------------------------------------------------------------
>>
>> cluster:
>>   id:     039bf268-b5a6-11e9-bbb7-d06726ca4a78
>>   health: HEALTH_WARN
>>           noout flag(s) set
>>           1 osds down
>>           Reduced data availability: 191 pgs inactive, 2 pgs down, 35 pgs incomplete, 290 pgs stale
>>           5 pgs not deep-scrubbed in time
>>           7 pgs not scrubbed in time
>>           327 slow ops, oldest one blocked for 233398 sec, daemons [osd.12,osd.36,osd.5] have slow ops.
>>
>> services:
>>   mon: 1 daemons, quorum fond-beagle (age 23h)
>>   mgr: fond-beagle(active, since 7h)
>>   osd: 48 osds: 45 up (since 95s), 46 in (since 8h); 4 remapped pgs
>>        flags noout
>>
>> data:
>>   pools:   7 pools, 2305 pgs
>>   objects: 350.37k objects, 1.5 TiB
>>   usage:   3.0 TiB used, 38 TiB / 41 TiB avail
>>   pgs:     6.681% pgs unknown
>>            1.605% pgs not active
>>            1835 active+clean
>>            279  stale+active+clean
>>            154  unknown
>>            22   incomplete
>>            10   stale+incomplete
>>            2    down
>>            2    remapped+incomplete
>>            1    stale+remapped+incomplete
>>
>> --------------------------------------------------------------------------------------------
>>
>>
>> How can I fix all of the unknown, incomplete, remapped+incomplete, etc.
>> PGs? I don't care if I need to remove PGs.