It is 16.2.4 (Pacific).
For some reason, when starting the OSDs with systemctl on this "renewed" host,
they did not come up even after a while, but when starting them manually
through the console, they did.
Thanks anyway.
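For reference, a minimal sketch of what I compared between the systemd path
and the manual start on the reinstalled host (this assumes non-containerized
OSDs with ceph-osd@<id> units; osd.0 is only an example id):

systemctl status ceph-osd@0              # unit failed, inactive, or masked?
journalctl -u ceph-osd@0 --since today   # why the daemon exited, if it did
systemctl is-enabled ceph-osd@0          # will it start at boot?
systemctl enable --now ceph-osd@0        # enable and start through systemd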
On Thu, 27 May 2021, 16:31 Eugen Block, <eblock(a)nde.ag> wrote:
Yes, if your pool requires 5 chunks and you only have 5 hosts (with
failure domain host), your PGs become undersized when a host fails and
won't recover until the OSDs come back. Which ceph version is this?
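A quick sketch of how to confirm those numbers against the pool itself (pool
name, PG id and values are taken from the output quoted below; adjust if yours
differ):

ceph osd pool get default.rgw.buckets.data size       # 5 = k+m chunks
ceph osd pool get default.rgw.buckets.data min_size   # 4, so PGs stay active with one chunk missing
ceph pg 9.5 query                                     # shows the shard that currently has no OSD assigned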
Quoting Rok Jaklič <rjaklic(a)gmail.com>:
For this pool I have set EC 3+2 (so in total I have 5 nodes), one of
which was temporarily removed, but maybe this was the problem?
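If it helps, a small sketch of how the EC profile itself can be checked
(ec-32-profile is the profile name shown by 'ceph osd pool ls detail' further
down; substitute your own):

ceph osd erasure-code-profile ls
ceph osd erasure-code-profile get ec-32-profile   # expect k=3 m=2; note the crush-failure-domain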
On Thu, May 27, 2021 at 3:51 PM Rok Jaklič <rjaklic(a)gmail.com> wrote:
> Hi, thanks for quick reply
>
> root@ctplmon1:~# ceph pg dump pgs_brief | grep undersized
> dumped pgs_brief
> 9.5   active+undersized+degraded  [72,85,54,120,2147483647]    72  [72,85,54,120,2147483647]    72
> 9.6   active+undersized+degraded  [101,47,113,74,2147483647]  101  [101,47,113,74,2147483647]  101
> 9.2   active+undersized+degraded  [86,118,74,2147483647,49]    86  [86,118,74,2147483647,49]    86
> 9.d   active+undersized+degraded  [49,136,83,90,2147483647]    49  [49,136,83,90,2147483647]    49
> 9.f   active+undersized+degraded  [55,103,81,128,2147483647]   55  [55,103,81,128,2147483647]   55
> 9.18  active+undersized+degraded  [115,50,61,89,2147483647]   115  [115,50,61,89,2147483647]   115
> 9.1d  active+undersized+degraded  [61,90,31,2147483647,125]    61  [61,90,31,2147483647,125]    61
> 9.10  active+undersized+degraded  [46,2147483647,71,86,122]    46  [46,2147483647,71,86,122]    46
> 9.17  active+undersized+degraded  [60,95,114,2147483647,48]    60  [60,95,114,2147483647,48]    60
> 9.15  active+undersized+degraded  [121,76,30,101,2147483647]  121  [121,76,30,101,2147483647]  121
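> (2147483647 in the up/acting sets above is the CRUSH "none" placeholder, i.e. no
> OSD could be chosen for that EC shard while the host's OSDs are down.)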
> root@ctplmon1:~# ceph osd tree
> ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
> -1 764.11981 root default
> -3 152.82378 host ctplosd1
> 0 hdd 5.45798 osd.0 down 0 1.00000
> 1 hdd 5.45799 osd.1 down 0 1.00000
> 2 hdd 5.45799 osd.2 down 0 1.00000
> 3 hdd 5.45799 osd.3 down 0 1.00000
> 4 hdd 5.45799 osd.4 down 0 1.00000
> 5 hdd 5.45799 osd.5 down 0 1.00000
> 6 hdd 5.45799 osd.6 down 0 1.00000
> 7 hdd 5.45799 osd.7 down 0 1.00000
> 8 hdd 5.45799 osd.8 down 0 1.00000
> 9 hdd 5.45799 osd.9 down 0 1.00000
> 10 hdd 5.45799 osd.10 down 0 1.00000
> 11 hdd 5.45799 osd.11 down 0 1.00000
> 12 hdd 5.45799 osd.12 down 0 1.00000
> 13 hdd 5.45799 osd.13 down 0 1.00000
> 14 hdd 5.45799 osd.14 down 0 1.00000
> 15 hdd 5.45799 osd.15 down 0 1.00000
> 16 hdd 5.45799 osd.16 down 0 1.00000
> 17 hdd 5.45799 osd.17 down 0 1.00000
> 18 hdd 5.45799 osd.18 down 0 1.00000
> 19 hdd 5.45799 osd.19 down 0 1.00000
> 20 hdd 5.45799 osd.20 down 0 1.00000
> 21 hdd 5.45799 osd.21 down 0 1.00000
> 22 hdd 5.45799 osd.22 down 0 1.00000
> 23 hdd 5.45799 osd.23 down 0 1.00000
> 24 hdd 5.45799 osd.24 down 0 1.00000
> 25 hdd 5.45799 osd.25 down 0 1.00000
> 26 hdd 5.45799 osd.26 down 0 1.00000
> 27 hdd 5.45799 osd.27 down 0 1.00000
> -11 152.82401 host ctplosd5
> 112 hdd 5.45799 osd.112 up 1.00000 1.00000
> 113 hdd 5.45799 osd.113 up 1.00000 1.00000
> 114 hdd 5.45799 osd.114 up 1.00000 1.00000
> 115 hdd 5.45799 osd.115 up 1.00000 1.00000
> 116 hdd 5.45799 osd.116 up 1.00000 1.00000
> 117 hdd 5.45799 osd.117 up 1.00000 1.00000
> 118 hdd 5.45799 osd.118 up 1.00000 1.00000
> 119 hdd 5.45799 osd.119 up 1.00000 1.00000
> 120 hdd 5.45799 osd.120 up 1.00000 1.00000
> 121 hdd 5.45799 osd.121 up 1.00000 1.00000
> 122 hdd 5.45799 osd.122 up 1.00000 1.00000
> 123 hdd 5.45799 osd.123 up 1.00000 1.00000
> 124 hdd 5.45799 osd.124 up 1.00000 1.00000
> 125 hdd 5.45799 osd.125 up 1.00000 1.00000
> 126 hdd 5.45799 osd.126 up 1.00000 1.00000
> 127 hdd 5.45799 osd.127 up 1.00000 1.00000
> 128 hdd 5.45799 osd.128 up 1.00000 1.00000
> 129 hdd 5.45799 osd.129 up 1.00000 1.00000
> 130 hdd 5.45799 osd.130 up 1.00000 1.00000
> 131 hdd 5.45799 osd.131 up 1.00000 1.00000
> 132 hdd 5.45799 osd.132 up 1.00000 1.00000
> 133 hdd 5.45799 osd.133 up 1.00000 1.00000
> 134 hdd 5.45799 osd.134 up 1.00000 1.00000
> 135 hdd 5.45799 osd.135 up 1.00000 1.00000
> 136 hdd 5.45799 osd.136 up 1.00000 1.00000
> 137 hdd 5.45799 osd.137 up 1.00000 1.00000
> 138 hdd 5.45799 osd.138 up 1.00000 1.00000
> 139 hdd 5.45799 osd.139 up 1.00000 1.00000
> -7 152.82401 host ctplosd6
> 57 hdd 5.45799 osd.57 up 1.00000 1.00000
> 58 hdd 5.45799 osd.58 up 1.00000 1.00000
> 59 hdd 5.45799 osd.59 up 1.00000 1.00000
> 60 hdd 5.45799 osd.60 up 1.00000 1.00000
> 61 hdd 5.45799 osd.61 up 1.00000 1.00000
> 62 hdd 5.45799 osd.62 up 1.00000 1.00000
> 63 hdd 5.45799 osd.63 up 1.00000 1.00000
> 64 hdd 5.45799 osd.64 up 1.00000 1.00000
> 65 hdd 5.45799 osd.65 up 1.00000 1.00000
> 66 hdd 5.45799 osd.66 up 1.00000 1.00000
> 67 hdd 5.45799 osd.67 up 1.00000 1.00000
> 68 hdd 5.45799 osd.68 up 1.00000 1.00000
> 69 hdd 5.45799 osd.69 up 1.00000 1.00000
> 70 hdd 5.45799 osd.70 up 1.00000 1.00000
> 71 hdd 5.45799 osd.71 up 1.00000 1.00000
> 72 hdd 5.45799 osd.72 up 1.00000 1.00000
> 73 hdd 5.45799 osd.73 up 1.00000 1.00000
> 74 hdd 5.45799 osd.74 up 1.00000 1.00000
> 75 hdd 5.45799 osd.75 up 1.00000 1.00000
> 76 hdd 5.45799 osd.76 up 1.00000 1.00000
> 77 hdd 5.45799 osd.77 up 1.00000 1.00000
> 78 hdd 5.45799 osd.78 up 1.00000 1.00000
> 79 hdd 5.45799 osd.79 up 1.00000 1.00000
> 80 hdd 5.45799 osd.80 up 1.00000 1.00000
> 81 hdd 5.45799 osd.81 up 1.00000 1.00000
> 82 hdd 5.45799 osd.82 up 1.00000 1.00000
> 83 hdd 5.45799 osd.83 up 1.00000 1.00000
> 84 hdd 5.45799 osd.84 up 1.00000 1.00000
> -5 152.82401 host ctplosd7
> 28 hdd 5.45799 osd.28 up 1.00000 1.00000
> 29 hdd 5.45799 osd.29 up 1.00000 1.00000
> 30 hdd 5.45799 osd.30 up 1.00000 1.00000
> 31 hdd 5.45799 osd.31 up 1.00000 1.00000
> 32 hdd 5.45799 osd.32 up 1.00000 1.00000
> 33 hdd 5.45799 osd.33 up 1.00000 1.00000
> 34 hdd 5.45799 osd.34 up 1.00000 1.00000
> 35 hdd 5.45799 osd.35 up 1.00000 1.00000
> 36 hdd 5.45799 osd.36 up 1.00000 1.00000
> 37 hdd 5.45799 osd.37 up 1.00000 1.00000
> 38 hdd 5.45799 osd.38 up 1.00000 1.00000
> 39 hdd 5.45799 osd.39 up 1.00000 1.00000
> 40 hdd 5.45799 osd.40 up 1.00000 1.00000
> 41 hdd 5.45799 osd.41 up 1.00000 1.00000
> 42 hdd 5.45799 osd.42 up 1.00000 1.00000
> 43 hdd 5.45799 osd.43 up 1.00000 1.00000
> 44 hdd 5.45799 osd.44 up 1.00000 1.00000
> 45 hdd 5.45799 osd.45 up 1.00000 1.00000
> 46 hdd 5.45799 osd.46 up 1.00000 1.00000
> 47 hdd 5.45799 osd.47 up 1.00000 1.00000
> 48 hdd 5.45799 osd.48 up 1.00000 1.00000
> 49 hdd 5.45799 osd.49 up 1.00000 1.00000
> 50 hdd 5.45799 osd.50 up 1.00000 1.00000
> 51 hdd 5.45799 osd.51 up 1.00000 1.00000
> 52 hdd 5.45799 osd.52 up 1.00000 1.00000
> 53 hdd 5.45799 osd.53 up 1.00000 1.00000
> 54 hdd 5.45799 osd.54 up 1.00000 1.00000
> 55 hdd 5.45799 osd.55 up 1.00000 1.00000
> -9 152.82401 host ctplosd8
> 56 hdd 5.45799 osd.56 up 1.00000 1.00000
> 85 hdd 5.45799 osd.85 up 1.00000 1.00000
> 86 hdd 5.45799 osd.86 up 1.00000 1.00000
> 87 hdd 5.45799 osd.87 up 1.00000 1.00000
> 88 hdd 5.45799 osd.88 up 1.00000 1.00000
> 89 hdd 5.45799 osd.89 up 1.00000 1.00000
> 90 hdd 5.45799 osd.90 up 1.00000 1.00000
> 91 hdd 5.45799 osd.91 up 1.00000 1.00000
> 92 hdd 5.45799 osd.92 up 1.00000 1.00000
> 93 hdd 5.45799 osd.93 up 1.00000 1.00000
> 94 hdd 5.45799 osd.94 up 1.00000 1.00000
> 95 hdd 5.45799 osd.95 up 1.00000 1.00000
> 96 hdd 5.45799 osd.96 up 1.00000 1.00000
> 97 hdd 5.45799 osd.97 up 1.00000 1.00000
> 98 hdd 5.45799 osd.98 up 1.00000 1.00000
> 99 hdd 5.45799 osd.99 up 1.00000 1.00000
> 100 hdd 5.45799 osd.100 up 1.00000 1.00000
> 101 hdd 5.45799 osd.101 up 1.00000 1.00000
> 102 hdd 5.45799 osd.102 up 1.00000 1.00000
> 103 hdd 5.45799 osd.103 up 1.00000 1.00000
> 104 hdd 5.45799 osd.104 up 1.00000 1.00000
> 105 hdd 5.45799 osd.105 up 1.00000 1.00000
> 106 hdd 5.45799 osd.106 up 1.00000 1.00000
> 107 hdd 5.45799 osd.107 up 1.00000 1.00000
> 108 hdd 5.45799 osd.108 up 1.00000 1.00000
> 109 hdd 5.45799 osd.109 up 1.00000 1.00000
> 110 hdd 5.45799 osd.110 up 1.00000 1.00000
> 111 hdd 5.45799 osd.111 up 1.00000 1.00000
> root@ctplmon1:~# ceph osd pool ls detail
> pool 9 'default.rgw.buckets.data' erasure profile ec-32-profile size 5
> min_size 4 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32
> autoscale_mode on last_change 128267 lfor 0/127784/127779 flags
> hashpspool,ec_overwrites stripe_width 12288 application rgw
>
> ----
>
> The affected pool is pool number 9 and the host is ctplosd1. This is the
> host I removed in the first place (to reinstall the OS). I have now added
> this host back to the cluster, but the OSDs on this host cannot be brought
> back to the up state for some reason, even though the OSD processes are
> running on the host.
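> A minimal sketch of what I am checking for OSDs that run but never get marked up
> (osd.0 is only an example id; this assumes the reinstalled host can still reach
> the mons and the cluster network):
>
> journalctl -u ceph-osd@0 | tail -n 50   # on ctplosd1: heartbeat or bind errors?
> ceph daemon osd.0 status                # on ctplosd1: does the daemon see a current osdmap epoch?
> ceph osd dump | grep '^osd.0 '          # on a mon: addresses recorded for osd.0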
>
> Kind regards,
> Rok
>
>
>
>
>
> On Thu, May 27, 2021 at 3:32 PM Eugen Block <eblock(a)nde.ag> wrote:
>
>> Hi,
>>
>> this sounds like your crush rule(s) for one or more pools can't place
>> the PGs because the host is missing. Please share
>>
>> ceph pg dump pgs_brief | grep undersized
>> ceph osd tree
>> ceph osd pool ls detail
>>
>> and the crush rule(s) for the affected pool(s).
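>> A sketch of how the rule for a pool can be pulled out (substitute the affected
>> pool and the rule name it reports):
>>
>> ceph osd pool get <poolname> crush_rule
>> ceph osd crush rule dump <rulename>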
>>
>>
>> Quoting Rok Jaklič <rjaklic(a)gmail.com>:
>>
>> > Hi,
>> >
>> > I have removed one node, but now ceph seems to be stuck in:
>> > Degraded data redundancy: 67/2393 objects degraded (2.800%), 12 pgs
>> > degraded, 12 pgs undersized
>> >
>> > How to "force" rebalancing? Or should I just wait a little bit more?
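>> > (Side note, as a sketch only: recovery should start on its own once the PGs
>> > can be placed again; "ceph -s" and "ceph health detail" are enough to watch
>> > whether the degraded object count is going down.)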
>> >
>> > Kind regards,
>> > rok
>> > _______________________________________________
>> > ceph-users mailing list -- ceph-users(a)ceph.io
>> > To unsubscribe send an email to ceph-users-leave(a)ceph.io
>>
>>
>> _______________________________________________
>> ceph-users mailing list -- ceph-users(a)ceph.io
>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>>
>