After upgrading Octopus from 15.1.0 to 15.1.1, I'm seeing this error for the
dashboard:
cluster:
id: c5233cbc-e9c2-4db3-85e1-423737a95a8c
health: HEALTH_ERR
Module 'dashboard' has failed: ('pwdUpdateRequired',)
Also, executing any command results in:
Error EIO: Module 'dashboard' has experienced an error and cannot handle
commands:
('pwdUpdateRequired',)
How can I fix this problem? With the dashboard disabled, Ceph status is back
to OK, but as soon as I re-enable the dashboard the ERR comes back.
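For reference, I'm toggling the module with the standard mgr commands,
nothing unusual in my setup:

ceph mgr module disable dashboard
ceph mgr module enable dashboard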
Thanks,
Gencer.
Wait, your client name is just "1"? In that case you need to specify
that in your mount command:
mount ... -o name=1,secret=...
It has to match your ceph auth settings, where "client" is only a
prefix and is followed by the client's name:
[client.1]
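You can double-check what the cluster has for that client with:

ceph auth get client.1

and then mount with the matching name (the secret file path below is just
an example):

mount -t ceph ceph-n4:6789:/ /ceph -o name=1,secretfile=/root/client.1.secret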
Quoting "Dungan, Scott A." <sdungan(a)caltech.edu>:
> Tried that:
>
> [client.1]
> key = *******************************
> caps mds = "allow rw path=/"
> caps mon = "allow r"
> caps osd = "allow rw tag cephfs pool=meta_data, allow rw pool=data"
>
> No change.
>
>
> ________________________________
> From: Yan, Zheng <ukernel(a)gmail.com>
> Sent: Sunday, March 22, 2020 9:28 PM
> To: Dungan, Scott A. <sdungan(a)caltech.edu>
> Cc: Eugen Block <eblock(a)nde.ag>; ceph-users(a)ceph.io <ceph-users(a)ceph.io>
> Subject: Re: [ceph-users] Re: Cephfs mount error 1 = Operation not permitted
>
> On Sun, Mar 22, 2020 at 8:21 AM Dungan, Scott A. <sdungan(a)caltech.edu> wrote:
>>
>> Zitat, thanks for the tips.
>>
>> I tried appending the key directly in the mount command
>> (secret=<CLIENT.1.SECRET>) and that produced the same error.
>>
>> I took a look at the thread you suggested and I ran the commands
>> that Paul at Croit suggested, even though the ceph dashboard
>> showed "cephfs" as already set as the application on both my data
>> and metadata pools:
>>
>> [root@ceph-n4 ~]# ceph osd pool application set data cephfs data cephfs
>> set application 'cephfs' key 'data' to 'cephfs' on pool 'data'
>> [root@ceph-n4 ~]# ceph osd pool application set meta_data cephfs
>> metadata cephfs
>> set application 'cephfs' key 'metadata' to 'cephfs' on pool 'meta_data'
>>
>> No change. I get the "mount error 1 = Operation not permitted"
>> error the same as before.
>>
>> I also tried manually editing the caps osd pool tags for my
>> client.1, to allow rw to both the data pool as well as the metadata
>> pool, as suggested further in the thread:
>>
>> [client.1]
>> key = ***********************************
>> caps mds = "allow rw path=all"
>
>
> try replacing this with "allow rw path=/"
>
>> caps mon = "allow r"
>> caps osd = "allow rw tag cephfs pool=meta_data, allow rw pool=data"
>>
>> No change.
>>
>> ________________________________
>> From: Eugen Block <eblock(a)nde.ag>
>> Sent: Saturday, March 21, 2020 1:16 PM
>> To: ceph-users(a)ceph.io <ceph-users(a)ceph.io>
>> Subject: [ceph-users] Re: Cephfs mount error 1 = Operation not permitted
>>
>> I just remembered there was a thread [1] about that a couple of weeks
>> ago. Seems like you need to add the capabilities to the client.
>>
>> [1]
>> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/23FDDSYBCDV…
>>
>>
>> Quoting Eugen Block <eblock(a)nde.ag>:
>>
>> > Hi,
>> >
>> > have you tried to mount with the secret only instead of a secret file?
>> >
>> > mount -t ceph ceph-n4:6789:/ /ceph -o name=client.1,secret=<SECRET>
>> >
>> > If that works your secret file is not right. If not you should check
>> > if the client actually has access to the cephfs pools ('ceph auth
>> > list').
>> >
>> >
>> >
>> > Quoting "Dungan, Scott A." <sdungan(a)caltech.edu>:
>> >
>> >> I am still very new to ceph and I have just set up my first small
>> >> test cluster. I have Cephfs enabled (named cephfs) and everything
>> >> is good in the dashboard. I added an authorized user key for cephfs
>> >> with:
>> >>
>> >> ceph fs authorize cephfs client.1 / r / rw
>> >>
>> >> I then copied the key to a file with:
>> >>
>> >> ceph auth get-key client.1 > /tmp/client.1.secret
>> >>
>> >> Copied the file over to the client and then attempted to mount with the
>> >> kernel driver:
>> >>
>> >> mount -t ceph ceph-n4:6789:/ /ceph -o
>> >> name=client.1,secretfile=/root/client.1.secret
>> >> mount error 1 = Operation not permitted
>> >>
>> >> I looked in the logs on the mds (which is also the mgr and mon for
>> >> the cluster) and I don't see any events logged for this. I also
>> >> tried the mount command with verbose and I didn't get any further
>> >> detail. Any tips would be most appreciated.
>> >>
>> >> --
>> >>
>> >> Scott Dungan
>> >> California Institute of Technology
>> >> Office: (626) 395-3170
>> >> sdungan(a)caltech.edu<mailto:sdungan@caltech.edu>
>> >>
>> >> _______________________________________________
>> >> ceph-users mailing list -- ceph-users(a)ceph.io
>> >> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>>
>>
>> _______________________________________________
>> ceph-users mailing list -- ceph-users(a)ceph.io
>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
Hi everyone,
As we wrap up Octopus and kick off development for Pacific, now seems
like a good idea to sort out what to call the Q release.
Traditionally/historically, these have always been names of cephalopod
species--usually the "common name", but occasionally a Latin name
(infernalis).
Q is a bit of a challenge since there aren't many of either that start
with Q. Nick Barcet found one: Quebecoceras, an extinct genus of nautilus
(https://en.wikipedia.org/wiki/Quebecoceras).
The only other Q cephalopod reference I could find was Squidward Q
Tentacles, a character (an octopus, strangely) from SpongeBob SquarePants,
and Yehuda figured out that the Q stands for Quincy.
So far that's it. If you can find any other options, please catalog them
on the etherpad:
https://pad.ceph.com/p/q
(or even get a head start on future releases.. they're always the
single-letter pads, e.g., https://pad.ceph.com/p/r).
sage
Dear All,
We are having problems with a critical OSD crashing on a Nautilus
(14.2.8) cluster.
This failure is critical because the OSD is part of a PG that is otherwise
"down+remapped" due to other OSDs crashing; we were hoping the PG was
going to repair itself, as there are plenty of free OSDs, but for some
reason it never managed to get out of an undersized state.
The OSD starts OK, runs for a few minutes, then crashes with an assert,
immediately after trying to backfill the PG that is "down+remapped":
-7> 2020-03-23 15:28:15.368 7f15aeea8700 5 osd.287 pg_epoch: 35531
pg[5.750s2( v 35398'3381328 (35288'3378238,35398'3381328]
local-lis/les=35530/35531 n=190408 ec=1821/1818 lis/c 35530/22903
les/c/f 35531/22917/0 35486/35530/35530)
[234,354,304,388,125,25,427,226,77,154]/[2147483647,2147483647,287,388,125,25,427,226,77,154]p287(2)
backfill=[234(0),304(2),354(1)] r=2 lpr=35530 pi=[22903,35530)/9 rops=1
crt=35398'3381328 lcod 0'0 mlcod 0'0
active+undersized+degraded+remapped+backfilling mbc={} trimq=112 ps=121]
backfill_pos is 5:0ae00653:::1000e49a8c6.000000d3:head
-6> 2020-03-23 15:28:15.381 7f15cc9ec700 10 monclient:
get_auth_request con 0x555b2f229800 auth_method 0
-5> 2020-03-23 15:28:15.381 7f15b86bb700 2 osd.287 35531
ms_handle_reset con 0x555b2fef7400 session 0x555b2f363600
-4> 2020-03-23 15:28:15.391 7f15c04c5700 5 prioritycache
tune_memory target: 4294967296 mapped: 805339136 unmapped: 1032192 heap:
806371328 old mem: 2845415832 new mem: 2845415832
-3> 2020-03-23 15:28:15.420 7f15cc9ec700 10 monclient:
get_auth_request con 0x555b2fef7800 auth_method 0
-2> 2020-03-23 15:28:15.420 7f15b86bb700 2 osd.287 35531
ms_handle_reset con 0x555b2fef7c00 session 0x555b2f363c00
-1> 2020-03-23 15:28:15.476 7f15aeea8700 -1
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.8/rpm/el7/BUILD/ceph-14.2.8/src/osd/osd_types.cc:
In function 'uint64_t SnapSet::get_clone_bytes(snapid_t) const' thread
7f15aeea8700 time 2020-03-23 15:28:15.470166
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.8/rpm/el7/BUILD/ceph-14.2.8/src/osd/osd_types.cc:
5443: FAILED ceph_assert(clone_size.count(clone))
osd log (127KB) is here:
<https://www.mrc-lmb.cam.ac.uk/scicomp/ceph-osd.287.log.gz>
/var/log/ceph/ceph-osd.287.log.gz
When the OSD was running, the PG state was as follows:
[root@ceph7 ~]# ceph pg dump | grep ^5.750
5.750 190408 0 804 190119 0
569643615603 0 0 3090 3090
active+undersized+degraded+remapped+backfill_wait 2020-03-23
14:37:57.582509 35398'3381328 35491:3265627
[234,354,304,388,125,25,427,226,77,154] 234
[NONE,NONE,287,388,125,25,427,226,77,154] 287 24471'3200829
2020-01-28 15:48:35.574934 24471'3200829 2020-01-28
15:48:35.574934 112
With the OSD down:
[root@ceph7 ~]# ceph pg dump | grep ^5.750
dumped all
5.750 190408 0 0 0 0
569643615603 0 0 3090
3090 down+remapped 2020-03-23
15:28:28.345176 35398'3381328 35532:3265613
[234,354,304,388,125,25,427,226,77,154] 234
[NONE,NONE,NONE,388,125,25,427,226,77,154] 388 24471'3200829
2020-01-28 15:48:35.574934 24471'3200829 2020-01-28 15:48:35.574934
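In case it helps, we also captured a full query of the PG while the OSD was
briefly up (standard command, output not shown here):

ceph pg 5.750 query > /tmp/pg.5.750.query.json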
This cluster is being used to back up a live CephFS cluster and has 1.8PB
of data, including 30 days of snapshots. We are using 8+2 EC.
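One stopgap we are considering, in case it is sane, is pausing backfill
cluster-wide so osd.287 at least stays up while we investigate (not a fix,
just to buy time):

ceph osd set nobackfill
# ...investigate, gather logs...
ceph osd unset nobackfill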
Any help appreciated,
Jake
Note: I am working from home until further notice.
For help, contact unixadmin(a)mrc-lmb.cam.ac.uk
--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
Phone 01223 267019
Mobile 0776 9886539
Hello dear cephers,
lately, there's been some discussion about slow requests hanging
in "wait for new map" status. At least in my case, it's being caused
by osdmaps not being properly trimmed. I tried all possible steps
to force osdmap pruning (restarting mons, restarting everything,
poking the crushmap), to no avail. Still all OSDs keep min osdmap version
1, while the newest is 4734. Otherwise the cluster is healthy, with no down
OSDs, network communication works flawlessly, and all seems to be fine.
I just can't get old osdmaps to go away. It's a very small cluster and I've
moved all production traffic elsewhere, so I'm free to investigate
and debug; however, I'm out of ideas on what to try or where to look.
Any ideas somebody please?
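For the record, this is how I'm reading the map ranges, if I'm
interpreting the report fields right (per-OSD check assumes the default
admin socket location, osd.0 is just an example):

# oldest/newest committed osdmap according to the mons
ceph report 2>/dev/null | grep -E 'osdmap_(first|last)_committed'

# per-OSD view, run on the OSD's host
ceph daemon osd.0 status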
The cluster is running 13.2.8
I'd be very grateful for any tips
with best regards
nikola ciprich
--
-------------------------------------
Ing. Nikola CIPRICH
LinuxBox.cz, s.r.o.
28.rijna 168, 709 00 Ostrava
tel.: +420 591 166 214
fax: +420 596 621 273
mobil: +420 777 093 799
www.linuxbox.cz
mobil servis: +420 737 238 656
email servis: servis(a)linuxbox.cz
-------------------------------------
Hello,
my goal is to back up a Proxmox cluster with rbd-mirror for disaster
recovery. Promoting/demoting etc. works great.
But how can I access a single file on the mirrored cluster? I tried:
root@ceph01:~# rbd-nbd --read-only map cluster5-rbd/vm-114-disk-1
--cluster backup
/dev/nbd1
But I get:
root@ceph01:~# fdisk -l /dev/nbd1
fdisk: cannot open /dev/nbd1: Input/output error
dmesg shows stuff like:
[Thu Mar 19 09:29:55 2020] nbd1: unable to read partition table
[Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
Here is my state:
root@ceph01:~# rbd --cluster backup mirror pool status cluster5-rbd --verbose
health: OK
images: 3 total
3 replaying
vm-106-disk-0:
global_id: 0bc18ee1-1749-4787-a45d-01c7e946ff06
state: up+replaying
description: replaying, master_position=[object_number=3, tag_tid=2,
entry_tid=3], mirror_position=[object_number=3, tag_tid=2,
entry_tid=3], entries_behind_master=0
last_update: 2020-03-19 09:29:17
vm-114-disk-1:
global_id: 2219ffa9-a4e0-4f89-b352-ff30b1ffe9b9
state: up+replaying
description: replaying, master_position=[object_number=390,
tag_tid=6, entry_tid=334290], mirror_position=[object_number=382,
tag_tid=6, entry_tid=328526], entries_behind_master=5764
last_update: 2020-03-19 09:29:17
vm-115-disk-0:
global_id: 2b0af493-14c1-4b10-b557-84928dc37dd1
state: up+replaying
description: replaying, master_position=[object_number=72,
tag_tid=1, entry_tid=67796], mirror_position=[object_number=72,
tag_tid=1, entry_tid=67796], entries_behind_master=0
last_update: 2020-03-19 09:29:17
More dmesg stuff:
[Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:29:55 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:30:02 2020] blk_update_request: 95 callbacks suppressed
[Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 0
[Thu Mar 19 09:30:02 2020] buffer_io_error: 94 callbacks suppressed
[Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block
0, async page read
[Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 1
[Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block
1, async page read
[Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 2
[Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block
2, async page read
[Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 3
[Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block
3, async page read
[Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 4
[Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block
4, async page read
[Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 5
[Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block
5, async page read
[Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 6
[Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block
6, async page read
[Thu Mar 19 09:30:02 2020] block nbd1: Other side returned error (30)
[Thu Mar 19 09:30:02 2020] blk_update_request: I/O error, dev nbd1, sector 7
[Thu Mar 19 09:30:02 2020] Buffer I/O error on dev nbd1, logical block
7, async page read
Do I have to stop the replaying, or how can I mount the image on the
backup cluster?
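(For what it's worth, the dmesg error (30) looks like EROFS, i.e. the
non-primary image refusing access while replay is active.) One approach I
have seen suggested, but not tried yet, is to snapshot the image on the
primary, let the snapshot replicate, then clone and map the clone on the
backup cluster; the snapshot name "restore" below is just a placeholder:

# on the primary cluster
rbd snap create cluster5-rbd/vm-114-disk-1@restore
rbd snap protect cluster5-rbd/vm-114-disk-1@restore

# on the backup cluster, once the snapshot has replicated
rbd --cluster backup clone cluster5-rbd/vm-114-disk-1@restore cluster5-rbd/vm-114-restore
rbd-nbd --read-only map cluster5-rbd/vm-114-restore --cluster backup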
Thanks,
Michael
Hi Lenz,
Yeah, I saw the PR and I still hit the issue today. In the meantime, while
@Volker investigates, is there a workaround to bring the dashboard back? Or
should I wait for @Volker's investigation?
P.S.: I found this too: https://tracker.ceph.com/issues/44271
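One thing I could try, if you think it's safe: resetting the dashboard
user's password, since the error mentions pwdUpdateRequired. If I read the
docs right, on 15.x the password has to be supplied via a file ("admin"
below is just my dashboard user name):

echo -n 'SomeNewPassword1!' > /tmp/dash-pass
ceph dashboard ac-user-set-password admin -i /tmp/dash-pass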
Thanks,
Gencer.
Hi Gencer,
On 2020-03-22 09:37, Gencer W. Genç wrote:
...
Hmm, I thought we had fixed that bug by merging the following fix:
https://github.com/ceph/ceph/pull/33513
@Volker, would you mind taking a look at this? Thanks in advance!
Lenz
--
SUSE Software Solutions Germany GmbH - Maxfeldstr. 5 - 90409 Nuernberg
GF: Felix Imendörffer, HRB 36809 (AG Nürnberg)
Hi all,
I have a large distributed Ceph cluster that recently broke, with all PGs
housed at a single site getting marked as 'unknown' after a run of the Ceph
Ansible playbook (which was being used to expand the cluster at a third
site). Is there a way to recover the location of PGs in this state, or a
way to fall back to a previous config where things were working? Or a way
to scan the OSDs to determine which PGs are housed there? All the OSDs are
still in place and reporting as healthy; it's just the PG locations that
are missing. For info: the Ceph cluster is used to provide a single shared
CephFS mount for a distributed batch cluster, and it includes workers and
pools of OSDs from three different OpenStack clouds.
Ceph version: 13.2.8
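In case scanning is the way to go, here is what I was planning to try for
listing PGs per OSD (osd.12 and the data path are just examples); I'd
welcome confirmation that this is the right approach:

# online: list PGs mapped to a given OSD
ceph pg ls-by-osd osd.12

# offline, on the OSD host, with the OSD stopped
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --op list-pgs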
Here is the system health:
[root@euclid-edi-ctrl-0 ~]# ceph -s
cluster:
id: 0fe7e967-ecd6-46d4-9f6b-224539073d3b
health: HEALTH_WARN
insufficient standby MDS daemons available
1 MDSs report slow metadata IOs
Reduced data availability: 1024 pgs inactive
6 slow ops, oldest one blocked for 244669 sec, mon.euclid-edi-ctrl-0 has slow ops
too few PGs per OSD (26 < min 30)
services:
mon: 4 daemons, quorum euclid-edi-ctrl-0,euclid-cam-proxy-0,euclid-imp-proxy-0,euclid-ral-proxy-0
mgr: euclid-edi-ctrl-0(active), standbys: euclid-imp-proxy-0, euclid-cam-proxy-0, euclid-ral-proxy-0
mds: cephfs-2/2/2 up {0=euclid-ral-proxy-0=up:active,1=euclid-cam-proxy-0=up:active}
osd: 269 osds: 269 up, 269 in
data:
pools: 5 pools, 5120 pgs
objects: 30.54 M objects, 771 GiB
usage: 3.8 TiB used, 41 TiB / 45 TiB avail
pgs: 20.000% pgs unknown
4095 active+clean
1024 unknown
1 active+clean+scrubbing
OSD Pools:
[root@euclid-edi-ctrl-0 ~]# ceph osd lspools
1 cephfs_data
2 cephfs_metadata
3 euclid_cam
4 euclid_ral
5 euclid_imp
[root@euclid-edi-ctrl-0 ~]# ceph pg dump_pools_json
dumped pools
POOLID OBJECTS MISSING_ON_PRIMARY DEGRADED MISPLACED UNFOUND BYTES OMAP_BYTES* OMAP_KEYS* LOG DISK_LOG
5 0 0 0 0 0 0 0 0 0 0
1 16975540 0 0 0 0 79165311663 0 0 6243475 6243475
2 5171099 0 0 0 0 551991405 126879876 270829 3122183 3122183
3 8393436 0 0 0 0 748466429315 0 0 1556647 1556647
4 0 0 0 0 0 0 0 0 0 0
[root@euclid-edi-ctrl-0 ~]# ceph health detail
...
PG_AVAILABILITY Reduced data availability: 1024 pgs inactive
pg 4.3c8 is stuck inactive for 246794.767182, current state unknown, last acting []
pg 4.3ca is stuck inactive for 246794.767182, current state unknown, last acting []
pg 4.3cb is stuck inactive for 246794.767182, current state unknown, last acting []
pg 4.3d0 is stuck inactive for 246794.767182, current state unknown, last acting []
pg 4.3d1 is stuck inactive for 246794.767182, current state unknown, last acting []
pg 4.3d2 is stuck inactive for 246794.767182, current state unknown, last acting []
pg 4.3d3 is stuck inactive for 246794.767182, current state unknown, last acting []
pg 4.3d4 is stuck inactive for 246794.767182, current state unknown, last acting []
pg 4.3d5 is stuck inactive for 246794.767182, current state unknown, last acting []
pg 4.3d6 is stuck inactive for 246794.767182, current state unknown, last acting []
pg 4.3d7 is stuck inactive for 246794.767182, current state unknown, last acting []
pg 4.3d8 is stuck inactive for 246794.767182, current state unknown, last acting []
pg 4.3d9 is stuck inactive for 246794.767182, current state unknown, last acting []
pg 4.3da is stuck inactive for 246794.767182, current state unknown, last acting []
...
[root@euclid-edi-ctrl-0 ~]# ceph pg map 4.3c8
osdmap e284992 pg 4.3c8 (4.3c8) -> up [] acting []
Cheers,
Mark