Hello,
I am trying to set up a test cluster with the cephadm tool on Ubuntu 20.04 nodes. Following the directions at https://docs.ceph.com/en/octopus/cephadm/install/, I have set up the monitor and manager on a management node, and added two hosts that I want to use for storage. All storage devices present on those nodes are included in the output of `ceph orch device ls`, and all are marked “available”. However, when I try to deploy OSDs with `ceph orch apply osd -i spec.yml`, following the example for HDD+SSD storage spec at https://docs.ceph.com/en/latest/cephadm/drivegroups/#the-simple-case, I see the new service in the output of `ceph orch ls`, but it is not running anywhere (“0/2”), and no OSDs get created. I am not sure how to debug this, and any pointers would be much appreciated.
Thank you,
Davor
Output:
```
# ceph orch host ls
INFO:cephadm:Inferring fsid 150b5f1a-64bf-11eb-a7e9-d96bd5ac4db3
INFO:cephadm:Inferring config /var/lib/ceph/150b5f1a-64bf-11eb-a7e9-d96bd5ac4db3/mon.sps-head/config
INFO:cephadm:Using recent ceph image ceph/ceph:v15
HOST ADDR LABELS STATUS
sps-head sps-head mon
sps-st1 sps-st1 mon
sps-st2 sps-st2
# ceph orch device ls
INFO:cephadm:Inferring fsid 150b5f1a-64bf-11eb-a7e9-d96bd5ac4db3
INFO:cephadm:Inferring config /var/lib/ceph/150b5f1a-64bf-11eb-a7e9-d96bd5ac4db3/mon.sps-head/config
INFO:cephadm:Using recent ceph image ceph/ceph:v15
Hostname Path Type Serial Size Health Ident Fault Available
sps-head /dev/nvme0n1 ssd S5JXNS0N504446R 1024G Unknown N/A N/A Yes
sps-st1 /dev/nvme0n1 ssd S5JXNS0N504948D 1024G Unknown N/A N/A Yes
sps-st1 /dev/nvme1n1 ssd S5JXNS0N504958T 1024G Unknown N/A N/A Yes
sps-st1 /dev/sdb hdd 5000cca28ed36018 14.0T Unknown N/A N/A Yes
sps-st1 /dev/sdc hdd 5000cca28ed353e5 14.0T Unknown N/A N/A Yes
[…]
# cat /mnt/osd_spec.yml
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: 'sps-st[1-6]'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
[**After running `ceph orch apply osd -i spec.yml`:**]
# ceph orch ls
NAME RUNNING REFRESHED AGE PLACEMENT IMAGE NAME IMAGE ID
alertmanager 1/1 9m ago 6h count:1 docker.io/prom/alertmanager:v0.20.0 0881eb8f169f
crash 3/3 9m ago 6h * docker.io/ceph/ceph:v15 5553b0cb212c
grafana 1/1 9m ago 6h count:1 docker.io/ceph/ceph-grafana:6.6.2 a0dce381714a
mgr 2/2 9m ago 6h count:2 docker.io/ceph/ceph:v15 5553b0cb212c
mon 1/2 9m ago 3h label:mon docker.io/ceph/ceph:v15 5553b0cb212c
node-exporter 0/3 - - * <unknown> <unknown>
osd.default_drive_group 0/2 - - sps-st[1-6] <unknown> <unknown>
prometheus 1/1 9m ago 6h count:1 docker.io/prom/prometheus:v2.18.1 de242295e225
[** I am not sure why neither “osd.default_drive_group” nor “node-exporter” is running anywhere. How do I check that? **]
# ceph osd tree
INFO:cephadm:Inferring fsid 150b5f1a-64bf-11eb-a7e9-d96bd5ac4db3
INFO:cephadm:Inferring config /var/lib/ceph/150b5f1a-64bf-11eb-a7e9-d96bd5ac4db3/mon.sps-head/config
INFO:cephadm:Using recent ceph image ceph/ceph:v15
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0 root default
# ceph orch --version
ceph version 15.2.8 (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable)
```
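(On the question above of how to check why osd.default_drive_group and node-exporter are not running anywhere, here is a sketch of commands that surface what cephadm is doing; the spec path is the one shown above, and --dry-run availability may depend on the exact Octopus point release:)
```
# Preview what the spec would create, without deploying OSDs (if this release supports it)
ceph orch apply osd -i /mnt/osd_spec.yml --dry-run

# Raise cephadm logging and watch what the orchestrator is attempting on the hosts
ceph config set mgr mgr/cephadm/log_to_cluster_level debug
ceph -W cephadm            # or: ceph log last cephadm

# Force a fresh device scan and re-check overall health
ceph orch device ls --refresh
ceph health detail
```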
Hello,
We have a freshly installed multisite setup across three geographic locations, running Octopus upgraded from 15.2.5 to 15.2.7.
Each DC has 6 OSD nodes and 3 mon/mgr/rgw nodes, all SSD, with every 3 SSDs sharing 1 NVMe for journaling. Each zone is backed by 3 RGWs, one on each mon/mgr node.
The goal is to replicate 2 (currently) big buckets within the zonegroup, but replication only works if I disable and re-enable bucket sync.
By big buckets I mean: one bucket is presharded to 9000 shards (for 9 billion objects), and the second bucket, the one I'm detailing in this ticket, to 24000 shards (for 24 billion objects).
Once sync has picked up the objects (not all of them, only the ones that were on the source site at the time sync was enabled), it slows down dramatically, from roughly 100,000 objects and 10 GB per 15 minutes to about 50 objects per 4 hours.
Once it has synchronized after a disable/enable, something maxes out the NVMe/SSD drives on the OSD nodes with an operation I cannot identify. Let me show you the symptoms below.
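(For reference, the disable/re-enable cycle mentioned above corresponds to commands of this form, run against the bucket being replicated:)
```
# Disable and re-enable per-bucket sync on the affected bucket
radosgw-admin bucket sync disable --bucket=pix-bucket
radosgw-admin bucket sync enable --bucket=pix-bucket
```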
Let me summarize as much as I can.
We have 1 realm; in this realm we have 1 zonegroup (please help me check whether the sync policies are okay), and in this zonegroup we have 1 cluster in the US, 1 in Hong Kong (master) and 1 in Singapore.
Here is the realm, zonegroup and zones definition: https://pastebin.com/raw/pu66tqcf
Let me show you one disable/enable operation, where I disabled the pix-bucket on the HKG master site and then re-enabled it.
In this screenshot: https://i.ibb.co/WNC0gNQ/6nodes6day.png
the highlighted area is when the data sync is running after a disable/enable; you can see almost no activity there. When sync is not running, the green and yellow lines are the NVMe rocksdb+WAL drives. The screenshot shows the SSD/NVMe disk utilization of the 6 Singapore nodes. On the first node there is no green and yellow in the last few hours because I reinstalled all the OSDs on that node so they no longer use NVMe.
In the first screenshot below you can see the HKG object usage, where the user is uploading the objects. The second screenshot shows SGP, where the highlighted area is the disable/enable operation.
HKG where user upload: https://i.ibb.co/vj2VFYP/pixhkg6d.png
SGP where sync happened: https://i.ibb.co/w41rmQT/pixsgp6d.png
Let me show you some troubleshooting output covering bucket sync status, cluster sync status, the reshard list (which might be left over from previous testing), and the sync error list:
https://pastebin.com/raw/TdwiZFC1
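(For reference, output of that kind is typically gathered with commands like these, using the bucket mentioned above:)
```
# Zone-wide sync state, per-bucket sync state, pending reshards and accumulated errors
radosgw-admin sync status
radosgw-admin bucket sync status --bucket=pix-bucket
radosgw-admin reshard list
radosgw-admin sync error list
```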
The issue might be very similar to this issue:
https://tracker.ceph.com/issues/21591
Where should I go from here, and what further logs can I provide to help?
Thank you in advance
I'm trying to resize a block device using "rbd resize". The block device
is pretty huge (100+ TB). The resize has been running for over a week, and
I have no idea if it's actually doing anything, or if it's just hanging or
in some infinite loop. Is there any way of getting a progress report from
the resize to get an idea if this is ever going to finish?
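(For what it's worth, rbd resize should normally print a percentage progress line when shrinking unless --no-progress was given. Failing that, a rough gauge, assuming this is a shrink and using placeholder pool/image names, is to watch the image's backing objects being trimmed:)
```
# Note the image's block_name_prefix (placeholder pool/image names)
rbd info mypool/bigimage | grep block_name_prefix

# Count the remaining data objects; the count should drop as the shrink trims them
rados -p mypool ls | grep -c '<block_name_prefix>'
```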
Thanks!
This is an odd one. I don't hit it all the time, so I don't think it's expected behavior.
Sometimes I have no issues enabling rbd-mirror snapshot mode on an RBD image while it is in use by a KVM VM. Other times I hit the following error, and the only way I can get around it is to power down the KVM VM.
root@Ccscephtest1:~# rbd mirror image enable CephTestPool1/vm-101-disk-0 snapshot
2021-01-29T09:29:07.875-0500 7f1e99ffb700 -1 librbd::mirror::snapshot::CreatePrimaryRequest: 0x7f1e7c012440 handle_create_snapshot: failed to create mirror snapshot: (22) Invalid argument
2021-01-29T09:29:07.875-0500 7f1e99ffb700 -1 librbd::mirror::EnableRequest: 0x5597667fd200 handle_create_primary_snapshot: failed to create initial primary snapshot: (22) Invalid argument
2021-01-29T09:29:07.875-0500 7f1ea559f3c0 -1 librbd::api::Mirror: image_enable: cannot enable mirroring: (22) Invalid argument
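(A sketch of checks that might narrow down what differs between the working and failing cases, using the same pool and image as above:)
```
# Pool-level mirroring mode and configured peers
rbd mirror pool info CephTestPool1

# Image features and any existing (mirror) snapshots
rbd info CephTestPool1/vm-101-disk-0
rbd snap ls --all CephTestPool1/vm-101-disk-0

# Current watchers; the running KVM VM shows up here
rbd status CephTestPool1/vm-101-disk-0
```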
Hi all,
I have a cluster with 116 disks (24 new 16 TB disks added in December, the rest 8 TB) running Nautilus 14.2.16.
I moved (8 months ago) from crush-compat to upmap balancing.
But the cluster does not seem well balanced: the number of PGs on the 8 TB disks varies from 26 to 52, and their utilization from 35% to 69%.
The recent 16 TB disks are more homogeneous, with 48 to 61 PGs and utilization between 30% and 43%.
Last week I realized that some OSDs might not be using upmap, because ceph osd crush weight-set ls returned (compat).
So I ran ceph osd crush weight-set rm-compat, which triggered some rebalancing. There has been no recovery for 2 days now, but the cluster is still unbalanced.
As far as I understand, upmap is supposed to reach an equal number of PGs on all disks (weighted by their capacity, I assume).
Thus I would expect roughly 30 PGs on the 8 TB disks, 60 on the 16 TB disks, and around 50% usage on all of them, which is far from the case.
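(A sketch of how to confirm upmap is actually active and to tighten the target deviation; upmap_max_deviation is the standard balancer module option:)
```
# Confirm the balancer is enabled and in upmap mode
ceph balancer status

# Ask the balancer to aim for at most 1 PG of deviation per OSD
ceph config set mgr mgr/balancer/upmap_max_deviation 1

# Per-OSD PG count and utilization, to watch the spread
ceph osd df tree
```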
The problem is that this impacts the free space available in the pools (264 TiB, while there is more than 578 TiB free in the cluster), because pool free space seems to be computed from the space left before the first OSD becomes full.
Is this normal? Did I miss something? What can I do?
F.
Hi,
I’ve never seen healthy output in our multisite sync status; almost all of the sync shards are recovering.
What can I do with recovering shards?
We have 1 realm, 1 zonegroup and inside the zonegroup we have 3 zones in 3 different geo location.
We are using octopus 15.2.7 for bucket sync with symmetrical replication.
The user is currently migrating their data, and the sites that replicate from the upload location are always behind.
I’ve restarted all RGWs and disabled/enabled bucket sync; it started to work, but I think that once the sync gets close to caught up it will stop again because of the recovering shards.
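(A sketch of per-shard checks that may show what the recovering shards are doing; the source zone and shard id below are placeholders:)
```
# Overall sync state as seen from this zone
radosgw-admin sync status

# Detail for a single recovering data-sync shard (placeholder source zone and shard id)
radosgw-admin data sync status --source-zone=hkg --shard-id=0

# Any accumulated sync errors
radosgw-admin sync error list
```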
Any idea?
Thank you