If I remember correctly, the ability to configure the RocksDB level
sizes was targeted for Octopus.
I was wondering whether this feature ever made it into the code, as it
would be useful when you want to use a drive smaller than 300 GB for
the WAL/DB.
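For context, the ~300 GB figure comes from RocksDB's default level sizing (a 256 MB level base with a 10x multiplier, so useful DB sizes land near roughly 3/30/300 GB). I don't know the final state of the Octopus work, but a sketch of tuning the level sizes through the existing bluestore_rocksdb_options knob might look like the following; the values are illustrative, not recommendations, and the default option string differs between releases:

```shell
# Illustrative only: shrink the RocksDB level base / multiplier so the levels
# fit a smaller WAL/DB device. Check the defaults on your release first with:
#   ceph config help bluestore_rocksdb_options
ceph config set osd bluestore_rocksdb_options \
  "compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,write_buffer_size=268435456,max_bytes_for_level_base=536870912,max_bytes_for_level_multiplier=8"

# The new sizing only takes effect after the OSD restarts (OSD id is an example):
systemctl restart ceph-osd@0
```

Note that this only changes how RocksDB lays out its levels; whether the Octopus feature went further than this knob is exactly the open question above.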
Hello,
Unfortunately, due to the COVID-19 pandemic, the Ceph Foundation is
looking into running future Ceph Days virtually. We have created a
survey to gather feedback from the community on how these events should run.
You can access the survey here: https://survey.zohopublic.com/zs/jsCsIn
We are interested in finding organizers to help with these events.
Please opt in via the form if you're interested.
--
Mike Perez
He/Him
Ceph Community Manager
Red Hat Los Angeles <https://www.redhat.com>
thingee@redhat.com
M: 1-951-572-2633 <tel:1-951-572-2633> IM: IRC Freenode/OFTC: thingee
494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA
@Thingee <https://twitter.com/thingee>
Hello, Ceph users,
does anybody use Ceph on the recently released CentOS 8? Apparently there
are no el8 packages either at download.ceph.com or in the native CentOS
package tree. I am thinking about upgrading my cluster to C8 (because of
other software running on it apart from Ceph). Do the el7 packages simply
work? Can they be rebuilt using rpmbuild --rebuild? Or is running Ceph on
C8 more complicated than that?
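In case it helps to frame the question, the rpmbuild route would look roughly like the sketch below. The SRPM filename is an example, not a real download path, and el7 BuildRequires names do not always exist on el8, so the builddep step is exactly where this is likely to break:

```shell
# Illustrative sketch: rebuild an el7 Ceph source RPM on a CentOS 8 host.
dnf install -y rpm-build dnf-plugins-core

# Resolve the build dependencies declared in the SRPM (may fail on el8
# because dependency package names changed between el7 and el8).
dnf builddep -y ceph-14.2.x-0.el7.src.rpm

# If the dependencies resolve, attempt the actual rebuild.
rpmbuild --rebuild ceph-14.2.x-0.el7.src.rpm
```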
Thanks,
-Yenya
--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| http://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
sir_clive> I hope you don't mind if I steal some of your ideas?
laryross> As far as stealing... we call it sharing here. --from rcgroups
Hello Team,
We've integrated our Ceph cluster with Kubernetes and are provisioning
volumes through rbd-provisioner. When we create volumes from YAML files
in Kubernetes (PV > PVC > mounting to a pod), the PVCs on the Kubernetes
side carry the meaningful names defined in the YAML files, but on the
Ceph side the rbd image is created with a dynamic UID in its name.
During troubleshooting this makes it tedious to find the exact rbd image.
Please find the provisioning output in the snippets pasted below.
kubectl get pods,pv,pvc
NAME            READY   STATUS    RESTARTS   AGE
pod/sleepypod   1/1     Running   0          4m9s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
persistentvolume/pvc-cd37d2d6-cecc-4a05-9736-c8d80abde7f5   1Gi        RWO            Delete           Bound    default/test-dyn-pvc   ceph-rbd                4m9s

NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/test-dyn-pvc   Bound    pvc-cd37d2d6-cecc-4a05-9736-c8d80abde7f5   1Gi        RWO            ceph-rbd       4m11s
*rbd-provisioner logs*
I1121 10:59:15.009012 1 provision.go:132] successfully created rbd image "kubernetes-dynamic-pvc-f4eac482-0c4d-11ea-8d70-8a582e0eb4e2"
I1121 10:59:15.009092 1 controller.go:1087] provision "default/test-dyn-pvc" class "ceph-rbd": volume "pvc-cd37d2d6-cecc-4a05-9736-c8d80abde7f5" provisioned
I1121 10:59:15.009138 1 controller.go:1101] provision "default/test-dyn-pvc" class "ceph-rbd": trying to save persistentvolume "pvc-cd37d2d6-cecc-4a05-9736-c8d80abde7f5"
I1121 10:59:15.020418 1 controller.go:1108] provision "default/test-dyn-pvc" class "ceph-rbd": persistentvolume "pvc-cd37d2d6-cecc-4a05-9736-c8d80abde7f5" saved
I1121 10:59:15.020476 1 controller.go:1149] provision "default/test-dyn-pvc" class "ceph-rbd": succeeded
I1121 10:59:15.020802 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-dyn-pvc", UID:"cd37d2d6-cecc-4a05-9736-c8d80abde7f5", APIVersion:"v1", ResourceVersion:"24545639", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-cd37d2d6-cecc-4a05-9736-c8d80abde7f5
*rbd image details in Ceph cluster end*
rbd -p kube ls --long
NAME                                                          SIZE   PARENT   FMT   PROT   LOCK
kubernetes-dynamic-pvc-f4eac482-0c4d-11ea-8d70-8a582e0eb4e2   1 GiB          2
Is there a way to set up a proper naming convention for the rbd images
as well during the Kubernetes deployment itself?
Kubernetes version: v1.15.5
Ceph cluster version: 14.2.2 nautilus (stable)
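For the troubleshooting side of this, the PV object itself records the backing image, so it can be looked up instead of guessed at. A sketch, assuming the in-tree RBD volume source that rbd-provisioner creates (object names taken from the output above):

```shell
# Map a PVC to its backing rbd image: PVC -> bound PV -> spec.rbd.image
PV=$(kubectl get pvc test-dyn-pvc -o jsonpath='{.spec.volumeName}')
kubectl get pv "$PV" -o jsonpath='{.spec.rbd.image}{"\n"}'
# should print the kubernetes-dynamic-pvc-... image name seen in `rbd ls`
```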
*Best Regards,*
*Palanisamy*
Hi list,
We have had the balancer plugin running in upmap mode for a while now:
health: HEALTH_OK
pgs:
1973 active+clean
194 active+remapped+backfilling
73 active+remapped+backfill_wait
recovery: 588 MiB/s, 343 objects/s
Our objects are stored on an EC pool. We got a PG_NOT_DEEP_SCRUBBED
alert and noticed that no scrubbing (literally zero) has been done
since the balancing started. Does anyone have an idea why this is
happening?
"ceph pg deep-scrub <pgid>" did not help.
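One avenue worth checking (an assumption, not a confirmed diagnosis): by default OSDs skip scrubbing while they participate in recovery/backfill (osd_scrub_during_recovery = false), and with a couple hundred PGs still backfilling that could plausibly explain zero scrubs. A sketch of inspecting and, if desired, overriding it:

```shell
# Check whether scrubs are suppressed while recovery is running
# (the `ceph config` interface assumes a Mimic/Nautilus-era cluster;
# on older releases query the daemon socket instead).
ceph config get osd osd_scrub_during_recovery

# Temporarily allow scrubbing alongside the backfill.
# Note: this adds I/O load on top of the 588 MiB/s recovery traffic.
ceph config set osd osd_scrub_during_recovery true
```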
Thanks!
--
Vytenis
Hi guys,
I have a Ceph cluster up and running with CephFS created (all done by
ceph-ansible).
I followed the guide to mount the volume on CentOS 7 via FUSE.
When I mount the volume as the default admin (client.admin), everything
works fine, just like a normal file system.
Then I created a new client just for FUSE mounting, following this guide:
https://docs.ceph.com/docs/master/cephfs/mount-prerequisites/
The ceph fs authorize command created a new client with the following caps:
[client.wp_test]
key = AQDAEc9ebLXjGhAAxEGqTuTvCOoN30g4UzF5jw==
caps mds = "allow rw"
caps mon = "allow r"
caps osd = "allow rw tag cephfs data=cephfs"
It can mount the volume, and I can touch a file. But when I try to write
data, such as editing a new text file, or to cat a file, I get a
read-only error, or:
[root@mon-6-26 ceph_root]# cat test.txt
cat: test.txt: Operation not permitted
If I modify the OSD cap to "allow *", then it allows writes again.
Can anyone suggest what have been done incorrectly?
We are using
14.2.9 (581f22da52345dba46ee232b73b990f06029a2a0)
nautilus (stable)
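One possible explanation worth ruling out: the OSD cap "allow rw tag cephfs data=cephfs" only matches pools that carry the cephfs application tag with a data key equal to the filesystem name, so a touch (a pure MDS metadata operation) can succeed while writing file data to the OSDs fails with EPERM. A sketch of checking the tags, assuming from the cap above that the filesystem is named cephfs; the data pool name is a placeholder to fill in from the `ceph fs ls` output:

```shell
# Show the filesystem and its data pool(s)
ceph fs ls

# Inspect the application tags on the data pool; for the cap to match,
# the 'cephfs' application entry should carry {"data": "cephfs"}
ceph osd pool application get <data-pool-name>
```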
Cheers,
Derrick
Hi,
Is there a way to recover the UUID of a partition?
Someone mapped the mounts in fstab to /dev/sd* device names instead of
UUIDs, and all the metadata is gone.
The data is there, we just can't mount it to access it.
Any idea how to get the UUID back, or determine what it was before?
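If the filesystem superblocks themselves are intact, the UUIDs can usually still be read straight off the partitions. A sketch (device names below are examples, not your actual devices):

```shell
# List filesystem type, label and UUID for every block device
lsblk -f

# Or query a single partition directly (example device name)
blkid /dev/sdb1

# Cross-check which kernel device currently sits behind each UUID
ls -l /dev/disk/by-uuid/
```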
Thank you.