Dear all,
After enabling "allow_standby_replay" on our cluster, we are getting
lots of identical errors in the client's /var/log/messages, like:
Apr 29 14:21:26 hal kernel: ceph: mdsmap_decode got incorrect
state(up:standby-replay)
We are using the mainline (ml) kernel 5.6.4-1.el7 on Scientific Linux 7.8
Cluster and client are running Ceph v14.2.9
Setting was enabled with:
# ceph fs set cephfs allow_standby_replay true
[root@ceph-s1 ~]# ceph mds stat
cephfs:1 {0=ceph-s3=up:active} 1 up:standby-replay 2 up:standby
Is this something to worry about, or should we just disable allow_standby_replay?
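For reference, if disabling it turns out to be the right call, I assume we would
just revert with the inverse of the command above and then check that the
standby-replay daemon drops back to a plain standby:
# ceph fs set cephfs allow_standby_replay false
# ceph mds stat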
Any advice appreciated,
many thanks
Jake
Note: I am working from home until further notice.
For help, contact unixadmin(a)mrc-lmb.cam.ac.uk
--
Dr Jake Grimmett
Head Of Scientific Computing
MRC Laboratory of Molecular Biology
Francis Crick Avenue,
Cambridge CB2 0QH, UK.
Phone 01223 267019
Mobile 0776 9886539
Does anyone know of any new statements from the Ceph community or foundation regarding EAR?
I read the legal page of ceph.com, which mentions some information:
https://ceph.com/legal-page/terms-of-service/
But I am still not sure whether, if my clients and I are within the scope of the entity list, use of the Ceph community edition complies with the corresponding laws, and whether this affects the current release and future releases.
On my relatively new Octopus cluster, I have one PG that has been
perpetually stuck in the 'unknown' state. It appears to belong to the
device_health_metrics pool, which was created automatically by the mgr
daemon(?).
The OSDs that the PG maps to (per 'ceph pg map') are all online and serving
other PGs, but when I list the PGs that those OSDs hold, the offending PG is
not listed.
# ceph pg dump pgs | grep ^1.0
dumped pgs
1.0  0 0 0 0 0  0 0 0  0 0  unknown  2020-08-08T09:30:33.251653-0500  0'0  0:0  []  -1  []  -1  0'0  2020-08-08T09:30:33.251653-0500  0'0  2020-08-08T09:30:33.251653-0500  0
# ceph osd pool stats device_health_metrics
pool device_health_metrics id 1
nothing is going on
# ceph pg map 1.0
osdmap e7199 pg 1.0 (1.0) -> up [41,40,2] acting [41,0]
What can be done to fix the PG? I tried doing a 'ceph pg repair 1.0',
but that didn't seem to do anything.
Is it safe to try to update the crush_rule for this pool so that the PG
gets mapped to a fresh set of OSDs?
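For reference, I assume that would boil down to something like the following,
where "replicated_nvme" is a hypothetical rule name that already exists in my
crush map:
# ceph osd pool set device_health_metrics crush_rule replicated_nvme
# ceph pg map 1.0
I have not run this yet, hence the question about whether it is safe.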
--Mike
Hello All,
I have a Nautilus (14.2.11) cluster which is running fine on CentOS 7
servers. 4 OSD nodes, 3 MON/MGR hosts. Now I wanted to enable iSCSI
gateway functionality to be used by some Solaris and FreeBSD clients. I
followed the instructions under
https://docs.ceph.com/docs/nautilus/rbd/iscsi-target-cli-manual-install
and
https://docs.ceph.com/docs/nautilus/rbd/iscsi-target-cli/
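For completeness, /etc/ceph/iscsi-gateway.cfg on both gateways looks roughly
like this (the API credentials and IPs here are placeholders, not the real ones):
[config]
cluster_name = ceph
gateway_keyring = ceph.client.admin.keyring
api_secure = false
api_user = admin
api_password = admin
api_port = 5000
trusted_ip_list = 172.29.1.171,172.29.1.172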
I set up two iSCSI targets and they both show up with gwcli on both gateways:
[root@osd1 ~]# gwcli ls
...
o- gateways ..................... [Up: 2/2, Portals: 2]
| o- osd1.mydomain.pri ......... [172.29.1.171 (UP)]
| o- osd2.mydomain.pri ......... [172.29.1.172 (UP)]
...
tcmu-runner, rbd-target-gw and rbd-target-api are active (running) on
both gateways, there is no firewall, and SELinux is disabled, but on the
dashboard the state of the gateways is "down", and ceph-mgr.mon2.log shows:
mgr[dashboard] iscsi REST API failed GET req status: 403
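In case it is relevant: the gateways were registered with the dashboard roughly
like this (same placeholder API user/password as in iscsi-gateway.cfg):
# ceph dashboard iscsi-gateway-list
# ceph dashboard iscsi-gateway-add http://admin:admin@osd1.mydomain.pri:5000
# ceph dashboard iscsi-gateway-add http://admin:admin@osd2.mydomain.pri:5000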
Any hints? Thank you.
Best
Willi
On Wed, Aug 26, 2020 at 10:33 AM Marc Roos <M.Roos(a)f1-outsourcing.eu> wrote:
>
> >>
> >>
> >> I was wondering if anyone is using ceph csi plugins[1]? I would like
> to
> >> know how to configure credentials, that is not really described for
> >> testing on the console.
> >>
> >> I am running
> >> ./csiceph --endpoint unix:///tmp/mesos-csi-XSJWlY/endpoint.sock
> --type
> >> rbd --drivername rbd.csi.ceph.com --nodeid test
> >>
> >> Connection is fine
> >> [ ~]# csc identity plugin-info
> >> "rbd.csi.ceph.com" "canary"
> >>
> >> However I have no idea how to configure the clientid, pool etc in
> the
> >> volumes
> >>
> >>
> >> [1]
> >> https://github.com/ceph/ceph-csi
> >>
> >> Ps. I am not using kubernetes.
> >
> >The credentials and Ceph cluster configuration metadata are passed via
> >the RPC calls as per the CSI spec. In k8s, these details would be
> >stored in StorageClass and Secret objects.
>
> So there is no way of testing this driver via the commandline, with
> some generic grpc client?
>
You would need some way to inject the correct/expected gRPC calls as
per the CSI spec [1]. Data from the StorageClass mostly gets
translated and sent via the "parameters" argument and data from the
Secret gets translated and sent via the "secrets" argument of various
gRPC requests.
[1] https://github.com/container-storage-interface/spec/blob/master/spec.md
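As a rough sketch (field names are from the CSI spec; the parameter and secret
keys shown here follow the ceph-csi rbd examples, so double-check them against
the version you run), a hand-built CreateVolumeRequest would carry roughly:
{
  "name": "test-vol",
  "capacity_range": { "required_bytes": 1073741824 },
  "parameters": { "clusterID": "<cluster fsid>", "pool": "rbd" },
  "secrets": { "userID": "admin", "userKey": "<cephx key>" }
}
(volume_capabilities and the other required fields are omitted for brevity)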
--
Jason
On Wed, Aug 26, 2020 at 10:11 AM Marc Roos <M.Roos(a)f1-outsourcing.eu> wrote:
>
>
>
> I was wondering if anyone is using ceph csi plugins[1]? I would like to
> know how to configure credentials, that is not really described for
> testing on the console.
>
> I am running
> ./csiceph --endpoint unix:///tmp/mesos-csi-XSJWlY/endpoint.sock --type
> rbd --drivername rbd.csi.ceph.com --nodeid test
>
> Connection is fine
> [ ~]# csc identity plugin-info
> "rbd.csi.ceph.com" "canary"
>
> However I have no idea how to configure the clientid, pool etc in the
> volumes
>
>
> [1]
> https://github.com/ceph/ceph-csi
>
> Ps. I am not using kubernetes.
The credentials and Ceph cluster configuration metadata are passed via
the RPC calls as per the CSI spec. In k8s, these details would be
stored in StorageClass and Secret objects.
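A rough sketch of what those objects look like for the rbd driver (the
parameter keys follow the examples in the ceph-csi repo and may vary by release):
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster fsid>
  pool: rbd
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: admin
  userKey: <cephx key>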
--
Jason
I was wondering if anyone is using ceph csi plugins[1]? I would like to
know how to configure credentials, that is not really described for
testing on the console.
I am running
./csiceph --endpoint unix:///tmp/mesos-csi-XSJWlY/endpoint.sock --type
rbd --drivername rbd.csi.ceph.com --nodeid test
Connection is fine
[ ~]# csc identity plugin-info
"rbd.csi.ceph.com" "canary"
However I have no idea how to configure the clientid, pool etc in the
volumes
[1]
https://github.com/ceph/ceph-csi
Ps. I am not using kubernetes.