I want to perform a non-cephadm upgrade from Quincy to Reef. The reason for not using cephadm is that I do not want to run Ceph in containers.
My test deployment is as given below.
Total cluster hosts : 5
ceph-mon hosts: 3
ceph-mgr hosts: 3 (ceph-mgr active on one node; the other ceph-mgr daemons each run on a ceph-mon host)
ceph-mds : 1
ceph-osd : 5 (one ceph-osd on each host in the cluster)
While I try to follow the steps at https://docs.ceph.com/en/latest/releases/reef/#upgrading-non-cephadm-cluste… , at the step "Upgrade monitors by installing the new packages and restarting the monitor daemons", upgrading only ceph-mon with "apt upgrade ceph-mon" upgrades all packages, including ceph-mgr, ceph-mds, ceph-osd, etc., because the ceph-mon package depends on them.
My question is: does this mean I need to upgrade all ceph packages (ceph, ceph-common) and restart only the monitor daemons first? Or is there a way to upgrade only the ceph-mon package first, then ceph-mgr, ceph-osd, and so on?
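One workaround, sketched here under the assumption that the packages really cannot be upgraded independently on Debian/Ubuntu: upgrade the binaries together, but stagger the daemon restarts in the documented order, since the new code only takes effect when a daemon restarts.

```
# Upgrade all ceph packages on the host (dependencies pull them in together).
apt update
apt install --only-upgrade ceph ceph-common

# Restart daemon classes in the documented order, verifying health between steps.
systemctl restart ceph-mon.target
ceph -s                               # wait until all mons rejoin the quorum
systemctl restart ceph-mgr.target
systemctl restart ceph-osd.target     # ideally one host at a time, watching ceph -s
systemctl restart ceph-mds.target
```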
Hi Hualong and Ilya,
Thanks for your help.
More info:
I am trying to build Ceph on a Milk-V Pioneer board (RISC-V arch; the OS is Fedora RISC-V, kernel 6.1.55).
The Ceph code being used was downloaded from GitHub last week (master branch).
Currently I am working on environment cleanup (I suspect my work environment is not clean).
I will try to switch to v19.0.0 and rebuild.
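In case it is useful, switching to the tag might look like this sketch (assuming a plain git checkout of ceph.git):

```
git fetch --tags origin
git checkout v19.0.0
# Ceph vendors many dependencies as submodules; sync them to the tag.
git submodule update --init --recursive
```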
Will let you know if further help needed.
Thanks.
Best Regards,
Dongchuan
Ilya Dryomov <idryomov(a)gmail.com> wrote on Wednesday, 6 March 2024 at 21:13:
On Wed, Mar 6, 2024 at 7:41 AM Feng, Hualong <hualong.feng(a)intel.com> wrote:
>
> Hi Dongchuan
>
> Could I know which version or commit you are building, and your environment (system, CPU, kernel)?
>
> This command should be OK without QAT: ./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo
Hi Hualong,
I don't think this is true. In main, both WITH_QATLIB and WITH_QATZIP
default to ON unless the system is aarch64. IIRC I needed to append -D
WITH_QATLIB=OFF -D WITH_QATZIP=OFF to build without QAT.
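Spelled out as a single invocation, that would be something like the following sketch (flag names as above; check your checkout's CMake options if they have changed since):

```
./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DWITH_QATLIB=OFF -DWITH_QATZIP=OFF
```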
Thanks,
Ilya
>
> Thanks
> -Hualong
>
> > -----Original Message-----
> > From: 张东川 <zhangdongchuan(a)metastonecorp.com>
> > Sent: Wednesday, March 6, 2024 9:51 AM
> > To: ceph-users <ceph-users(a)ceph.io>
> > Subject: [ceph-users] How to build ceph without QAT?
> >
> > Hi guys,
> >
> >
> > I tried both following commands.
> > Neither of them worked.
> >
> >
> > "./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo -DWITH_QAT=OFF
> > -DWITH_QATDRV=OFF -DWITH_QATZIP=OFF"
> > "ARGS="-DWITH_QAT=OFF -DWITH_QATDRV=OFF -
> > DWITH_QATZIP=OFF" ./do_cmake.sh -
> > DCMAKE_BUILD_TYPE=RelWithDebInfo"
> >
> >
> > I still see errors like:
> > make[1]: *** [Makefile:4762: quickassist/lookaside/access_layer/src/sample_code/performance/framework/linux/user_space/cpa_sample_code-cpa_sample_code_utils.o] Error 1
> >
> >
> >
> >
> > So what's the proper way to configure build commands?
> > Thanks a lot.
> >
> >
> > Best Regards,
> > Dongchuan
I have deployed a Ceph cluster with cephadm which has three monitors and three OSDs.
Each node has one interface on the 192.168.0.0/24 network.
I want to change the addresses of the machines to the 10.4.4.0/24 range.
Is there a way to make this change without data loss or downtime?
I changed the public_network in the mon config and changed the node IPs, but it did not work.
How can I solve this problem?
```
ceph orch host ls
HOST     ADDR           LABELS      STATUS
ceph-01  192.168.0.130  _admin,rgw
ceph-02  192.168.0.131  _admin,rgw
ceph-03  192.168.0.132  _admin,rgw
3 hosts in cluster
```
```
[root@ceph-01 ~]# ceph config get mon public_network
192.168.0.0/24
```
```
[root@ceph-01 ~]# ceph orch ls
NAME                               PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager                       ?:9093,9094  1/1      112s ago   9M   count:1
ceph-exporter                                   3/3      114s ago   8M   *
crash                                           3/3      114s ago   9M   *
grafana                            ?:3000       1/1      112s ago   8M   count:1
mgr                                             2/2      113s ago   9M   count:2
mon                                             3/3      114s ago   8M   count:3
node-exporter                      ?:9100       3/3      114s ago   9M   *
osd.dashboard-admin-1685787597651               6        114s ago   8M   *
prometheus                         ?:9095       1/1      112s ago   3M   count:1
```
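One possible path, sketched under the assumption that hosts are re-addressed one at a time with quorum verified in between (this is not an official procedure, so double-check it against the cephadm documentation before touching the mons):

```
# 1. Temporarily allow both networks while migrating.
ceph config set mon public_network 192.168.0.0/24,10.4.4.0/24

# 2. After re-addressing a host at the OS level, update its address in cephadm
#    (10.4.4.130 is a hypothetical new address for ceph-01).
ceph orch host set-addr ceph-01 10.4.4.130

# 3. Redeploy that host's mon so it binds to the new network; wait for quorum
#    to recover (ceph -s) before moving on to the next host.
ceph orch daemon rm mon.ceph-01 --force
ceph orch daemon add mon ceph-01:10.4.4.130

# 4. Once all mons have moved, restrict public_network to the new range.
ceph config set mon public_network 10.4.4.0/24
```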
Hi Cephers,
These are the topics covered in today's meeting:
- *Releases*
- *Hotfix Releases*
- *18.2.2*
- https://github.com/ceph/ceph/pull/55491 - reef: mgr/prometheus: fix
orch check to prevent Prometheus crash
- https://github.com/ceph/ceph/pull/55709 - reef: debian/*.postinst: add
adduser as a dependency and specify --home when adduser
- https://github.com/ceph/ceph/pull/55712 - reef: src/osd/OSDMap.cc: Fix
encoder to produce same bytestream
- [Laura] When/who can upgrade the LRC for this release? (Dan will after
we do some last checks today)
- Gibba has been upgraded with no apparent issues
- *17.2.8* (was on hold pending the osdmap fix requirement) - *No
longer needed*
- osdmap fix (not needed:
https://github.com/ceph/ceph/blob/quincy/src/osd/OSDMap.cc#L3087-L3089)
- Rook requests including this c-v fix for a regression that blocks OSDs in
some scenarios (not worthy of its own hotfix, just please include it if we
do the hotfix)
- https://github.com/ceph/ceph/pull/54522 - quincy: ceph-volume: fix a
regression in raw list
- As the crc fix is not needed, the Rook request can be included in a
regular quincy release
- *Regular releases*
- *18.2.3* - exporter fixes for rook and debian-derived users make this
more urgent than quincy
- mgr usage of pyO3/cryptography an issue for debian - and possibly
centos9 (https://tracker.ceph.com/issues/64213#note-2) - see notes from
02/07
- Any updates on potentially dropping modules or another fix? Adam?
- Squid *19.1.0*
- CephFS waiting for 2 feature PRs
- RGW PRs
- NVMe? To be confirmed with Aviv
- squid blockers:
- build centos 9 containers:
https://github.com/ceph/ceph-container/pull/2183
- ceph-object-corpus:
https://github.com/ceph/ceph-object-corpus/pull/17 (testing
in https://github.com/ceph/ceph/pull/54735)
- Milestone for squid blockers (used to tag blockers for the first 19.1.0
RC): https://github.com/ceph/ceph/milestone/21
- Squid RCs and community testing
- https://pad.ceph.com/p/squid_scale_testing
- Target date March ~20
- *17.2.9*
- Need jammy builds for quincy before a squid release. Maybe we can just
build them for the 17.2.7 release? (Do you mean the 17.2.8 release?)
- *Meeting time*: change the day to Monday or Thursday? (added by Josh,
who has a conflict on Wednesdays now)
- Thursday has several conflicting community meetings
- Any objections to Monday at the same time?
- Note the change to US daylight savings next week
- Let's do a poll (Doodle)
- *debian-reef_OLD email thread "[ceph-users] debian-reef_OLD?"*
- Fixed by Yuri
- *CDM APAC tonight*:
https://tracker.ceph.com/projects/ceph/wiki/CDM_06-MAR-2024
- *Sepia Lab*:
- PSA: https://github.com/ceph/ceph/pull/55820 merged (squid crontab
additions and overhaul to nightlies)
- New grafana widget for smithi node utilization:
-
https://grafana-route-grafana.apps.os.sepia.ceph.com/d/teuthology/teutholog…
- (Basically: unlocked machine-hours / total machine-hours)
- [Zac] *ceph-exporter release notes question from Jan Horacek* (from
the upstream community)
- Route to Juanmi Olmo
- [Zac] - *Eugen Block's question about removing sensitive information
from ceph-users mailing list*
- No easy way to request/remove sensitive information.
- [Zac] - *Anthony D'Atri submits Index HQ in Toronto as a possible
venue for Cephalocon 2024*
- Venue already booked (Patrick)
- [Zac] - *CQ issue 4 -- submit your requests before 25 Mar 2024* --
zac.dover(a)proton.me
Kind Regards,
Ernesto Puerta
Hi guys,
I am very new to Ceph clusters, but after multiple attempts I was able to install a Ceph Reef cluster on Debian 12 with the cephadm tool in a test environment, with 2 mons and 3 OSDs on VMs. All seemed good and I was exploring more about it, so I rebooted the cluster and found that I am no longer able to access the Ceph dashboard. I tried to check this:
root@ceph-mon-01:/# ceph orch ls
2024-03-01T08:53:05.051+0000 7ff7602b8700 0 monclient(hunting): authenticate timed out after 300
[errno 110] RADOS timed out (error connecting to the cluster)
I have not configured RADOS, and I have no clue about it. Any help would be very much appreciated; I keep hitting the same issue.
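For what it's worth, RADOS is Ceph's core storage layer rather than something you configure separately; that timeout usually means no monitor is reachable. A hedged troubleshooting sketch, assuming a cephadm deployment (the daemon and unit names are illustrative):

```
# Did the mon containers come back after the reboot?
systemctl list-units 'ceph*'      # look for failed ceph-<fsid>@mon.* units
cephadm ls                        # daemons cephadm manages on this host

# If a mon unit is down, try starting it and reading its log:
systemctl start 'ceph-<fsid>@mon.ceph-mon-01.service'   # <fsid> is your cluster fsid
cephadm logs --name mon.ceph-mon-01
```

Note also that 2 mons is worse than 1 for availability: losing either one of them breaks quorum.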
Hi!
I have been reading some Ceph ebooks and documentation and learning about it. The goal of all this is to create rock-solid storage for virtual machines. After all that learning I have not been able to answer this question by myself, so I was wondering if perhaps you could clarify my doubt.
Let's imagine three datacenters, each one with, for instance, 4 virtualization hosts. As I was planning to build a solution for different hypervisors, I have been thinking of the following environment:
- I planned to have my Ceph storage (with different pools inside) with OSDs in three different datacenters (as failure domains).
- Each datacenter's hosts will access a redundant NFS service in their own datacenter.
- Each datacenter's redundant NFS service will be composed of two NFS gateways accessing the OSDs of the placement group located in that datacenter. I planned to achieve this with OSD weights, getting the CRUSH algorithm to build the map so that each datacenter ends up having the OSD in its own datacenter as the primary of the placement group. Obviously, replica OSDs will exist in the other datacenters, and I don't discard using erasure coding in some manner.
- The NFS gateways could be a redundant NFS gateway service from Ceph (I have seen they have now developed something for this purpose: https://docs.ceph.com/en/quincy/mgr/nfs/ ; a small sketch follows below) or perhaps two different Debian machines accessing Ceph with rados and sharing that information to the hypervisors over NFS. In the case of Debian machines, I have heard of good results using pacemaker/corosync to provide HA for that NFS (between 0.5 and 3 seconds for failover until the service is up again).
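The built-in option would look roughly like this sketch of the mgr/nfs module (the cluster name dc1-nfs, the host names, and the CephFS name vmfs are all hypothetical):

```
# Two ganesha daemons in datacenter 1 for redundancy:
ceph nfs cluster create dc1-nfs "2 dc1-host-a,dc1-host-b"

# Export a CephFS path for the hypervisors to mount:
ceph nfs export create cephfs --cluster-id dc1-nfs \
    --pseudo-path /vmstore --fsname vmfs
```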
What do you think about this plan? Do you see it as feasible? We will also work with KVM, where we could access Ceph directly, but I would also need to provide storage for Xen and VMware.
Thank you so much in advance,
Cheers!
Hi Dongchuan
Could I know which version or commit you are building, and your environment (system, CPU, kernel)?
This command should be OK without QAT: ./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo
Thanks
-Hualong
> -----Original Message-----
> From: 张东川 <zhangdongchuan(a)metastonecorp.com>
> Sent: Wednesday, March 6, 2024 9:51 AM
> To: ceph-users <ceph-users(a)ceph.io>
> Subject: [ceph-users] How to build ceph without QAT?
>
> Hi guys,
>
>
> I tried both following commands.
> Neither of them worked.
>
>
> "./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo -DWITH_QAT=OFF
> -DWITH_QATDRV=OFF -DWITH_QATZIP=OFF"
> "ARGS="-DWITH_QAT=OFF -DWITH_QATDRV=OFF -
> DWITH_QATZIP=OFF" ./do_cmake.sh -
> DCMAKE_BUILD_TYPE=RelWithDebInfo"
>
>
> I still see errors like:
> make[1]: *** [Makefile:4762: quickassist/lookaside/access_layer/src/sample_code/performance/framework/linux/user_space/cpa_sample_code-cpa_sample_code_utils.o] Error 1
>
>
>
>
> So what's the proper way to configure build commands?
> Thanks a lot.
>
>
> Best Regards,
> Dongchuan
Is there an easy way to poll the Ceph cluster buckets to see how much space is remaining? And is it possible to see how much cluster space is remaining overall? I am trying to extract the data from our Ceph cluster and put it into a format our SolarWinds can understand, as whole-number integers, so we can monitor bucket allocated space and overall cluster space in the cluster as a whole.
Via Canonical support, they said I can do something like "sudo ceph df -f json-pretty" to pull the information, but what do I need to look at in the output (see below) to send over to SolarWinds?
{
"stats": {
"total_bytes": 960027263238144,
"total_avail_bytes": 403965214187520,
"total_used_bytes": 556062049050624,
"total_used_raw_bytes": 556062049050624,
"total_used_raw_ratio": 0.57921481132507324,
"num_osds": 48,
"num_per_pool_osds": 48,
"num_per_pool_omap_osds": 48
},
"stats_by_class": {
"ssd": {
"total_bytes": 960027263238144,
"total_avail_bytes": 403965214187520,
"total_used_bytes": 556062049050624,
"total_used_raw_bytes": 556062049050624,
"total_used_raw_ratio": 0.57921481132507324
}
},
And a couple of data pools...
{
"name": "default.rgw.jv-va-pool.data",
"id": 65,
"stats": {
"stored": 4343441915904,
"objects": 17466616,
"kb_used": 12774490932,
"bytes_used": 13081078714368,
"percent_used": 0.053900588303804398,
"max_avail": 76535973281792
}
},
{
"name": "default.rgw.jv-va-pool.index",
"id": 66,
"stats": {
"stored": 42533675008,
"objects": 401,
"kb_used": 124610380,
"bytes_used": 127601028363,
"percent_used": 0.00055542576592415571,
"max_avail": 76535973281792
}
},
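If I read the question right, the relevant fields would be stats.total_avail_bytes for the cluster as a whole and each pool's stats.max_avail (the projected usable space left for that pool, after replication). A small sketch, assuming jq is installed (the field names come from the output above):

```
# Overall cluster bytes remaining (raw, before replication):
sudo ceph df -f json | jq '.stats.total_avail_bytes'

# Per-pool stored bytes and projected usable bytes remaining:
sudo ceph df -f json | jq '.pools[] | {name: .name, stored: .stats.stored, max_avail: .stats.max_avail}'
```

Whole-number byte values like these should map directly onto SolarWinds integer metrics.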
Dear Team,
I am facing an issue which could be a bug, but I am not able to find any solution.
We are unable to create a bucket on Ceph from the Ceph dashboard.
A bucket gets created fine with a fresh/different name, but when I try with a previously deleted bucket name, it is not created.
I have tested on the Octopus and Pacific versions.
I have tested on (Octopus and Pacific version).
Steps to reproduce:
a. Create an RGW user from the Ceph dashboard (user1).
b. Create a bucket with the bucket owner as user1 (bucket1).
c. Delete the bucket.
d. Delete the user.
e. Create a user again with the same name (user1).
f. Create a new bucket with the bucket owner as user1 (Bucket2).
I get the error message below:
RGW REST API failed request with status code 403
(b'{"Code":"InvalidAccessKeyId","RequestId":"tx00000457ff5169168a9e3-00648afcd0'
b'-fdd1c2-dev","HostId":"fdd1c2-dev-india"}')
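One diagnostic worth trying, sketched under the assumption that the dashboard is still holding the deleted user's credentials (recreating a user generates new keys, and InvalidAccessKeyId points at a stale key somewhere):

```
# The recreated user's current access key:
radosgw-admin user info --uid=user1

# The credentials the dashboard uses to talk to RGW (Octopus/Pacific):
ceph dashboard get-rgw-api-access-key
```

If the two differ and the dashboard was configured with user1's old key, resetting it via ceph dashboard set-rgw-api-access-key may be worth trying.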
Hi guys,
I tried both of the following commands.
Neither of them worked.
"./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo -DWITH_QAT=OFF -DWITH_QATDRV=OFF -DWITH_QATZIP=OFF"
"ARGS="-DWITH_QAT=OFF -DWITH_QATDRV=OFF -DWITH_QATZIP=OFF" ./do_cmake.sh -DCMAKE_BUILD_TYPE=RelWithDebInfo"
I still see errors like:
make[1]: *** [Makefile:4762: quickassist/lookaside/access_layer/src/sample_code/performance/framework/linux/user_space/cpa_sample_code-cpa_sample_code_utils.o] Error 1
So what's the proper way to configure build commands?
Thanks a lot.
Best Regards,
Dongchuan