Hi all,
I just created a Ceph cluster to use CephFS. After creating the CephFS
pools and the filesystem, I get the error below:
# ceph osd pool create cephfs_data 128
pool 'cephfs_data' created
# ceph osd pool create cephfs_metadata 128
pool 'cephfs_metadata' created
# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 6 and data pool 5
# ceph -s
cluster:
id: 1c27def45-f0f9-494d-sfke-eb4323432fd
health: HEALTH_ERR
1 filesystem is offline
1 filesystem is online with fewer MDS than max_mds
services:
mon: 2 daemons, quorum ceph-mon01,ceph-mon02
mgr: ceph-adm01(active)
mds: cephfs-0/0/1 up
osd: 12 osds: 12 up, 12 in
data:
pools: 2 pools, 256 pgs
objects: 0 objects, 0 B
usage: 12 GiB used, 588 GiB / 600 GiB avail
pgs: 256 active+clean
But when I check max_mds for the CephFS filesystem, it says 1:
# ceph fs get cephfs | grep max_mds
max_mds 1
Does anyone know what I am missing here? Any input is much appreciated.
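For reference, a hedged reading of the status output above: the fsmap summary "cephfs-0/0/1 up" is commonly read as <up>/<in>/<max_mds>, so zero MDS daemons are running against a max_mds of 1, which matches the "filesystem is offline" error. A minimal parsing sketch, assuming that format:

```shell
# Decode the "mds: cephfs-0/0/1 up" fsmap summary from "ceph -s".
# Assumption: the three numbers read as <up>/<in>/<max_mds>.
fsmap="cephfs-0/0/1 up"
counts=${fsmap#*-}       # strip the filesystem name -> "0/0/1 up"
counts=${counts%% *}     # drop the trailing state   -> "0/0/1"
up=${counts%%/*}         # first field  -> "0"
max=${counts##*/}        # last field   -> "1"
echo "up=$up max_mds=$max"   # up=0 max_mds=1: no MDS daemon is running
```

With zero MDS daemons up, the usual fix is deploying or starting an MDS daemon, not raising max_mds.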
Regards,
Ram
Ceph-explorer..
Hi all,
We have a few subdirs with an rctime in the future.
# getfattr -n ceph.dir.rctime session
# file: session
ceph.dir.rctime="2576387188.090"
I can't find any subdir or item in that directory with that rctime, so
I presume that there was previously a file with that rctime, and that
rctime cannot go backwards [1].
Is there any way to fix these rctimes so they show the latest ctime of
the subtree?
Also -- are we still relying on the client clock to set the rctime /
ctime of a file? Would it make sense to limit ctime/rctime for any
update to the current time on the MDS?
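For context, a quick decode of the rctime quoted above (a sketch assuming GNU date; the attribute is <seconds>.<nanoseconds> since the epoch) puts it decades in the future:

```shell
# ceph.dir.rctime as reported above: seconds.nanoseconds since the epoch
rctime="2576387188.090"
secs=${rctime%%.*}             # keep only the seconds part
date -u -d "@$secs" +%Y-%m-%d  # GNU date; prints a day in 2051
```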
Best Regards,
Dan
[1] https://github.com/ceph/ceph/pull/24023/commits/920ef964311a61fcc6c0d6671b7…
Hi,
We see that we have 5 'remapped' PGs, but we are unclear why, or what to do
about it. We shifted some target ratios for the autobalancer, which resulted
in this state. While adjusting the ratios we noticed two OSDs go down, but
we just restarted their containers with podman and they came back up.
Here's status output:
###################
root@ceph01:~# ceph status
INFO:cephadm:Inferring fsid x
INFO:cephadm:Inferring config x
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
cluster:
id: 41bb9256-c3bf-11ea-85b9-9e07b0435492
health: HEALTH_OK
services:
mon: 5 daemons, quorum ceph01,ceph04,ceph02,ceph03,ceph05 (age 2w)
mgr: ceph03.ytkuyr(active, since 2w), standbys: ceph01.aqkgbl,
ceph02.gcglcg, ceph04.smbdew, ceph05.yropto
osd: 168 osds: 168 up (since 2d), 168 in (since 2d); 5 remapped pgs
data:
pools: 3 pools, 1057 pgs
objects: 18.00M objects, 69 TiB
usage: 119 TiB used, 2.0 PiB / 2.1 PiB avail
pgs: 1056 active+clean
1 active+clean+scrubbing+deep
io:
client: 859 KiB/s rd, 212 MiB/s wr, 644 op/s rd, 391 op/s wr
root@ceph01:~#
###################
When I look at ceph pg dump, I don't see any PGs marked as remapped:
###################
root@ceph01:~# ceph pg dump |grep remapped
INFO:cephadm:Inferring fsid x
INFO:cephadm:Inferring config x
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
dumped all
root@ceph01:~#
###################
Any idea what might be going on, or how to recover? All OSDs are up, and
health is 'OK'. This is Ceph 15.2.4 deployed using cephadm in containers,
on Podman 2.0.3.
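One hedged observation on the above: the "remapped" count in ceph status is derived from osdmap entries (pg_temp / pg_upmap), not only from the per-PG state string, so it can disagree with a grep over pg dump. A sketch of two closer looks (Octopus command names assumed):

```shell
# List PGs the monitors consider remapped (state filter on "pg ls")
ceph pg ls remapped
# Check the osdmap for leftover balancer mappings that can keep the
# "remapped" counter non-zero even when all PGs show active+clean
ceph osd dump | grep -E 'pg_temp|pg_upmap'
```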
Hi,
I have Ceph 15.2.4 running in Docker. How do I configure RBD to use a
specific data pool? I tried putting the following line in ceph.conf, but
the change is not working:
[client.myclient]
rbd default data pool = Mydatapool
I need this configured for an erasure-coded pool with CloudStack.
Can anyone help? Where is the ceph.conf that I need to configure?
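In case it helps, a sketch of the two places I would try (assuming Octopus; "Mydatapool" is copied from the mail, and the option's canonical name uses underscores):

```shell
# Store the option in the cluster's centralized config database, which a
# containerized client reads without needing its own ceph.conf:
ceph config set client rbd_default_data_pool Mydatapool
# ceph.conf equivalent; the section must match the id the client connects
# as (a plain [client] section applies to all clients):
#   [client]
#   rbd_default_data_pool = Mydatapool
```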
Thanks.
Hi
Thanks for the reply.
cephadm runs the Ceph containers automatically. How do I set privileged
mode in a cephadm-managed container?
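For what it's worth, a sketch of a manual workaround, with paths and the daemon name as placeholders, not literal values: cephadm writes each daemon's container invocation to a per-daemon unit.run script, which could be edited to add --privileged:

```shell
# Locate the container run line for the NFS daemon
# (<fsid> and <daemon-name> are placeholders, e.g. nfs.mynfs.hostname)
grep -n 'run' /var/lib/ceph/<fsid>/<daemon-name>/unit.run
# After adding --privileged to that run line, restart the daemon's unit:
systemctl restart ceph-<fsid>@<daemon-name>.service
```

Note that cephadm may rewrite unit.run on redeploy, so this is a workaround rather than a persistent setting.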
--
> On 23/9/20 at 13:24, Daniel Gryniewicz wrote:
>> NFSv3 needs privileges to connect to the portmapper. Try running
>> your docker container in privileged mode, and see if that helps.
>>
>> Daniel
>>
>> On 9/23/20 11:42 AM, Gabriel Medve wrote:
>>> Hi,
>>>
>>> I have Ceph 15.2.5 running in Docker. I configured NFS-Ganesha with
>>> NFS version 3, but I cannot mount it.
>>> If I configure Ganesha with NFS version 4, I can mount it without
>>> problems, but I need version 3.
>>>
>>> The error is mount.nfs: Protocol not supported
>>>
>>> Can help me?
>>>
>>> Thanks.
>>>
>> _______________________________________________
>> ceph-users mailing list -- ceph-users(a)ceph.io
>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
Is it possible to disable the check for 'x pool(s) have no replicas
configured', so that I don't have this HEALTH_WARN constantly?
Or is there some other disadvantage to keeping some empty 1x-replication
test pools?
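If it helps, a sketch of one way to silence it, assuming a release that carries this mon option (and keeping in mind that a size=1 pool really does lose data on any single OSD failure):

```shell
# Disable the "no replicas configured" health warning cluster-wide
ceph config set mon mon_warn_on_pool_no_redundancy false
```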