Hi - I recently upgraded to Pacific, and I am now getting an error connecting from my Windows 10 machine:
The error is handle_auth_bad_method. I tried a few combinations of cephx and none on the monitors, but I keep getting the same error.
The same config (with paths updated) and keyring works on my WSL instance running an old Luminous client (I can't seem to get a newer client installed there).
Do you have any suggestions on where to look?
Thanks,
Rob.
-----------------------------------------------------
PS C:\Program Files\Ceph\bin> .\ceph-dokan.exe --id rob -l Q
2021-05-14T12:19:58.172Eastern Daylight Time 5 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]
failed to fetch mon config (--no-mon-config to skip)
PS C:\Program Files\Ceph\bin> cat c:/ProgramData/ceph/ceph.client.rob.keyring
[client.rob]
key = <REDACTED>
caps mon = "allow rwx"
caps osd = "allow rwx"
PS C:\Program Files\Ceph\bin> cat C:\ProgramData\Ceph\ceph.conf
# minimal ceph.conf
[global]
log to stderr = true
; Uncomment the following in order to use the Windows Event Log
log to syslog = true
run dir = C:/ProgramData/ceph/out
crash dir = C:/ProgramData/ceph/out
; Use the following to change the cephfs client log level
debug client = 2
[global]
fsid = <redacted>
mon_host = [<redacted>]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
[client]
keyring = c:/ProgramData/ceph/ceph.client.rob.keyring
log file = C:/ProgramData/ceph/out/$name.$pid.log
admin socket = C:/ProgramData/ceph/out/$name.$pid.asok
Can someone point me to some good docs describing the dangers of using a large number of disks in a RAID5/RAID6? (Understandable for less techy people.)
Hi everyone,
I'm currently trying to setup the OSDs in a fresh cluster using cephadm.
The underlying devices are NVMe and I'm trying to provision 2 OSDs per device
with the following spec:
```
service_type: osd
service_id: all-nvmes
unmanaged: true
placement:
  label: osd
data_devices:
  all: true
osds_per_device: 2
```
Since `ceph orch apply` just creates the service but not the daemons when
`unmanaged: true`, is there a way to enforce the `osds_per_device` setting
when using `ceph orch daemon add osd`?
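For reference, what I'm doing right now looks roughly like this (the file, host and device names are just examples):
```
# apply the spec - with unmanaged: true this only records the service
ceph orch apply -i osd_spec.yaml

# then create the daemons by hand - but I see no way to pass osds_per_device here
ceph orch daemon add osd ceph-node-01:/dev/nvme0n1
```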
Thanks in advance
Hello everyone,
Given that BlueStore has been the default and more widely used
objectstore for quite some time, we would like to understand whether
we can consider deprecating FileStore in our next release, Quincy, and
removing it in the R release. There is also a proposal [0] to add a
health warning to report FileStore OSDs.
We discussed this topic in the Ceph Month session today [1] and there
were no objections from anybody on the call. I wanted to reach out to
the list to check if there are any concerns about this or any users
who will be impacted by this decision.
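In the meantime, if you want to check whether any of your OSDs are still on FileStore, something along these lines should work:
```
# count OSDs per objectstore backend (bluestore vs. filestore)
ceph osd count-metadata osd_objectstore

# or check an individual OSD, e.g. osd.0
ceph osd metadata 0 | grep osd_objectstore
```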
Thanks,
Neha
[0] https://github.com/ceph/ceph/pull/39440
[1] https://pad.ceph.com/p/ceph-month-june-2021
Today I visited Ceph's official site and found that the links to the
`resources` page seem to be missing.
https://ceph.io/en/
In addition, this page no longer exists.
https://ceph.io/resources/
Could you tell me where they were moved?
Thanks,
Satoru
Hello.
I'm looking for the proper way to set up NIC bonding for Ceph.
I was using the bonding driver with the default settings:
ad_select=stable (the default) and hash algorithm layer2.
With these settings the OSDs use only one port, and LACP is only useful
between different nodes, because the layer2 hash algorithm uses the MAC.
I've changed ad_select to bandwidth and both NICs are in use now, but the
layer2 hash still prevents using both NICs between two nodes (because
layer2 hashes only on the MAC).
People advise using layer2+3 for best performance, but it has no effect on
the OSDs because the MAC and IP are the same.
I've tried layer3+4 to split by port instead of MAC and it works, but I
don't know what the side effects will be, and my switch is layer2 only.
With "iperf -Parallel 2" I can now reach 19 Gbit on 2x 10 Gbit NICs.
I think there is no way to use both NICs without parallel streams on
different ports; if I use the same port, no matter how many processes,
I can only use one NIC at a time.
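For reference, the bond settings I'm testing now look roughly like this (interface and bond names are just examples):
```
# LACP bond with layer3+4 hashing and ad_select=bandwidth
ip link add bond0 type bond mode 802.3ad miimon 100 \
    xmit_hash_policy layer3+4 ad_select bandwidth

# enslave the two 10G ports (they must be down first) and bring everything up
ip link set eth0 down; ip link set eth0 master bond0
ip link set eth1 down; ip link set eth1 master bond0
ip link set bond0 up; ip link set eth0 up; ip link set eth1 up
```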
What settings are you using?
What is the best for ceph?
Hello,
I am setting up user quotas and I would like to enable the "check on raw"
setting for my users' quotas. I can't find any documentation on how to
change this setting in any of the Ceph documents. Do any of you know how to
change it? Possibly using radosgw-admin?
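For context, this is how I'm setting and enabling the quota itself (the uid and limits are just examples); I just don't see where "check on raw" fits in:
```
# set a user quota (size in bytes) and enable it
radosgw-admin quota set --quota-scope=user --uid=testuser --max-size=107374182400 --max-objects=1000000
radosgw-admin quota enable --quota-scope=user --uid=testuser

# the user_quota section of the user info includes a check_on_raw field,
# but I don't see an option to change it
radosgw-admin user info --uid=testuser
```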
Thanks in advance!
Jared Jacob
Dear All,
I have deployed the latest Ceph Pacific release in my lab and started to check out the new "stable" NFS Ganesha features. First of all, I'm a bit confused about which method to actually use to deploy the NFS cluster:
cephadm or `ceph nfs cluster create`?
I used "nfs cluster create" for now and noticed a minor problem in the docs.
https://docs.ceph.com/en/latest/cephfs/fs-nfs-exports/#cephfs-nfs
The command is stated as:
$ ceph nfs cluster create <clusterid> [<placement>] [--ingress --virtual-ip <ip>]
whereas it actually needs a type (cephfs) to be specified:
nfs cluster create <type> <clusterid> [<placement>] : Create an NFS Cluster
Also, I can't manage to use the --ingress and --virtual-ip parameters. Every time I try to use them I get this:
[root@cephboot~]# ceph nfs cluster create cephfs ec9e031a-cd10-11eb-a3c3-005056b7db1f --ingress --virtual-ip 192.168.9.199
Invalid command: Unexpected argument '--ingress'
nfs cluster create <type> <clusterid> [<placement>] : Create an NFS Cluster
Error EINVAL: invalid command
So I just deployed an NFS cluster without a VIP. Maybe I'm missing something?
What about this note in the docs:
>> From Pacific, the nfs mgr module must be enabled prior to use. <<
I can't find any info on how to enable it. Maybe this is already the case?
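I assume it would just be the usual mgr module commands, but I'm not sure whether "nfs cluster create" already takes care of this:
```
# check whether the nfs module is already enabled
ceph mgr module ls | grep -i nfs

# enable it if it isn't
ceph mgr module enable nfs
```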
ceph nfs cluster create cephfs ec9e031a-cd10-11eb-a3c3-005056b7db1f "cephnode01"
This seems to be working fine. I managed to connect a CentOS 7 VM and I can access the NFS export just fine. Great stuff.
For testing I tried to attach the same NFS export to a standalone ESXi 6.5 server. This also works, but its disk space is shown as 0 bytes.
I'm not sure if this is supported or if I'm missing something. I could not find any clear info in the docs, only some Reddit posts where users mentioned that they were able to use it with VMware.
Thanks and Best Regards,
Oliver
Hello Folks,
We are running the Ceph Octopus 15.2.13 release and would like to use the disk
prediction module. So far the issues we have faced are:
1. The Ceph documentation does not mention that
`ceph-mgr-diskprediction-local.noarch` needs to be installed.
2. Even after installing the needed package and restarting the mgr, the
module does not appear in the Ceph cluster. A detailed log is here:
https://gist.github.com/juztas/b687798ea97ef13e36d466f2d7b1470a
`ceph -s` shows [1].
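For reference, these are roughly the steps I'm following (package manager and unit names may differ on other setups):
```
# install the local disk prediction mgr plugin (RPM-based install here)
yum install -y ceph-mgr-diskprediction-local

# restart the mgr daemons so the new plugin can be loaded
systemctl restart ceph-mgr.target

# try to enable the module
ceph mgr module enable diskprediction_local
```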
Are you aware of this issue and are there any workarounds?
Thanks!
[1]
# ceph -s
  cluster:
    id:     12d9d70a-e993-464c-a6f8-4f674db35136
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum ceph-mon-cms-1,ceph-mon-cms-2,ceph-mon-cms-3 (age 2d)
    mgr: no daemons active (since 11m)
    mds: cephfs:1 {0=ceph-mds-cms-1=up:active} 1 up:standby
ceph health detail
HEALTH_WARN no active mgr
[WRN] MGR_DOWN: no active mgr
> but our experience so
> far has been a big improvement over the complexity of managing package
> dependencies across even just a handful of distros
Do you have some charts or docs that show this complexity problem? I have trouble understanding it.
That is very likely because my understanding of Ceph internals is limited. Take my view of the OSD daemon, for instance: it works with logical volumes for writing/reading data, and then there is OSD<->OSD/mon/mgr communication. What dependency hell is to be expected there?
> (Podman has been
> the only real culprit here, tbh, but I give them a partial pass as the
> tool is relatively new.)
Is it not better for the sake of stability, security and future support to choose something with a proven record?