Could it be a problem that I'm running a mix of x86 and ARM? I use
Vagrant/VirtualBox for the mons. Currently I only have two Odroid HC2
devices available.
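[Editorial note: a mixed-architecture cluster is easy to rule in or out, since each daemon reports its CPU architecture in `ceph osd metadata` / `ceph mon metadata`. Below is a minimal sketch of tallying those reports; the sample JSON is fabricated for illustration, standing in for real cluster output.]

```python
# Hedged sketch: `ceph osd metadata` / `ceph mon metadata` emit JSON with an
# "arch" field per daemon; counting those values shows whether the cluster
# mixes x86 and ARM. The sample below is made up for illustration.
import json
from collections import Counter

def arch_summary(metadata_json: str) -> Counter:
    """Count daemons per reported CPU architecture."""
    return Counter(d.get("arch", "unknown") for d in json.loads(metadata_json))

sample = json.dumps([
    {"hostname": "mon1",    "arch": "x86_64"},
    {"hostname": "odroid1", "arch": "armv7l"},
    {"hostname": "odroid2", "arch": "armv7l"},
])
print(arch_summary(sample))  # more than one key means a mixed cluster
```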
On 11.01.2021 at 23:48, Oliver Weinmann wrote:
> Hi again,
>
> it took me some time, but I figured out that on Ubuntu Focal a more
> recent version of Ceph (15.2.7) is available. So I gave it a try and
> replaced the ceph_argparse.py file, but it still gets stuck running this
> command:
>
> [2021-01-11 23:44:06,340][ceph_volume.process][INFO ] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c3a567be-08e0-4802-8b08-07d6891de485
>
> Any more clues?
>
>
> On 08.01.2021 at 10:03, kefu chai wrote:
>> On Fri, Jan 8, 2021 at 04:30, Oliver Weinmann <oliver.weinmann(a)me.com> wrote:
>>
>>> Ok, I replaced the whole ceph_argparse.py file with the patched one
>>> from GitHub. Instead of throwing an error, it now seems to be stuck
>>> forever. Or am I too impatient? I'm running
>>>
>> I don't think so. In a healthy cluster, the command should complete in
>> no more than one second. I just checked the revision history of
>> ceph_argparse.py; there have been a bunch of changes since the Nautilus
>> release. My guess is that the version in master might include some bits
>> that are not compatible with Nautilus. So I'd suggest cherry-picking
>> only the change in that PR and trying again.
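[Editorial note: for context, the "Exception: timed out" in the logs below comes from a run-in-a-worker-thread pattern: ceph_argparse.py executes the librados call in a thread and gives up when the join times out. A simplified sketch of that pattern, not the actual ceph_argparse.py code:]

```python
# Simplified sketch of the run_in_thread pattern behind the
# "Exception: timed out" error (not the real ceph_argparse.py code):
# run func in a daemon thread and raise if it does not finish in time.
import threading

def run_in_thread(func, *args, timeout=1.0):
    result = {}

    def target():
        result["value"] = func(*args)

    t = threading.Thread(target=target, daemon=True)
    t.start()
    t.join(timeout)
    if t.is_alive():  # the worker never returned -> same error path as in the logs
        raise Exception("timed out")
    return result["value"]
```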
>>
>>> Debian Buster, so this is not the latest Ceph release (Octopus) but
>>> Nautilus:
>>>
>>> root@odroidxu4:~# dpkg -l ceph
>>> Desired=Unknown/Install/Remove/Purge/Hold
>>> | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
>>> |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
>>> ||/ Name  Version            Architecture  Description
>>> +++-==============-=================-============-===================================
>>> ii  ceph  14.2.15-3~bpo10+1  armhf         distributed storage and file system
>>> On 07.01.2021 at 13:07, kefu chai wrote:
>>>
>>>
>>>
>>> On Thu, Jan 7, 2021 at 16:32, Oliver Weinmann <oliver.weinmann(a)me.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> thanks for the quick reply. I will test it. Do I have to recompile
>>>> ceph
>>>> in order to test it?
>>>>
>>> No, you just need to apply the change to ceph_argparse.py.
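[Editorial note: a cautious way to do that swap is to keep a rollback copy of the installed file first. A sketch, demonstrated here on scratch files; on a node, point TARGET at the installed /usr/lib/python3/dist-packages/ceph_argparse.py and SRC at the patched copy from the PR branch (both names are assumptions to adjust for your install):]

```shell
# Hedged sketch: replace the installed ceph_argparse.py while keeping a
# rollback copy. Demonstrated on scratch files; the dist-packages path and
# the name of the patched copy are assumptions.
set -e
demo=$(mktemp -d)
printf 'old version\n' > "$demo/ceph_argparse.py"          # stands in for the installed file
printf 'patched version\n' > "$demo/ceph_argparse.py.new"  # stands in for the patched copy
TARGET="$demo/ceph_argparse.py"
SRC="$demo/ceph_argparse.py.new"
cp -a "$TARGET" "$TARGET.orig"   # rollback copy first
cp "$SRC" "$TARGET"
grep -q 'patched' "$TARGET" && echo "swap ok"
```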
>>>
>>>
>>>> On 07.01.2021 at 02:13, kefu chai wrote:
>>>>
>>>>
>>>>
>>>> On Thursday, January 7, 2021, Oliver Weinmann <oliver.weinmann(a)me.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I have a similar, if not the same, issue. I run Armbian Buster on my
>>>>> Odroid HC2 (which is the same as an XU4) and I get the following
>>>>> error trying to build a cluster with ceph-ansible:
>>>>
>>>> We recently merged a fix for a similar issue. See
>>>> https://github.com/ceph/ceph/pull/38665. Could you give it a shot? I
>>>> will backport it to the LTS branches if it helps.
>>>>
>>>>
>>>>
>>>>> TASK [ceph-osd : use ceph-volume lvm batch to create bluestore osds] ***************************************************
>>>>> Wednesday 06 January 2021 21:46:44 +0000 (0:00:00.073)  0:02:01.697 *****
>>>>> fatal: [192.168.2.123]: FAILED! => changed=true
>>>>> cmd:
>>>>> - ceph-volume
>>>>> - --cluster
>>>>> - ceph
>>>>> - lvm
>>>>> - batch
>>>>> - --bluestore
>>>>> - --yes
>>>>> - /dev/sda
>>>>> delta: '0:00:02.979200'
>>>>> end: '2021-01-06 22:46:48.049074'
>>>>> msg: non-zero return code
>>>>> rc: 1
>>>>> start: '2021-01-06 22:46:45.069874'
>>>>> stderr: |-
>>>>> --> DEPRECATION NOTICE
>>>>> --> You are using the legacy automatic disk sorting behavior
>>>>> --> The Pacific release will change the default to --no-auto
>>>>> --> passed data devices: 1 physical, 0 LVM
>>>>> --> relative data size: 1.0
>>>>> Running command: /usr/bin/ceph-authtool --gen-print-key
>>>>> Running command: /usr/bin/ceph --cluster ceph --name
>>>>> client.bootstrap-osd --keyring
>>>>> /var/lib/ceph/bootstrap-osd/ceph.keyring -i
>>>>> - osd new 8854fc6d-d637-40a9-a1b1-b8e2eeee0afd
>>>>> stderr: Traceback (most recent call last):
>>>>> stderr: File "/usr/bin/ceph", line 1273, in <module>
>>>>> stderr: retval = main()
>>>>> stderr: File "/usr/bin/ceph", line 982, in main
>>>>> stderr: conffile=conffile)
>>>>> stderr: File "/usr/lib/python3/dist-packages/ceph_argparse.py", line 1320, in run_in_thread
>>>>> stderr: raise Exception("timed out")
>>>>> stderr: Exception: timed out
>>>>> Traceback (most recent call last):
>>>>>   File "/usr/sbin/ceph-volume", line 11, in <module>
>>>>>     load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 39, in __init__
>>>>>     self.main(self.argv)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
>>>>>     return f(*a, **kw)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 150, in main
>>>>>     terminal.dispatch(self.mapper, subcommand_args)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
>>>>>     instance.main()
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 42, in main
>>>>>     terminal.dispatch(self.mapper, self.argv)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
>>>>>     instance.main()
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
>>>>>     return func(*a, **kw)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/batch.py", line 415, in main
>>>>>     self._execute(plan)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/batch.py", line 434, in _execute
>>>>>     c.create(argparse.Namespace(**args))
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
>>>>>     return func(*a, **kw)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/create.py", line 26, in create
>>>>>     prepare_step.safe_prepare(args)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
>>>>>     self.prepare()
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
>>>>>     return func(*a, **kw)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 292, in prepare
>>>>>     self.osd_id = prepare_utils.create_id(osd_fsid, json.dumps(secrets), osd_id=self.args.osd_id)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 173, in create_id
>>>>>     raise RuntimeError('Unable to create a new OSD id')
>>>>> RuntimeError: Unable to create a new OSD id
>>>>> stderr_lines: <omitted>
>>>>> stdout: ''
>>>>> stdout_lines: <omitted>
>>>>> fatal: [odroidxu4]: FAILED! => changed=true
>>>>> cmd:
>>>>> - ceph-volume
>>>>> - --cluster
>>>>> - ceph
>>>>> - lvm
>>>>> - batch
>>>>> - --bluestore
>>>>> - --yes
>>>>> - /dev/sda
>>>>> delta: '0:00:03.510973'
>>>>> end: '2021-01-06 22:46:48.514102'
>>>>> msg: non-zero return code
>>>>> rc: 1
>>>>> start: '2021-01-06 22:46:45.003129'
>>>>> stderr: |-
>>>>> --> DEPRECATION NOTICE
>>>>> --> You are using the legacy automatic disk sorting behavior
>>>>> --> The Pacific release will change the default to --no-auto
>>>>> --> passed data devices: 1 physical, 0 LVM
>>>>> --> relative data size: 1.0
>>>>> Running command: /usr/bin/ceph-authtool --gen-print-key
>>>>> Running command: /usr/bin/ceph --cluster ceph --name
>>>>> client.bootstrap-osd --keyring
>>>>> /var/lib/ceph/bootstrap-osd/ceph.keyring -i
>>>>> - osd new 4e292c82-bb4d-4581-aead-46ff635fda69
>>>>> stderr: Traceback (most recent call last):
>>>>> stderr: File "/usr/bin/ceph", line 1273, in <module>
>>>>> stderr: retval = main()
>>>>> stderr: File "/usr/bin/ceph", line 982, in main
>>>>> stderr: conffile=conffile)
>>>>> stderr: File "/usr/lib/python3/dist-packages/ceph_argparse.py", line 1320, in run_in_thread
>>>>> stderr: raise Exception("timed out")
>>>>> stderr: Exception: timed out
>>>>> stderr: /build/ceph-Ti7FjJ/ceph-14.2.15/src/common/config.cc: In function 'void md_config_t::set_val_default(ConfigValues&, const ConfigTracker&, const string&, const string&)' thread b0e3a460 time 2021-01-06 22:46:48.357354
>>>>> stderr: /build/ceph-Ti7FjJ/ceph-14.2.15/src/common/config.cc: 259: FAILED ceph_assert(o)
>>>>> stderr: ceph version 14.2.15 (afdd217ae5fb1ed3f60e16bd62357ca58cc650e5) nautilus (stable)
>>>>> stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0xeb) [0xb18b26a4]
>>>>> stderr: 2: (ceph::__ceph_assert_fail(ceph::assert_data const&)+0xd) [0xb18b2802]
>>>>> stderr: 3: (md_config_t::set_val_default(ConfigValues&, ConfigTracker const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x69) [0xb195be0a]
>>>>> stderr: 4: (md_config_t::md_config_t(ConfigValues&, ConfigTracker const&, bool)+0x15d31) [0xb1972ac6]
>>>>> stderr: 5: (CephContext::CephContext(unsigned int, code_environment_t, int)+0x10ef) [0xb193d090]
>>>>> stderr: 6: (common_preinit(CephInitParameters const&, code_environment_t, int)+0x7d) [0xb1956af6]
>>>>> stderr: 7: (()+0x2046a) [0xb639546a]
>>>>> stderr: 8: (rados_create2()+0x55) [0xb639589e]
>>>>> Traceback (most recent call last):
>>>>>   File "/usr/sbin/ceph-volume", line 11, in <module>
>>>>>     load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 39, in __init__
>>>>>     self.main(self.argv)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 59, in newfunc
>>>>>     return f(*a, **kw)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/main.py", line 150, in main
>>>>>     terminal.dispatch(self.mapper, subcommand_args)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
>>>>>     instance.main()
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/main.py", line 42, in main
>>>>>     terminal.dispatch(self.mapper, self.argv)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/terminal.py", line 194, in dispatch
>>>>>     instance.main()
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
>>>>>     return func(*a, **kw)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/batch.py", line 415, in main
>>>>>     self._execute(plan)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/batch.py", line 434, in _execute
>>>>>     c.create(argparse.Namespace(**args))
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
>>>>>     return func(*a, **kw)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/create.py", line 26, in create
>>>>>     prepare_step.safe_prepare(args)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
>>>>>     self.prepare()
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/decorators.py", line 16, in is_root
>>>>>     return func(*a, **kw)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/devices/lvm/prepare.py", line 292, in prepare
>>>>>     self.osd_id = prepare_utils.create_id(osd_fsid, json.dumps(secrets), osd_id=self.args.osd_id)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_volume/util/prepare.py", line 173, in create_id
>>>>>     raise RuntimeError('Unable to create a new OSD id')
>>>>> RuntimeError: Unable to create a new OSD id
>>>>> stderr_lines: <omitted>
>>>>> stdout: ''
>>>>> stdout_lines: <omitted>
>>>>>
>>>>> Trying to run the failed command on one of the odroid nodes:
>>>>>
>>>>> root@odroidxu4:~# /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8854fc6d-d637-40a9-a1b1-b8e2eeee0afd
>>>>> Traceback (most recent call last):
>>>>>   File "/usr/bin/ceph", line 1273, in <module>
>>>>>     retval = main()
>>>>>   File "/usr/bin/ceph", line 982, in main
>>>>>     conffile=conffile)
>>>>>   File "/usr/lib/python3/dist-packages/ceph_argparse.py", line 1320, in run_in_thread
>>>>>     raise Exception("timed out")
>>>>> Exception: timed out
>>>>>
>>>>> Any clues?
>>>>>
>>>>> Best Regards,
>>>>>
>>>>> Oliver
>>>>> _______________________________________________
>>>>> ceph-users mailing list -- ceph-users(a)ceph.io
>>>>> To unsubscribe send an email to ceph-users-leave(a)ceph.io
>>>>>
>>>>
>>>> --
>>>> Regards
>>>> Kefu Chai
>>>
>>> --
>>> Regards
>>> Kefu Chai
>>
>> --
>> Regards
>> Kefu Chai