Running 14.2.4 (though the same issue was observed on 14.2.2), we have a problem with,
thankfully, a testing cluster, where all PGs are failing to peer and are
stuck in peering, unknown, stale, and similar states.
My working theory is that this is because the OSDs don't seem to be
utilizing msgr v2, as `ceph osd find osd.NN` only lists the v1 address in the
addrvec. This is in contrast to our working 14.2.4 clusters, where both v1
and v2 are listed.
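To illustrate what I mean (abbreviated, illustrative output with placeholder
addresses, not a paste from our cluster), on the working clusters
`ceph osd find osd.NN` returns both entries in the addrvec:
{
    "osd": 0,
    "addrs": {
        "addrvec": [
            { "type": "v2", "addr": "10.0.0.11:6802", "nonce": 1234 },
            { "type": "v1", "addr": "10.0.0.11:6803", "nonce": 1234 }
        ]
    },
    ...
}
On the broken cluster, only the "type": "v1" entry appears.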
Our monitors, per `ceph mon dump`, show each mon running v1 and v2 on the
default ports (3300/6789), and I am able to reach each of those ports on all
the mons from a few test OSD nodes.
The OSD logs are filled with `heartbeat_check: no reply from <IP> <OSD.XY> ever
on either front or back` messages.
I have attempted to modify the ceph.conf mon_host on the OSDs to use both
the standard comma-separated IP list and the new bracketed format, and then
restarted the OSD daemons on a number of OSDs, but it doesn't seem to affect
the addrvec.
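For reference, the two mon_host forms I tried look roughly like this (a sketch
with placeholder addresses, not our real ones), along with the Nautilus option
that controls whether daemons bind and advertise a v2 address:
# legacy comma-separated form (v1 implied):
mon_host = 10.0.0.1,10.0.0.2,10.0.0.3
# bracketed addrvec form, listing v2 and v1 explicitly:
mon_host = [v2:10.0.0.1:3300,v1:10.0.0.1:6789],[v2:10.0.0.2:3300,v1:10.0.0.2:6789],[v2:10.0.0.3:3300,v1:10.0.0.3:6789]
# ms_bind_msgr2 (default true) must be on for OSDs to advertise v2;
# checking it is a guess on my part, not a confirmed cause:
ceph config get osd.0 ms_bind_msgr2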
My goal is to get the OSDs working on v2 and see if they are able to
begin peering. How can I force the addrvec to update? Thanks.
Respectfully,
*Wes Dillingham*
wes(a)wesdillingham.com
Hello,
Some time ago I deployed a ceph cluster.
It works great.
Today I collected some statistics and found that the BlueFS utility is not working.
# ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-8
inferring bluefs devices from bluestore path
slot 1 /var/lib/ceph/osd/ceph-8/block -> /dev/dm-5
unable to open /var/lib/ceph/osd/ceph-8/block: (11) Resource temporarily unavailable
2019-11-11 15:03:30.665 7f4b9a427f00 -1 bdev(0x55d5b0310a80 /var/lib/ceph/osd/ceph-8/block) _lock flock failed on /var/lib/ceph/osd/ceph-8/block
2019-11-11 15:03:30.665 7f4b9a427f00 -1 bdev(0x55d5b0310a80 /var/lib/ceph/osd/ceph-8/block) open failed to lock /var/lib/ceph/osd/ceph-8/block: (11) Resource temporarily unavailable
As far as I understand, block.db and block.wal are missing. I don't know how that happened.
# ls -l /var/lib/ceph/osd/ceph-8
total 28
lrwxrwxrwx 1 ceph ceph 93 Nov 9 15:56 block -> /dev/ceph-55b8a53d-1740-402a-b6f4-09d4befdd564/osd-block-c5488db7-621a-490a-88a0-904c12e8b8ed
-rw------- 1 ceph ceph 37 Nov 9 15:56 ceph_fsid
-rw------- 1 ceph ceph 37 Nov 9 15:56 fsid
-rw------- 1 ceph ceph 55 Nov 9 15:56 keyring
-rw------- 1 ceph ceph 6 Nov 9 15:56 ready
-rw-r--r-- 1 ceph ceph 3 Nov 9 15:56 require_osd_release
-rw------- 1 ceph ceph 10 Nov 9 15:56 type
-rw------- 1 ceph ceph 2 Nov 9 15:56 whoami
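On reflection, the missing block.db and block.wal symlinks may simply mean the
OSD was deployed with everything collocated on a single device, and the flock
failure above may just be the running ceph-osd daemon holding the lock
(errno 11 is EAGAIN). A sketch of what I plan to try, assuming a brief
maintenance window:
# prevent rebalancing while the OSD is briefly down
ceph osd set noout
systemctl stop ceph-osd@8
ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-8
systemctl start ceph-osd@8
ceph osd unset noout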
I deployed the OSDs the standard way:
....
ceph-deploy osd create test-host1:/dev/sde
ceph-deploy osd create test-host2:/dev/sde
....
What should I do now, and is it necessary to repair it at all?
The cluster was assembled on Luminous and updated to Nautilus yesterday.
Hi all,
I also hit bug #24866 in my test environment. According to the logs, the last_clean_epoch in the affected OSD/PG is 17703, but the interval starts at 17895, so the OSD fails to start. There are some other OSDs in the same state.
2019-10-14 18:22:51.908 7f0a275f1700 -1 osd.21 pg_epoch: 18432 pg[18.51( v 18388'4 lc 18386'3 (0'0,18388'4] local-lis/les=18430/18431 n=1 ec=295/295 lis/c 18430/17702 les/c/f 18431/17703/0 18428/18430/18421) [11,21]/[11,21,20] r=1 lpr=18431 pi=[17895,18430)/3 crt=18388'4 lcod 0'0 unknown m=1 mbc={}] 18.51 past_intervals [17895,18430) start interval does not contain the required bound [17703,18430) start
The cause is that pg 18.51 went clean in epoch 17703, but 17895 was reported to the monitor.
I am using the latest stable version of Mimic (13.2.6).
Any idea how to fix it? Is there any way to bypass this check or to fix the reported epoch number?
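One workaround I am considering (a sketch with my osd/pg ids; I am not sure it
is the right fix for #24866) is to export and then remove the local copy of the
PG so the OSD can start, provided the other replicas are healthy:
systemctl stop ceph-osd@21
# keep a backup of the PG before removing it
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-21 --pgid 18.51 --op export --file /root/pg18.51.export
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-21 --pgid 18.51 --op remove --force
systemctl start ceph-osd@21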
Thanks in advance.
Best regards,
Huseyin Cotuk
hcotuk(a)gmail.com
The documentation states:
https://docs.ceph.com/docs/mimic/rados/operations/monitoring/
The POOLS section of the output provides a list of pools and the notional usage of each pool. The output from this section DOES NOT reflect replicas, clones or snapshots. For example, if you store an object with 1MB of data, the notional usage will be 1MB, but the actual usage may be 2MB or more depending on the number of replicas, clones and snapshots.
However, in our case we are clearly seeing the USED field multiply the total object size by the number of replicas.
[root@blackmirror ~]# ceph df
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 80 TiB 34 TiB 46 TiB 46 TiB 58.10
TOTAL 80 TiB 34 TiB 46 TiB 46 TiB 58.10
POOLS:
POOL ID STORED OBJECTS USED %USED MAX AVAIL
one 2 15 TiB 4.05M 46 TiB 68.32 7.2 TiB
bench 5 250 MiB 67 250 MiB 0 22 TiB
[root@blackmirror ~]# rbd du -p one
NAME PROVISIONED USED
...
<TOTAL> 20 TiB 15 TiB
This is causing several apps (including the Ceph dashboard) to display inaccurate percentages, because they calculate the total pool capacity as USED + MAX AVAIL, which in this case yields 53.2 TiB, which is way off. 7.2 TiB is about 13% of that, so we receive alarms, and this has been bugging us for quite some time now.
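For what it's worth, the numbers line up with USED including replication
(assuming a 3x replicated pool, which is an inference from the figures, not
something the output states directly):
STORED x replicas ~= USED:  15 TiB x 3 = 45 TiB   (46 TiB reported)
%USED ~= USED / (USED + MAX AVAIL x 3) = 46 / (46 + 21.6) ~= 0.68   (68.32 reported)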
We are running the Mimic version of Ceph (13.2.6), and I would like to know
the proper way to replace a defective OSD disk that has its DB and WAL on a
separate SSD drive shared with 9 other OSDs. More specifically, the failing
disk for osd.327 is /dev/sdai, and its wal/db are on /dev/sdc, which is
partitioned into 10 LVs holding the wal/db for osd.320-329.
When I deployed it, I used pv/vg/lvcreate commands to make a VG named ssd1 and
LVs named db320, db321, and so on. Then I used the ceph-deploy command from an
admin node (`ceph-deploy osd create --block-db=ssd1/db327 --data=/dev/sdai
<node>`). My main question is what to do about the separate wal/db data, as
this page (https://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/)
does not seem to address the issue.
1) Do I need to erase the wal/db data on the ssd1/db327 Logical Volume? If
so, how should I do that?
2) Assuming 1) is taken care of (and the "old" OSD is destroyed and the
"bad" hard drive has been physically replaced with a new one), does this
command look correct? `ceph-volume lvm create --osd-id 327 --bluestore
--data /dev/sdai --block.db ssd1/db327`
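To spell out the full sequence I have in mind, run on the OSD node (a sketch
based on my reading of the docs, not a verified procedure; it assumes osd.327
has already been marked out and drained):
ceph osd destroy 327 --yes-i-really-mean-it
# question 1: wipe the old wal/db on the LV so it can be reused
ceph-volume lvm zap ssd1/db327
# wipe the replacement data disk as well
ceph-volume lvm zap /dev/sdai
# question 2: recreate the OSD with the same id
ceph-volume lvm create --osd-id 327 --bluestore --data /dev/sdai --block.db ssd1/db327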
*Mami Hayashida*
*Research Computing Associate*
Univ. of Kentucky ITS Research Computing Infrastructure