Hey all!
rbd export images/ec8a7ff8-6609-4b7d-8bdd-fadcf3b7973e /root/foo.img
DOES NOT produce the target file.
No matter whether I use the --pool/--image form or the positional form above, the target file is not there.
A progress bar shows up and prints the percentage, and the command finishes with exit code 0:
[root@controller-0 mnt]# rbd export --pool=images db8290c3-93fd-4a4e-ad71-7c131070ad6f /mnt/cirros2.img
Exporting image: 100% complete...done.
[root@controller-0 mnt]# echo $?
0
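Right after the export I would expect to be able to see and sanity-check the file, roughly like this (a sketch of the check I have in mind; comparing against the size reported by rbd info is just an illustration):
[root@controller-0 mnt]# ls -lh /mnt/cirros2.img
[root@controller-0 mnt]# rbd info --pool=images db8290c3-93fd-4a4e-ad71-7c131070ad6f | grep size
but the file simply is not there.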
Any idea what's going on here?
It's ceph version 12.2.10 (177915764b752804194937482a39e95e0ca3de94)
luminous (stable)
Any hints will be much appreciated
best regards
Piotr
Hi,
On a daily basis, one of my monitors goes down
[root@cube ~]# ceph health detail
HEALTH_WARN 1 failed cephadm daemon(s); 1/3 mons down, quorum rhel1.robeckert.us,story
[WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
daemon mon.cube on cube.robeckert.us is in error state
[WRN] MON_DOWN: 1/3 mons down, quorum rhel1.robeckert.us,story
mon.cube (rank 2) addr [v2:192.168.2.142:3300/0,v1:192.168.2.142:6789/0] is down (out of quorum)
[root@cube ~]# ceph --version
ceph version 15.2.11 (e3523634d9c2227df9af89a4eac33d16738c49cb) octopus (stable)
I have a script that copies the mon data from another server; after that the monitor restarts and runs well for a while.
It is always the same monitor, and when I look at the logs the only thing I really see is the cephadm log showing it down:
2021-04-28 10:07:26,173 DEBUG Running command: /usr/bin/podman --version
2021-04-28 10:07:26,217 DEBUG /usr/bin/podman: stdout podman version 2.2.1
2021-04-28 10:07:26,222 DEBUG Running command: /usr/bin/podman inspect --format {{.Id}},{{.Config.Image}},{{.Image}},{{.Created}},{{index .Config.Labels "io.ceph.version"}} ceph-fe3a7cb0-69ca-11eb-8d45-c86000d08867-osd.2
2021-04-28 10:07:26,326 DEBUG /usr/bin/podman: stdout fab17e5242eb4875e266df19ca89b596a2f2b1d470273a99ff71da2ae81eeb3c,docker.io/ceph/ceph:v15,5b724076c58f97872fc2f7701e8405ec809047d71528f79da452188daf2af72e,2021-04-26 17:13:15.54183375 -0400 EDT,
2021-04-28 10:07:26,328 DEBUG Running command: systemctl is-enabled ceph-fe3a7cb0-69ca-11eb-8d45-c86000d08867@mon.cube
2021-04-28 10:07:26,334 DEBUG systemctl: stdout enabled
2021-04-28 10:07:26,335 DEBUG Running command: systemctl is-active ceph-fe3a7cb0-69ca-11eb-8d45-c86000d08867@mon.cube
2021-04-28 10:07:26,340 DEBUG systemctl: stdout failed
2021-04-28 10:07:26,340 DEBUG Running command: /usr/bin/podman --version
2021-04-28 10:07:26,395 DEBUG /usr/bin/podman: stdout podman version 2.2.1
2021-04-28 10:07:26,402 DEBUG Running command: /usr/bin/podman inspect --format {{.Id}},{{.Config.Image}},{{.Image}},{{.Created}},{{index .Config.Labels "io.ceph.version"}} ceph-fe3a7cb0-69ca-11eb-8d45-c86000d08867-mon.cube
2021-04-28 10:07:26,526 DEBUG /usr/bin/podman: stdout 04e7c673cbacf5160427b0c3eb2f0948b2f15d02c58bd1d9dd14f975a84cfc6f,docker.io/ceph/ceph:v15,5b724076c58f97872fc2f7701e8405ec809047d71528f79da452188daf2af72e,2021-04-28 08:54:57.614847512 -0400 EDT,
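For what it's worth, the same checks cephadm performs can be run by hand (a sketch; the fsid is taken from the log above, and I am assuming cephadm can infer the fsid for the logs subcommand since there is only one cluster on the host):
systemctl status ceph-fe3a7cb0-69ca-11eb-8d45-c86000d08867@mon.cube
journalctl -u ceph-fe3a7cb0-69ca-11eb-8d45-c86000d08867@mon.cube --since "1 hour ago"
cephadm logs --name mon.cube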
I don't know if it matters, but this server is an AMD 3600XT, while my other two servers, which have had no issues, are Intel-based.
The root file system was originally on an SSD and I switched to NVMe, which should rule out controller or drive issues. (I didn't see anything in dmesg anyway.)
If someone could point me in the right direction on where to troubleshoot next, I would appreciate it.
Thanks,
Rob Eckert
Hello,
I cannot flatten an image; it always restarts with:
root@sm-node1.in.illusion.hu:~# rbd flatten vm-hdd/vm-104-disk-1
Image flatten: 28% complete...2021-04-29 10:50:27.373 7ff7caffd700 -1
librbd::operation::FlattenRequest: 0x7ff7c4009db0 should_complete:
encountered error: (85) Interrupted system call should be restarted
Image flatten: 26% complete...2021-04-29 10:50:33.053 7ff7caffd700 -1
librbd::operation::FlattenRequest: 0x7ff7c4008fc0 should_complete:
encountered error: (85) Interrupted system call should be restarted
Image flatten: 0% complete...2021-04-29 10:50:34.829 7ff7caffd700 -1
librbd::operation::FlattenRequest: 0x7ff7c445b470 should_complete:
encountered error: (85) Interrupted system call should be restarted
Image flatten: 39% complete...2021-04-29 10:50:42.081 7ff7caffd700 -1
librbd::operation::FlattenRequest: 0x7ff7c40324e0 should_complete:
encountered error: (85) Interrupted system call should be restarted
Image flatten: 0% complete...2021-04-29 10:50:43.897 7ff7caffd700 -1
librbd::operation::FlattenRequest: 0x7ff7c4018890 should_complete:
encountered error: (85) Interrupted system call should be restarted
Image flatten: 42% complete...2021-04-29 10:51:07.813 7ff7caffd700 -1
librbd::operation::FlattenRequest: 0x7ff7c402fe80 should_complete:
encountered error: (85) Interrupted system call should be restarted
Image flatten: 42% complete...2021-04-29 10:51:29.372 7ff7caffd700 -1
librbd::operation::FlattenRequest: 0x7ff7c40017c0 should_complete:
encountered error: (85) Interrupted system call should be restarted
root@sm-node1.in.illusion.hu:~# uname -a
Linux sm-node1.in.illusion.hu 5.4.106-1-pve #1 SMP PVE 5.4.106-1 (Fri,
19 Mar 2021 11:08:47 +0100) x86_64 GNU/Linux
root@sm-node1.in.illusion.hu:~# dpkg -l |grep ceph
ii ceph                  14.2.20-pve1  amd64  distributed storage and file system
ii ceph-base             14.2.20-pve1  amd64  common ceph daemon libraries and management tools
ii ceph-common           14.2.20-pve1  amd64  common utilities to mount and interact with a ceph storage cluster
ii ceph-fuse             14.2.20-pve1  amd64  FUSE-based client for the Ceph distributed file system
ii ceph-mds              14.2.20-pve1  amd64  metadata server for the ceph distributed file system
ii ceph-mgr              14.2.20-pve1  amd64  manager for the ceph distributed storage system
ii ceph-mgr-dashboard    14.2.20-pve1  all    dashboard plugin for ceph-mgr
ii ceph-mon              14.2.20-pve1  amd64  monitor server for the ceph storage system
ii ceph-osd              14.2.20-pve1  amd64  OSD server for the ceph storage system
ii libcephfs2            14.2.20-pve1  amd64  Ceph distributed file system client library
ii python-ceph-argparse  14.2.20-pve1  all    Python 2 utility libraries for Ceph CLI
ii python-cephfs         14.2.20-pve1  amd64  Python 2 libraries for the Ceph libcephfs library
root@sm-node1.in.illusion.hu:~# dpkg -l |grep rbd
ii librbd1               14.2.20-pve1  amd64  RADOS block device client library
ii python-rbd            14.2.20-pve1  amd64  Python 2 libraries for the Ceph librbd library
root@sm-node1.in.illusion.hu:~# modinfo rbd
filename: /lib/modules/5.4.106-1-pve/kernel/drivers/block/rbd.ko
license: GPL
description: RADOS Block Device (RBD) driver
author: Jeff Garzik <jeff(a)garzik.org>
author: Yehuda Sadeh <yehuda(a)hq.newdream.net>
author: Sage Weil <sage(a)newdream.net>
author: Alex Elder <elder(a)inktank.com>
srcversion: 7BA6FEE20249E416B2D09AB
depends: libceph
retpoline: Y
intree: Y
name: rbd
vermagic: 5.4.106-1-pve SMP mod_unload modversions
parm: single_major:Use a single major number for all rbd devices (default: true) (bool)
Please give me advice: how can I produce more information to catch this?
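So far the only idea I have is to re-run the flatten with the client-side librbd debug level raised and capture stderr (a sketch; the debug levels and log destination are just what I would try):
rbd flatten vm-hdd/vm-104-disk-1 --debug-rbd 20 --debug-ms 1 2> /tmp/flatten-debug.log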
Thank you,
i.
Hello everyone,
I am running ceph version 15.2.8 on Ubuntu servers. I am using bluestore OSDs with data on HDD and the DB and WAL on SSD drives. Each SSD has been partitioned so that it holds 5 DBs and 5 WALs. The SSDs were prepared a while back, probably when I was running ceph 13.x. I have been gradually adding new OSD drives as needed. Recently I tried to add more OSDs, which to my surprise failed. Previously I had no issues adding drives, but it seems that I can no longer do that with version 15.2.x.
Here is what I get:
root@arh-ibstorage4-ib /home/andrei ceph-volume lvm prepare --bluestore --data /dev/sds --block.db /dev/ssd3/db5 --block.wal /dev/ssd3/wal5
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6aeef34b-0724-4d20-a10b-197cab23e24d
Running command: /usr/sbin/vgcreate --force --yes ceph-1c7cef26-327a-4785-96b3-dcb1b97e8e2f /dev/sds
stderr: WARNING: PV /dev/sdp in VG ceph-bc7587b5-0112-4097-8c9f-4442e8ea5645 is using an old PV header, modify the VG to update.
stderr: WARNING: PV /dev/sdo in VG ceph-33eda27c-53ed-493e-87a8-39e1862da809 is using an old PV header, modify the VG to update.
stderr: WARNING: PV /dev/sdn in VG ssd2 is using an old PV header, modify the VG to update.
stderr: WARNING: PV /dev/sdm in VG ssd1 is using an old PV header, modify the VG to update.
stderr: WARNING: PV /dev/sdj in VG ceph-9d8da00c-f6b9-473f-b499-fa60d74b46c5 is using an old PV header, modify the VG to update.
stderr: WARNING: PV /dev/sdi in VG ceph-1603149e-1e50-4b86-a360-1372f4243603 is using an old PV header, modify the VG to update.
stderr: WARNING: PV /dev/sdh in VG ceph-a5f4416c-8e69-4a66-a884-1d1229785acb is using an old PV header, modify the VG to update.
stderr: WARNING: PV /dev/sde in VG ceph-aac71121-e308-4e25-ae95-ca51bca7aaff is using an old PV header, modify the VG to update.
stderr: WARNING: PV /dev/sdd in VG ceph-1e216580-c01b-42c5-a10f-293674a55c4c is using an old PV header, modify the VG to update.
stderr: WARNING: PV /dev/sdc in VG ceph-630f7716-3d05-41bb-92c9-25402e9bb264 is using an old PV header, modify the VG to update.
stderr: WARNING: PV /dev/sdb in VG ceph-a549c28d-9b06-46d5-8ba3-3bd99ff54f57 is using an old PV header, modify the VG to update.
stderr: WARNING: PV /dev/sda in VG ceph-70943bd0-de71-4651-a73d-c61bc624755f is using an old PV header, modify the VG to update.
stdout: Physical volume "/dev/sds" successfully created.
stdout: Volume group "ceph-1c7cef26-327a-4785-96b3-dcb1b97e8e2f" successfully created
Running command: /usr/sbin/lvcreate --yes -l 3814911 -n osd-block-6aeef34b-0724-4d20-a10b-197cab23e24d ceph-1c7cef26-327a-4785-96b3-dcb1b97e8e2f
stdout: Logical volume "osd-block-6aeef34b-0724-4d20-a10b-197cab23e24d" created.
--> blkid could not detect a PARTUUID for device: /dev/ssd3/wal5
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.15 --yes-i-really-mean-it
stderr: 2021-04-28T20:05:52.290+0100 7f76bbfa9700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
2021-04-28T20:05:52.290+0100 7f76bbfa9700 -1 AuthRegistry(0x7f76b4058e60) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
stderr: purged osd.15
--> RuntimeError: unable to use device
I have tried to find a solution but wasn't able to resolve the problem. I am sure that I've previously added new volumes using the above command.
lvdisplay shows:
--- Logical volume ---
LV Path /dev/ssd3/wal5
LV Name wal5
VG Name ssd3
LV UUID WPQJs9-olAj-ACbU-qnEM-6ytu-aLMv-hAABYy
LV Write Access read/write
LV Creation host, time arh-ibstorage4-ib, 2020-07-29 23:45:17 +0100
LV Status available
# open 0
LV Size 1.00 GiB
Current LE 256
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6
--- Logical volume ---
LV Path /dev/ssd3/db5
LV Name db5
VG Name ssd3
LV UUID FVT2Mm-a00P-eCoQ-FZAf-AulX-4q9r-PaDTC6
LV Write Access read/write
LV Creation host, time arh-ibstorage4-ib, 2020-07-29 23:46:01 +0100
LV Status available
# open 0
LV Size 177.00 GiB
Current LE 45312
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:11
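To see what ceph-volume is stumbling over, the underlying checks can be reproduced by hand, and the db/wal LVs can also be passed in vg/lv notation instead of the /dev path (a sketch; whether the notation makes any difference here is only an assumption on my part, and the VG left behind by the failed attempt would need to be cleaned up first):
blkid /dev/ssd3/wal5
lvs -o lv_name,vg_name,lv_uuid,lv_tags ssd3
ceph-volume lvm prepare --bluestore --data /dev/sds --block.db ssd3/db5 --block.wal ssd3/wal5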
How do I resolve the errors and create the new osd?
Cheers
Andrei
Hello,
Last week I upgraded my production cluster to Pacific. The cluster was healthy until a few hours ago.
A scrub that ran 4 hours ago left the cluster in an inconsistent state. I then issued the command ceph pg repair 7.182 to try to repair it, but the PG ended up active+recovery_unfound+degraded.
All OSDs are up and all run bluestore, with a replication factor of 3 and a minimum size of 2. I have restarted all OSDs, but that did not help.
Any recommendations on how to recover the cluster safely?
I have attached the result of ceph pg 7.182 query.
ceph health detail
HEALTH_ERR 1/2459601 objects unfound (0.000%); Possible data damage: 1 pg
recovery_unfound; Degraded data redundancy: 3/7045706 objects degraded
(0.000%), 1 pg degraded
[WRN] OBJECT_UNFOUND: 1/2459601 objects unfound (0.000%)
pg 7.182 has 1 unfound objects
[ERR] PG_DAMAGED: Possible data damage: 1 pg recovery_unfound
pg 7.182 is active+recovery_unfound+degraded, acting [15,1,11], 1
unfound
[WRN] PG_DEGRADED: Degraded data redundancy: 3/7045706 objects degraded
(0.000%), 1 pg degraded
pg 7.182 is active+recovery_unfound+degraded, acting [15,1,11], 1
unfound
ceph -w
cluster:
id: 4b9f6959-fead-4ada-ac58-de5d7b149286
health: HEALTH_ERR
1/2459586 objects unfound (0.000%)
Possible data damage: 1 pg recovery_unfound
Degraded data redundancy: 3/7045661 objects degraded (0.000%),
1 pg degraded
services:
mon: 3 daemons, quorum mon-a,mon-b,mon-c (age 38m)
mgr: mon-a(active, since 38m)
osd: 46 osds: 46 up (since 25m), 46 in (since 3w)
data:
pools: 4 pools, 705 pgs
objects: 2.46M objects, 9.1 TiB
usage: 24 TiB used, 95 TiB / 119 TiB avail
pgs: 3/7045661 objects degraded (0.000%)
1/2459586 objects unfound (0.000%)
701 active+clean
3 active+clean+scrubbing+deep
1 active+recovery_unfound+degraded
ceph pg 7.182 list_unfound
{
"num_missing": 1,
"num_unfound": 1,
"objects": [
{
"oid": {
"oid": "rbd_data.2f18f2a67fad72.000000000002021a",
"key": "",
"snapid": -2,
"hash": 3951004034,
"max": 0,
"pool": 7,
"namespace": ""
},
"need": "184249'118613008",
"have": "0'0",
"flags": "none",
"clean_regions": "clean_offsets: [], clean_omap: 0, new_object:
1",
"locations": []
}
],
"state": "NotRecovering",
"available_might_have_unfound": true,
"might_have_unfound": [],
"more": false
}
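For reference, this is how I have been checking which OSDs the PG thinks might hold the object, plus the documented last resort that I would rather avoid because it gives up on the object (a sketch; I have not run mark_unfound_lost):
ceph pg 7.182 query | jq '.recovery_state'
ceph pg 7.182 mark_unfound_lost revert    # or 'delete'; last resort only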
Hi,
I have a bucket that has versioning enabled and I am trying to remove
it. This is not possible, because the bucket is not empty:
mark@tuxis:~$ aws --endpoint=https://nl.dadup.eu s3api delete-bucket
--bucket syslog_tuxis_net
An error occurred (BucketNotEmpty) when calling the DeleteBucket
operation: Unknown
It took me a while to determine that `curator` had enabled bucket versioning, but once I had, I removed all versions.
However, when trying to delete the bucket, it still doesn't work because there are DeleteMarkers, e.g.:
{
"Owner": {
"DisplayName": "Syslog Backup",
"ID": "DB0220$syslog_backup"
},
"Key": "incompatible-snapshots",
"VersionId": "noeJtvBpV5HINGQTJeXEq5mzlzsWneg",
"IsLatest": true,
"LastModified": "2021-03-22T16:35:18.298Z"
},
I cannot download that object:
mark@tuxis:~$ aws --endpoint=https://nl.dadup.eu s3api get-object
--bucket syslog_tuxis_net --key incompatible-snapshots /tmp/foobar
An error occurred (NoSuchKey) when calling the GetObject operation:
Unknown
Nor can I delete the object:
mark@tuxis:~$ aws --endpoint=https://nl.dadup.eu s3api delete-object
--bucket syslog_tuxis_net --key incompatible-snapshots
So, according to
https://ceph.io/planet/on-ceph-rgw-s3-object-versioning/#on-delete-marker
I should be able to delete that DeleteMarker with this command:
mark@tuxis:~$ aws --endpoint=https://nl.dadup.eu s3api delete-object
--bucket syslog_tuxis_net --key incompatible-snapshots --version-id
noeJtvBpV5HINGQTJeXEq5mzlzsWneg
But that command does not give any output, and it does not delete the
marker either.
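Listing the versions afterwards (a sketch of the check; output omitted) still shows the delete marker:
aws --endpoint=https://nl.dadup.eu s3api list-object-versions --bucket syslog_tuxis_net --prefix incompatible-snapshots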
So I'm stuck with that bucket which I would like to remove without
abusing radosgw-admin.
This cluster is running 12.2.13 with civetweb RGWs behind a haproxy setup. All is working fine, except for this versioned bucket. Can anyone point me in the right direction to remove this bucket as a normal user?
Thanks!
--
Mark Schouten | Tuxis B.V.
KvK: 74698818 | http://www.tuxis.nl/
T: +31 318 200208 | info(a)tuxis.nl
Hello,
We've got some issues when uploading S3 objects with a double slash (//) in the name, and were wondering whether anyone else has observed this when uploading objects to radosgw.
When connecting to the cluster to upload an object with the key 'test/my//bucket', the request returns a 403 (SignatureDoesNotMatch) error.
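For reference, a minimal reproduction looks roughly like this (endpoint, bucket and file names are placeholders):
aws --endpoint-url=https://our-rgw.example.com s3api put-object --bucket test-bucket --key 'test/my//bucket' --body ./testfile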
Wondering if anyone else has observed this behavior and has any workarounds for double slashes in object key names.
- Gavin
hello all,
I am facing an incident with one of my very important RBD volumes, holding 5 TB of data and managed by OpenStack.
I was about to increase the volume size live, but I unintentionally shrank the volume by running a wrong "virsh qemu-monitor-command" command. I then realized it and expanded it again, but obviously I lost my data. Can you help me or give me hints on how I can recover the data of this RBD volume?
Unfortunately, I don't have any backup for this part of the data, and it's really important (I know I made a big mistake). Also, I can't stop the cluster, since it's under heavy production load.
It seems qemu has been using the old set of APIs, which allows shrinking by default without any warning, even in the latest version. (All other standard ways of resizing a volume do not allow shrinking.)
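For comparison, the rbd CLI itself refuses to shrink unless it is explicitly forced (pool and image names here are placeholders):
rbd resize --size 6T volumes/volume-xxxxxxxx                  # grow: allowed
rbd resize --size 4T volumes/volume-xxxxxxxx --allow-shrink   # shrink: must be explicit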
* Ceph: https://sourcegraph.com/github.com/ceph/ceph@luminous/-/blob/src/librbd/lib…
* Qemu: https://sourcegraph.com/github.com/qemu/qemu/-/blob/block/rbd.c#L832
* Libvirt: https://sourcegraph.com/github.com/libvirt/libvirt/-/blob/src/storage/stora…
Hi,
I have 3 pools, which I use exclusively for RBD images. Two of them are mirrored and one is erasure-coded. Today I received a warning that a PG in the erasure-coded pool was inconsistent, and I then ran "ceph pg repair <pg>". After that, the entire cluster became extremely slow, to the point that no VM works.
This is the output of "ceph -s":
# ceph -s
cluster:
id: 4ea72929-6f9e-453a-8cd5-bb0712f6b874
health: HEALTH_ERR
1 scrub errors
Possible data damage: 1 pg inconsistent, 1 pg repair
services:
mon: 2 daemons, quorum cmonitor,cmonitor2
mgr: cmonitor (active), standbys: cmonitor2
osd: 87 osds: 87 up, 87 in
tcmu-runner: 10 active daemons
data:
pools: 7 pools, 3072 pgs
objects: 30.00M objects, 113 TiB
usage: 304 TiB used, 218 TiB / 523 TiB avail
pgs: 3063 active+clean
8 active+clean+scrubbing+deep
1 active+clean+scrubbing+deep+inconsistent+repair
io:
client: 24 MiB/s rd, 23 MiB/s wr, 629 op/s rd, 519 op/s wr
cache: 5.9 MiB/s flush, 35 MiB/s evict, 9 op/s promote
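One thing I am considering, though I am not sure it is the right call (a sketch), is to pause further scrubbing while the repair finishes and to watch the latency of the OSDs involved:
ceph osd set noscrub
ceph osd set nodeep-scrub
ceph osd perf        # look for OSDs with unusually high latency
# once things settle:
ceph osd unset noscrub
ceph osd unset nodeep-scrub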
Does anyone have any idea how to make it available again?
Regards,
Gesiel
Hello,
Thank you very much for picking up the question, and sorry for the late response.
Yes, we are sending the credentials in cleartext, but over HTTPS; how should they be sent if not like this?
Also somewhat connected to this issue: when we subscribe a bucket to a topic backed by a non-ACL Kafka topic, any operation (PUT or DELETE) simply blocks and does not return, not even with an error response.
$ s3cmd -c ~/.s3cfg put --add-header x-amz-meta-foo:bar3 certificate.pdf s3://vig-test
WARNING: certificate.pdf: Owner groupname not known. Storing GID=1354917867 instead.
WARNING: Module python-magic is not available. Guessing MIME types based on file extensions.
upload: 'certificate.pdf' -> 's3://vig-test/certificate.pdf' [1 of 1]
65536 of 91224 71% in 0s 291.17 KB/s
Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo(a)agoda.com
---------------------------------------------------
From: Yuval Lifshitz <ylifshit(a)redhat.com>
Sent: Wednesday, April 21, 2021 10:34 PM
To: Szabo, Istvan (Agoda) <Istvan.Szabo(a)agoda.com>
Cc: ceph-users(a)ceph.io
Subject: Re: [ceph-users] Getting `InvalidInput` when trying to create a notification topic with Kafka endpoint
Hi Istvan,
Can you please share the relevant part for the radosgw log, indicating which input was invalid?
The only way I managed to reproduce that error is by sending the request to a non-HTTPS radosgw (which does not seem to be your case). In such a case it replies with "InvalidInput" because we are trying to send user/password in cleartext.
I used curl, similarly to what you did against a vstart cluster based off of master: https://paste.sh/SQ_8IrB5#BxBYbh1kTh15n7OKvjB5wEOM
Yuval
On Wed, Apr 21, 2021 at 11:23 AM Szabo, Istvan (Agoda) <Istvan.Szabo(a)agoda.com> wrote:
Hi Ceph Users,
Here is the latest request I tried, but it is still not working:
curl -v -H 'Date: Tue, 20 Apr 2021 16:05:47 +0000' -H 'Authorization: AWS <accessid>:<signature>' -L -H 'content-type: application/x-www-form-urlencoded' -k -X POST https://servername -d Action=CreateTopic&Name=test-ceph-event-replication&Attributes.entry.8.key=push-endpoint&Attributes.entry.8.value=kafka://<username>:<password>@servername2:9093&Attributes.entry.5.key=use-ssl&Attributes.entry.5.value=true
And the response I get is still InvalidInput:
<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidInput</Code><RequestId>tx000000000000007993081-00607efbdd-1c7e96b-hkg</RequestId><HostId>1c7e96b-hkg-data</HostId></Error>
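(For completeness, here is the same request with the POST body quoted as a single argument, in case the mail formatting stripped the quotes; unquoted, the shell would interpret the '&' characters and only Action=CreateTopic would reach the gateway:)
curl -v -H 'Date: Tue, 20 Apr 2021 16:05:47 +0000' -H 'Authorization: AWS <accessid>:<signature>' -L -H 'content-type: application/x-www-form-urlencoded' -k -X POST https://servername -d 'Action=CreateTopic&Name=test-ceph-event-replication&Attributes.entry.8.key=push-endpoint&Attributes.entry.8.value=kafka://<username>:<password>@servername2:9093&Attributes.entry.5.key=use-ssl&Attributes.entry.5.value=true'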
Can someone please help with this?
Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo(a)agoda.com
---------------------------------------------------