Hello all,
is it allowed to configure and activate a cache tier for a pool that contains RBD images which are in use?
The documentation (https://docs.ceph.com/docs/mimic/rados/operations/cache-tiering/) doesn't say anything about this, but we have experienced errors with our VMs consuming Ceph RBD volumes.
Are there any other preparation measures we should take before issuing the set-overlay command?
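For clarity, this is the kind of sequence we understand from the cache-tiering documentation; the pool names (rbd-pool, cache-pool) and the sizing value below are placeholders, not our actual configuration:

  ceph osd tier add rbd-pool cache-pool
  ceph osd tier cache-mode cache-pool writeback
  # hit set and target size parameters are required for writeback caching
  ceph osd pool set cache-pool hit_set_type bloom
  ceph osd pool set cache-pool target_max_bytes 1099511627776
  # the step in question: redirect client traffic to the cache tier
  ceph osd tier set-overlay rbd-pool cache-pool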
Kind regards,
Laszlo
Hi,
I got involved in a case where a Nautilus cluster was experiencing MDSes
asserting with the backtrace mentioned in this ticket:
https://tracker.ceph.com/issues/36349
ceph_assert(follows >= realm->get_newest_seq());
In the end we needed to use this tooling to get one MDS running again:
https://docs.ceph.com/docs/master/cephfs/disaster-recovery-experts/#using-a…
The root cause seems to be that this Nautilus cluster was running
multi-MDS with a very high number of CephFS snapshots.
After a couple of days of scanning (scan_links seems to be single-threaded!)
we finally got a single MDS running again with a usable CephFS filesystem.
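For the record, the slow part was the scan_links pass from that page; a minimal sketch of that one step (it is only part of the documented disaster-recovery procedure, not the whole run):

  # rebuild dentry linkage and snapshot bookkeeping; appears to be single-threaded
  cephfs-data-scan scan_links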
At the moment chowns() are running to get all the permissions set back
to what they should be.
The question now outstanding: is it safe to enable multi-MDS again on a
CephFS filesystem which still has this many snapshots and is currently
running with a single MDS?
New snapshots are disabled at the moment, so those won't be created.
In addition: how safe is it to remove snapshots, as this will result in
metadata updates?
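Concretely, the two operations in question would look something like this; the filesystem name, mount path and snapshot name are placeholders:

  # re-enable multi-MDS by allowing more than one active rank
  ceph fs set cephfs max_mds 3
  # remove a snapshot from a mounted client by deleting its .snap entry
  rmdir /mnt/cephfs/somedir/.snap/mysnap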
Thanks
Wido
Hello.
I have a relatively new Ceph installation that is only running CephFS at the moment. We are seeing intermittent issues where "ceph -s" reports "MDS report slow requests"; sometimes the MDSes crash and take a while to recover/replay, or we have to manually restart an MDS service to get the state back to HEALTH_OK.
Is there any documentation for recommended configuration?
Here is our cluster setup:
35 total nodes, 88 cores, 512 GB RAM, 100 Gb network
2 CephFS data pools, 1 is all SSD, the other is NVMe
3 active MDS, 1 pinned to the NVMe pool/dir, 1 pinned to another large directory, and the third has no pinning
2 standby MDS
ceph config dump:
mds advanced mds_beacon_grace 60.000000
mds basic mds_cache_memory_limit 68719476736
mds advanced mds_cache_trim_threshold 65536
mds advanced mds_recall_max_decay_rate 2.000000
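For reference, equivalent commands to apply the values above, plus the directory pinning we use, would be roughly the following (the directory path is a placeholder):

  ceph config set mds mds_beacon_grace 60
  ceph config set mds mds_cache_memory_limit 68719476736
  # pin a directory subtree to MDS rank 0 from a client mount
  setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/nvme-dir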
Please let me know if more info is required.
Thanks!
This is to announce the retirement of the v13.2.x Mimic stable release
series; there will be no further backport releases in the
Mimic series. Any further patches to the mimic branch will have to be
tested by the developer submitting the patches and approved by the tech
lead of the respective component before merging, to keep the branch stable.
The last release of Mimic was v13.2.10, released in April 2020. This is
in keeping with the policy of two active stable releases and a 24-month support cycle,
which is documented at
https://docs.ceph.com/docs/master/releases/general/#lifetime-of-stable-rele…
Users are requested to upgrade to Nautilus or Octopus.
For the official blog post link please refer to
https://ceph.io/releases/mimic-is-retired/
--
Abhishek Lekshmanan
SUSE Software Solutions Germany GmbH
GF: Felix Imendörffer, HRB 36809 (AG Nürnberg)
Hi all,
I am trying to move an encrypted file that was uploaded with an SSE-C key inside an S3 bucket, so that I can rename the file within the same S3 bucket, using AWS CLI version 2.0.24, but I keep getting the error message shown below. This error only happens when I use move or copy on an encrypted object where the source and destination are the same bucket. When I copy an encrypted object from the bucket to a local machine, and vice versa, it works just fine.
Command:
aws s3 cp --endpoint=https://store-test-one.ddns.net --sse-c AES256 --sse-c-key 12345678901234567890123456789012 s3://one-disk/systemds7.txt s3://one-disk/systemds8.txt
Error Message:
copy failed: s3://one-disk/systemds7.txt to s3://one-disk/systemds8.txt An error occurred (NotImplemented) when calling the CopyObject operation: Unknown
Does anyone have any ideas why it fails when using move/copy on an encrypted object with an SSE-C key where the source and destination are the same bucket?
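For what it's worth, my reading of the AWS CLI documentation is that a server-side copy of an SSE-C object also needs the source object's key supplied via the copy-source options, roughly like this (same placeholder key as above), though I am not sure whether that is related to the NotImplemented error:

  aws s3 cp --endpoint=https://store-test-one.ddns.net \
    --sse-c-copy-source AES256 --sse-c-copy-source-key 12345678901234567890123456789012 \
    --sse-c AES256 --sse-c-key 12345678901234567890123456789012 \
    s3://one-disk/systemds7.txt s3://one-disk/systemds8.txt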
Thanks
Hi everyone,
After triggering the command radosgw-admin sync error list, Ceph returns the list of errors for each shard in the system, e.g.:
{
    "shard_id": 30,
    "entries": [
        {
            "id": "1_1592308047.968415_451.1",
            "section": "data",
            "name": "portal-images:76fc5fe2-9f89-4419-b611-ab275000b358.405220.1:8",
            "timestamp": "2020-06-16T11:47:27.968415Z",
            "info": {
                "source_zone": "30bae889-dc13-4957-a536-028394095356",
                "error_code": 5,
                "message": "failed to sync bucket instance: (5) Input/output error"
            }
        }
    ]
}
For more detail, I triggered the command radosgw-admin data sync status --shard-id=30 --source-zone=dc-02:
{
    "shard_id": 30,
    "marker": {
        "status": "full-sync",
        "marker": "",
        "next_step_marker": "",
        "total_entries": 0,
        "pos": 0,
        "timestamp": "0.000000"
    },
    "pending_buckets": [],
    "recovering_buckets": []
}
It seems like there is no problem with this shard so far…
Can anyone help me with what I have to do to remove the errors above?
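My understanding so far is that these entries stay until the error log is trimmed; would something along these lines be the right way to clear them once the underlying issue is resolved (I have not run this yet)?

  # trim the replication error log shown by "sync error list"
  radosgw-admin sync error trim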
Many thanks!
--
Nghia Viet Tran (Mr)
mgm technology partners Vietnam Co. Ltd
7 Phan Châu Trinh
Đà Nẵng, Vietnam
+84 935905659
nghia.viet.tran@mgm-tp.com
www.mgm-tp.com (https://www.mgm-tp.com/en/)
Visit us on LinkedIn (https://www.linkedin.com/company/mgm-technology-partners-vietnam-co-ltd) and Facebook (https://www.facebook.com/mgmTechnologyPartnersVietnam)!
Innovation Implemented.
General Director: Frank Müller
Registered office: 7 Pasteur, Hải Châu 1, Hải Châu, Đà Nẵng
MST/Tax 0401703955
Hi all,
I tried to use cephadm as a non-root user; it works, until I try to install a new OSD.
I get this error message:
Error EINVAL: Traceback (most recent call last):
File "/usr/share/ceph/mgr/mgr_module.py", line 1167, in _handle_command
return self.handle_command(inbuf, cmd)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 113, in handle_command
return dispatch[cmd['prefix']].call(self, cmd, inbuf)
File "/usr/share/ceph/mgr/mgr_module.py", line 311, in call
return self.func(mgr, **kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 75, in <lambda>
wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 66, in wrapper
return func(*args, **kwargs)
File "/usr/share/ceph/mgr/orchestrator/module.py", line 715, in _daemon_add_osd
raise_if_exception(completion)
File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 633, in raise_if_exception
raise e
RuntimeError: cephadm exited with an error code: 1, stderr:Failed to execute command: sudo /usr/bin/cephadm --image docker.io/ceph/ceph:v15.2.4 ceph-volume --fsid 1234abcd --config-json - -- lvm prepare --bluestore --data /dev/sdb --no-systemd
Does anyone have a hint on how I can solve this problem?
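If it is relevant, this is the kind of sudoers entry I understand the non-root mode expects for the SSH user ('cephuser' here is a placeholder for the actual account name):

  # /etc/sudoers.d/cephuser -- passwordless sudo so cephadm can run commands as root
  cephuser ALL=(root) NOPASSWD:ALL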
Thanks,
Michael