Hi, cephers:
What's the purpose of using LogEvent with empty metablob?
For example, in a link/unlink operation across two active MDSes, when
the slave receives OP_FINISH it writes an ESlaveUpdate::OP_COMMIT to
its journal and then sends OP_COMMITTED to the master. When the master
receives OP_COMMITTED it writes an ECommitted to its journal and then
allows the previously logged journal entries to be trimmed.
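To make sure I'm describing the flow correctly, here it is as a rough Python sketch (my own simplified model, not actual MDS code; the function names are mine):

```python
# Toy model of the cross-MDS commit flow described above:
# slave journals ESlaveUpdate::OP_COMMIT, replies with OP_COMMITTED,
# master journals ECommitted and may then trim earlier entries.
master_journal, slave_journal = [], []

def master_handle_committed():
    master_journal.append("ECommitted")
    # only now may the master's earlier journal entries
    # for this update be trimmed

def slave_handle_finish():
    slave_journal.append("ESlaveUpdate::OP_COMMIT")
    master_handle_committed()  # stands in for sending OP_COMMITTED

slave_handle_finish()
print(slave_journal, master_journal)
```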
Why are these two logevents necessary?
I guess they were originally added for crash scenarios, but to me they
do not seem necessary. For example, if a crash happens, then after the
failed MDS is brought up again, the master will resend OP_FINISH to
the slave during the resolve stage, and things will continue as
expected.
Could anyone shed some light on this?
Many thanks!
Hi,
On a healthy Nautilus cluster (version 14.2.9) on CentOS 7, I am trying to follow the upgrade procedure to the containerized Octopus setup with cephadm:
* https://docs.ceph.com/docs/octopus/cephadm/adoption/
Every step went fine until I wanted to adopt the OSDs; then I get an error. Does anybody have an idea what my problem could be?
```
~# cephadm adopt --style legacy --name osd.0
INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
INFO:cephadm:objectstore_type is bluestore
INFO:cephadm:Stopping old systemd unit ceph-osd@0...
INFO:cephadm:Disabling old systemd unit ceph-osd@0...
INFO:cephadm:Moving data...
Traceback (most recent call last):
  File "/usr/sbin/cephadm", line 4282, in <module>
    r = args.func()
  File "/usr/sbin/cephadm", line 972, in _default_image
    return func()
  File "/usr/sbin/cephadm", line 2916, in command_adopt
    command_adopt_ceph(daemon_type, daemon_id, fsid);
  File "/usr/sbin/cephadm", line 2979, in command_adopt_ceph
    os.rmdir(data_dir_src)
OSError: [Errno 39] Directory not empty: '//var/lib/ceph/osd/ceph-0'
```
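For what it's worth, `os.rmdir` only removes empty directories; errno 39 (ENOTEMPTY) means something was still left inside `/var/lib/ceph/osd/ceph-0` after cephadm moved the data. A stand-alone reproduction of that failure mode:

```python
import errno
import os
import tempfile

# os.rmdir refuses to delete a non-empty directory, which is the
# failure cephadm hit on /var/lib/ceph/osd/ceph-0.
d = tempfile.mkdtemp()
open(os.path.join(d, "leftover"), "w").close()

try:
    os.rmdir(d)
    raise AssertionError("unexpectedly removed non-empty dir")
except OSError as e:
    assert e.errno == errno.ENOTEMPTY  # Errno 39 on Linux
    print("rmdir failed as expected:", e)
```

So before retrying the adoption it may be worth checking what is still inside that directory (e.g. `ls -la /var/lib/ceph/osd/ceph-0`) and moving any leftover files out of the way.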
Yours,
bbk
We're glad to announce the availability of the ninth, and very likely
the last, stable release in the Ceph Mimic series. This release fixes
bugs across all components and also contains an RGW security fix. We
recommend that all Mimic users upgrade to this version.
We thank everyone who made this release possible.
Notable Changes
---------------
* CVE-2020-1760: Fixed XSS due to RGW GetObject header-splitting
* The configuration value `osd_calc_pg_upmaps_max_stddev` used for upmap
balancing has been removed. Instead use the mgr balancer config
`upmap_max_deviation` which now is an integer number of PGs of deviation
from the target PGs per OSD. This can be set with a command like
`ceph config set mgr mgr/balancer/upmap_max_deviation 2`. The default
`upmap_max_deviation` is 1. There are situations where crush rules
would not allow a pool to ever have completely balanced PGs. For example, if
crush requires 1 replica on each of 3 racks, but there are fewer OSDs in 1 of
the racks. In those cases, the configuration value can be increased.
* The `cephfs-data-scan scan_links` command now automatically repairs the
  inotables and the snaptable.
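To make the `upmap_max_deviation` semantics above concrete, here is a small numeric illustration (my own sketch, not the mgr balancer's actual code): with a deviation of N PGs, an OSD counts as balanced while its PG count is within N of its target.

```python
# Illustration only (not balancer code): with upmap_max_deviation = 2,
# an OSD whose target is 100 PGs counts as balanced anywhere in 98..102.
upmap_max_deviation = 2
target_pgs = 100

def is_balanced(actual_pgs):
    return abs(actual_pgs - target_pgs) <= upmap_max_deviation

print([is_balanced(n) for n in (97, 98, 100, 102, 103)])
# -> [False, True, True, True, False]
```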
For the full changelog please refer to the official release blog entry
at https://ceph.io/releases/v13-2-9-mimic-released
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-13.2.9.tar.gz
* For packages, see
http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 58a2a9b31fd08d8bb3089fce0e312331502ff945
--
Abhishek Lekshmanan
SUSE Software Solutions Germany GmbH
GF: Felix Imendörffer
Hi
Thanks for the quick response.
To be honest my cluster is getting full because of that trash and I am
at the point where I have to do the removal manually ;/.
Kind regards / Pozdrawiam,
Katarzyna Myrek
On Thu, 16 Apr 2020 at 13:09, EDH - Manuel Rios
<mriosfer(a)easydatahost.com> wrote:
>
> Hi,
>
> From my experience, `orphans find` has not worked for several releases now, and the command should be re-coded or deprecated because it does not run to completion.
>
> In our case it loops over the generated shards until the RGW daemon crashes.
>
> I am also interested in this thread; in our case `orphans find` takes more than 24 hours to start looping over the shards, but it never gets past shard 0 or 1.
>
> The Ceph RGW devs should provide a workaround script or a new tool to maintain our RGW clusters, because with the recent bugs every RGW cluster has accumulated a ton of trash, wasting resources and money.
>
> And manual cleaning is neither trivial nor easy.
>
> Waiting for more info,
>
> Manuel
>
>
> -----Original Message-----
> From: Katarzyna Myrek <katarzyna(a)myrek.pl>
> Sent: Thursday, 16 April 2020 12:38
> To: ceph-users(a)ceph.io
> Subject: [ceph-users] RGW and the orphans
>
> Hi
>
> Is there any new way to find and remove orphans from RGW pools on Nautilus? I have found information suggesting that "orphans find" is now deprecated.
>
> I can see that I have tons of orphans in one of our clusters. I was wondering how to safely remove them, i.e. how to make sure that they really are orphans.
> Does anyone have a good method for that?
>
> My cluster mostly has orphans from multipart uploads.
>
>
> Kind regards / Pozdrawiam,
> Katarzyna Myrek
Hey folks,
I keep getting Ceph health warnings about clients failing to respond to cache pressure. They always refer to sessions from Ganesha exports. I've read all the threads regarding this issue, but none of my changes resolved it. What I've done so far:
Ganesha.conf:
MDCACHE {
Dir_Chunk = 0;
NParts = 1;
Cache_Size = 1;
}
Attr_Expiration_Time = 0 in every export
mds_cache_memory_limit = 17179869184 on MDS Servers
I even set "client_oc = false" on the Ganesha server, but this does not seem to be applied.
My setup: Ceph version 14.2.8 on all servers and clients, with one active MDS; Ganesha 2.8.3 runs on a dedicated server.
ceph daemon mds.<active_mds> dump_mempools (filtered out the empty pools):
"mempool": {
    "by_pool": {
        "bloom_filter": {
            "items": 1414723,
            "bytes": 1414723
        },
        "buffer_anon": {
            "items": 180992,
            "bytes": 2404306271
        },
        "buffer_meta": {
            "items": 178660,
            "bytes": 15722080
        },
        "osdmap": {
            "items": 4121,
            "bytes": 75912
        },
        "mds_co": {
            "items": 221728924,
            "bytes": 16320868177
        }
    },
    "total": {
        "items": 223507420,
        "bytes": 18742387163
    }
}
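Plugging the numbers above together (a quick sanity check; I am assuming `mds_co` is the MDS cache pool), the cache sits at roughly 95% of the configured 16 GiB limit, which would explain why the MDS keeps asking clients to release capabilities:

```python
# Numbers copied from the config and dump_mempools output above.
mds_cache_memory_limit = 17179869184   # as configured on the MDS servers
mds_co_bytes = 16320868177             # "mds_co" pool, bytes

print(mds_cache_memory_limit == 16 * 2**30)             # limit is 16 GiB
print(round(mds_co_bytes / mds_cache_memory_limit, 2))  # -> 0.95
```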
Any hints on how to resolve this issue are welcome. If more information is needed, I am glad to provide it.
Regards,
Felix
-------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt
-------------------------------------------------------------------------------------
Hi
Is there any new way to find and remove orphans from RGW pools on
Nautilus? I have found information suggesting that "orphans find" is
now deprecated.
I can see that I have tons of orphans in one of our clusters. I was
wondering how to safely remove them, i.e. how to make sure that they
really are orphans.
Does anyone have a good method for that?
My cluster mostly has orphans from multipart uploads.
Kind regards / Pozdrawiam,
Katarzyna Myrek
Hello All,
Reading the documentation, I created a multisite setup with the
Luminous version. I would like to know whether it syncs in one
direction only.
Using s3cmd, if I put a file into a bucket in the primary zone, I can
see the file in the same bucket in the secondary zone.
If I put a file into the bucket in the secondary zone, I cannot see
the file in the primary.
Is this the correct behaviour?
Thanks & Regards
Ignazio