Hi all,
I am trying to move an encrypted file that was uploaded with an SSE-C key within an S3 bucket, so that I can rename the file in the same bucket, using AWS CLI version 2.0.24, but I keep getting the error message shown below. The error only occurs when I move or copy an encrypted object where the source and destination are the same bucket. When I copy an encrypted object from the bucket to a local machine, and vice versa, it works just fine.
Command:
aws s3 cp --endpoint=https://store-test-one.ddns.net --sse-c AES256 --sse-c-key 12345678901234567890123456789012 s3://one-disk/systemds7.txt s3://one-disk/systemds8.txt
Error Message:
copy failed: s3://one-disk/systemds7.txt to s3://one-disk/systemds8.txt An error occurred (NotImplemented) when calling the CopyObject operation: Unknown
Does anyone have any ideas why move/copy fails on an object encrypted with an SSE-C key when the source and destination are the same bucket?
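For reference, my understanding is that a server-side copy of an object that was itself uploaded with SSE-C also has to supply the copy-source key; the sketch below is roughly what I have in mind (the --sse-c-copy-source/--sse-c-copy-source-key options are my assumption about what might be missing, and the key is just a placeholder):

aws s3 cp --endpoint=https://store-test-one.ddns.net \
    --sse-c AES256 --sse-c-key <32-byte-key> \
    --sse-c-copy-source AES256 --sse-c-copy-source-key <32-byte-key> \
    s3://one-disk/systemds7.txt s3://one-disk/systemds8.txt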
Thanks
Hi everyone,
After triggering radosgw-admin sync error list, Ceph returns a list of errors for each shard in the system, e.g.:
{
    "shard_id": 30,
    "entries": [
        {
            "id": "1_1592308047.968415_451.1",
            "section": "data",
            "name": "portal-images:76fc5fe2-9f89-4419-b611-ab275000b358.405220.1:8",
            "timestamp": "2020-06-16T11:47:27.968415Z",
            "info": {
                "source_zone": "30bae889-dc13-4957-a536-028394095356",
                "error_code": 5,
                "message": "failed to sync bucket instance: (5) Input/output error"
            }
        }
    ]
}
For further detail, I ran radosgw-admin data sync status --shard-id=30 --source-zone=dc-02:
{
    "shard_id": 30,
    "marker": {
        "status": "full-sync",
        "marker": "",
        "next_step_marker": "",
        "total_entries": 0,
        "pos": 0,
        "timestamp": "0.000000"
    },
    "pending_buckets": [],
    "recovering_buckets": []
}
It seems there is no problem with this shard so far…
Can anyone tell me what I have to do to clear the errors above?
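For reference, I did find the sync error trim subcommand, but I am not sure whether trimming the log alone is the right fix or whether the underlying I/O error has to be resolved first; roughly what I have in mind (the shard id is taken from the output above, the bucket sync re-run is my assumption):

# trim the recorded sync errors for one shard
radosgw-admin sync error trim --shard-id=30
# and trigger another sync attempt for the affected bucket
radosgw-admin bucket sync run --bucket=portal-images --source-zone=dc-02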
Many thanks!
--
Nghia Viet Tran (Mr)
mgm technology partners Vietnam Co. Ltd
7 Phan Châu Trinh
Đà Nẵng, Vietnam
+84 935905659
nghia.viet.tran@mgm-tp.com
www.mgm-tp.com
Hi all,
I tried to use cephadm as a non-root user. It works until I try to add a new OSD.
I get this error message:
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1167, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 113, in handle_command
    return dispatch[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 311, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 75, in <lambda>
    wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 66, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/module.py", line 715, in _daemon_add_osd
    raise_if_exception(completion)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 633, in raise_if_exception
    raise e
RuntimeError: cephadm exited with an error code: 1, stderr:Failed to execute command: sudo /usr/bin/cephadm --image docker.io/ceph/ceph:v15.2.4 ceph-volume --fsid 1234abcd --config-json - -- lvm prepare --bluestore --data /dev/sdb --no-systemd
Does anyone have a hint on how I can solve this problem?
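From the stderr it looks like it is the sudo call to /usr/bin/cephadm on the OSD host that fails. My working assumption is that the non-root deployment user needs passwordless sudo for cephadm on every host, along the lines of the sudoers entry below (the user name cephadmin is only a placeholder for whatever user I configured):

# /etc/sudoers.d/cephadm -- placeholder user name, my assumption of what is required
cephadmin ALL=(root) NOPASSWD: /usr/bin/cephadm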
Thanks,
Michael
Has anyone seen this error on a new Ceph 15.2.4 cluster managed with cephadm?
Module 'cephadm' has failed: auth get failed: failed to find client.crash.ceph0-ote in keyring retval:
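In case the client.crash.ceph0-ote entry is simply missing from the cluster's auth database, I assume it could be recreated manually with something like the following (caps copied from what the crash module normally uses, not verified on this cluster):

ceph auth get-or-create client.crash.ceph0-ote mon 'profile crash' mgr 'profile crash'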
Hi List
This is a rephrase of an earlier question that puzzles me. I took a disk out of service on Nautilus 14.2.8 with 'ceph osd crush reweight osd.111 0'. I expected that PGs would mainly go to OSDs on other nodes, but to my surprise most PGs ended up on OSDs on the same node. Here is an overview of the number of PGs that got sent to another OSD. The node that holds osd.111 also has OSDs 108-116.
NR PGs  OSD_ID
     1      61
     1      83
     1      86
     3     108
     4     109
     5     110
     2     112
     5     113
     7     114
     5     115
     2     116
As you can see, only 3 PGs were sent to OSDs on other nodes; the rest were sent to OSDs on the same node :-O. There are 81 OSDs in the same room as osd.111. Since a crush reweight to 0 also decreases the node's weight, I would have expected the PGs to mostly go to other nodes. Can someone explain this behaviour?
All pools use crush_rule 1 with min_size = 2 and size = 3; the rule is:
"rule_id": 1,
"rule_name": "hdd",
"ruleset": 1,
"type": 1,
"min_size": 2,
"max_size": 3,
"steps": [
{
"op": "take",
"item": -31,
"item_name": "DC3"
},
{
"op": "choose_firstn",
"num": 0,
"type": "room"
},
{
"op": "chooseleaf_firstn",
"num": 1,
"type": "host"
},
]
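To double-check how this rule actually maps PGs, I assume the compiled map can be tested offline with crushtool, roughly like this (pool size 3 as above, output path arbitrary):

ceph osd getcrushmap -o /tmp/crushmap
crushtool -i /tmp/crushmap --test --rule 1 --num-rep 3 --show-mappings
crushtool -i /tmp/crushmap --test --rule 1 --num-rep 3 --show-utilization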
I cannot attach the complete osd tree, but the structure is:
ID   CLASS  WEIGHT      TYPE NAME              STATUS  REWEIGHT  PRI-AFF
-31         1275.35522  root DC3
-32          600.69037      room az-a
-39          133.14589          rack rack_W26
-27           66.57289              host st3g3psm2
108  hdd       7.39699                  osd.108    up   1.00000  1.00000
109  hdd       7.39699                  osd.109    up   1.00000  1.00000
110  hdd       7.39699                  osd.110    up   1.00000  1.00000
111  hdd       7.39699                  osd.111    up   1.00000  1.00000
112  hdd       7.39699                  osd.112    up   1.00000  1.00000
113  hdd       7.39699                  osd.113    up   1.00000  1.00000
114  hdd       7.39699                  osd.114    up   1.00000  1.00000
115  hdd       7.39699                  osd.115    up   1.00000  1.00000
116  hdd       7.39699                  osd.116    up   1.00000  1.00000
There are 3 rooms, 13 racks, 19 hosts, 172 OSDs, 3968 PGs and 10 pools. As you can see there are racks in the tree, but these are not taken into consideration in crush rule 1. The cluster is not yet fully in use, hence the PG-to-OSD ratio is still low. We expect more pools to be added in the near future.
Marcel
I need help with adding a node when installing Ceph with cephadm.
When I run ceph orch host add ceph2, I get:
Error ENOENT: New host ceph2 (ceph2) failed check: ['Traceback (most recent call last):',
Please help me fix it.
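For context, my understanding of the documented steps for adding a host is roughly the following, though I may be missing something (host name ceph2 as in my setup; the key path and root login are my assumptions):

# copy the cluster's public SSH key to the new host, then add it
ceph cephadm get-pub-key > ~/ceph.pub
ssh-copy-id -f -i ~/ceph.pub root@ceph2
ceph orch host add ceph2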
Thanks & Best Regards
David
Hi,
I am trying to profile the number of invocations of a particular function in the Ceph source code. I have instrumented the code with timing functions.
Can someone please share the script for compiling and running the Ceph source code? I am struggling with it. That would be a great help!
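For what it is worth, the steps I have pieced together so far from the repository's own helper scripts are roughly the following, but I am not sure this is the recommended way (the vstart flags are my guess):

git clone https://github.com/ceph/ceph.git
cd ceph
git submodule update --init --recursive
./install-deps.sh        # install build dependencies
./do_cmake.sh            # configure a debug build in ./build
cd build && ninja        # compile
../src/vstart.sh -n -d   # start a local development cluster from the build dir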
BR
Bobby !