Hi,
Currently, I am trying to create a CNAME record pointing to an S3 website, for example: s3.example.com => s3.example.com.s3-website.myceph.com. This way, my subdomain s3. will also have HTTPS.
But only HTTP works. If I go to https://s3.example.com, it shows a bucket listing with the metadata of index.html instead of its content:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Name>s3.example.com</Name>
<Prefix/>
<MaxKeys>1000</MaxKeys>
<IsTruncated>false</IsTruncated>
<Contents>
<Key>index.html</Key>
<LastModified>2023-08-24T10:03:14.046Z</LastModified>
<ETag>"8e26caf000875221bf89d95f7f244927"</ETag>
<Size>295</Size>
<StorageClass>STANDARD</StorageClass>
<Owner>
<ID>d92ac19d934a4e9b90e7707372c64996</ID>
<DisplayName>foo(a)example.com</DisplayName>
</Owner>
<Type>Normal</Type>
</Contents>
<Marker/>
</ListBucketResult>
Here is my rgw configuration:
rgw_resolve_cname = true
rgw_enable_static_website = true
rgw_dns_s3website_name = ss-website.example.com
rgw_trust_forwarded_https = true
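For comparison, my understanding of the documented pattern is to serve the website API on its own endpoint, separate from the plain S3 API, so that requests arriving via the CNAME hit the website handler. A rough ceph.conf sketch of that split (section names and hostnames are placeholders, not my actual setup):

```ini
# Sketch of a split-endpoint setup: one RGW instance serves the S3 API,
# a second serves static websites. Hostnames here are placeholders.
[client.rgw.api]
rgw_dns_name = s3.myceph.com
rgw_enable_static_website = false

[client.rgw.website]
rgw_enable_static_website = true
rgw_resolve_cname = true
rgw_dns_s3website_name = s3-website.myceph.com
```

With this split, a request is only rendered as a website when its Host header resolves under rgw_dns_s3website_name, so the TLS-terminating proxy must pass the original Host header through.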
So how can I make HTTPS serve the content of index.html (not its metadata)?
Thanks in advance.
Is there going to be another Pacific point release (16.2.14) in the
pipeline?
- Yes, 16.2.14 is going through QA right now. See
https://www.spinics.net/lists/ceph-users/msg78528.html for updates.
Need pacific backport for https://tracker.ceph.com/issues/59478
- Laura will check on this, although a Pacific backport is unlikely due
to incompatibilities from the scrub backend refactoring.
There are inconsistencies with the `ceph config dump` normal vs. json
output. A fix has been proposed in https://tracker.ceph.com/issues/62379.
Question for users: Will this change break any existing automation?
See the tracker for more details, and reach out to @Sridhar Seshasayee
<sseshasa(a)redhat.com> with any questions.
--
Laura Flores
She/Her/Hers
Software Engineer, Ceph Storage <https://ceph.io>
Chicago, IL
lflores(a)ibm.com | lflores(a)redhat.com <lflores(a)redhat.com>
M: +17087388804
Hi folks,
Is it possible to transform object content through object classes? For example, I'd like such a transformer to change the content of an object when it is read and when it is written. That way, I could potentially encrypt object content in storage without making Ceph/OSD do the encryption/decryption; it could be taken care of by the object class itself.
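For illustration only (real object classes are C++ plugins built against the cls SDK, not Python, and the names below are made up), the read/write transform idea boils down to something like this, with XOR standing in for a real cipher:

```python
# Conceptual sketch of a read/write transform, as an object class could
# apply it. XOR is its own inverse, so one function both "encrypts" and
# "decrypts"; a real implementation would use a proper cipher.
def transform(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

store = {}  # stands in for the OSD's object store

def write_object(name: str, data: bytes, key: bytes) -> None:
    store[name] = transform(data, key)  # only ciphertext is stored

def read_object(name: str, key: bytes) -> bytes:
    return transform(store[name], key)  # plaintext returned to the client
```

The point of doing it in an object class would be that the transform runs on the OSD at read/write time, invisible to the client.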
Thanks,
Yixin
Hey guys,
I'm trying to figure out what's happening to my backup cluster that
often grinds to a halt when CephFS automatically removes snapshots.
Almost all OSDs go to 100% CPU, Ceph complains about slow ops, and
CephFS stops doing client I/O.
I'm graphing the cumulative value of snaptrimq_len, and it slowly
decreases over time. One night it takes an hour, but on other days,
like today, my cluster has been down for almost 20 hours, and I
think we're halfway. The funny thing is that in both cases the
snaptrimq_len value initially rises to about the same value, around
3000, and then slowly decreases; my guess is that the number of
objects that need to be trimmed varies hugely from day to day.
Is there a way to show the size of cephfs snapshots, or get the number
of objects or bytes that need snaptrimming? Perhaps I can graph that
and see where the differences are.
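For graphing, each PG reports its own snaptrimq_len in `ceph pg dump -f json` (field names as I understand them on recent releases; worth double-checking on 17.2.6). Summing that over PGs gives one number per scrape; a sketch against a made-up, abridged sample of that JSON:

```python
import json

# Made-up, abridged sample of `ceph pg dump -f json` output; on a real
# cluster, feed the actual command output in instead.
sample = '''{"pg_map": {"pg_stats": [
  {"pgid": "2.0", "snaptrimq_len": 120},
  {"pgid": "2.1", "snaptrimq_len": 80}
]}}'''

pg_stats = json.loads(sample)["pg_map"]["pg_stats"]
total = sum(pg["snaptrimq_len"] for pg in pg_stats)
print(total)  # objects queued for snap trimming across all PGs
```

Filtering pg_stats by pgid prefix would give the same number per pool, which might show which snapshot removals are the expensive ones.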
That won't explain why my cluster bogs down, but at least it gives
some visibility. Running 17.2.6 everywhere by the way.
Angelo.
Hi everyone,
The User + Dev Monthly Meeting is happening next week on Thursday, August
24th @ 2:00 PM UTC at this link:
https://meet.jit.si/ceph-user-dev-monthly
(Note that the date has been rescheduled from the original date, August
17th.)
Please add any topics you'd like to discuss to the agenda:
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
Thanks,
Laura Flores
Hi!
I'm trying to figure out how to specify the path for a CephFS subvolume,
as it's intended to represent a user's home directory. By default, it's
located at /volumes/_nogroup/$NAME/$UUID. Is it possible to change this
path somehow, or is using symbolic links the only option?
Thank you
Michal Strnad
Hi,
I need to migrate a storage cluster to a new network.
I added the new network to the ceph config via:
ceph config set global public_network "old_network/64, new_network/64"
I've added a set of new mon daemons with IP addresses in the new network
and they are added to the quorum and seem to work as expected.
But when I restart the OSD daemons, they do not bind to the new addresses. I
would have expected the OSDs to try to bind to all networks, but they are
only bound to the old_network.
The idea was to add the new set of network config to the current storage
hosts, bind everything to ip addresses in both networks, shift over
workload, and then remove the old network.
Hi.
Did you manage to access the specific bucket?
Is it possible, and if so how, to obtain information about a bucket's
storage_class and placement through the API, in addition to the regular
information?
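As far as I can tell, `radosgw-admin bucket stats --bucket=<name>` (and the corresponding Admin Ops GET /admin/bucket call with stats) includes a placement_rule field; a sketch parsing a made-up, abridged sample of that output (field names should be verified against your Ceph version):

```python
import json

# Made-up, abridged sample of `radosgw-admin bucket stats --bucket=mybucket`
# output; the placement_rule field carries the placement target.
sample = '{"bucket": "mybucket", "placement_rule": "default-placement", "owner": "tenant1$user1"}'

stats = json.loads(sample)
print(stats["placement_rule"])
```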
Thanks
Michal
On 2/1/22 18:01, Daniel Iwan wrote:
> It seems that REST API and CLI support listing either all buckets in the
> system or buckets owned by a specific user by providing uid.
> https://docs.ceph.com/en/latest/radosgw/adminops/#get-bucket-info
>
> I would like to list bucket of a specific tenant, possibly with stats,
> therefore I would prefer to avoid fetching information for all buckets and
> filtering in the application layer.
>
> Looks like --tenant in CLI would be ideal, but at the moment it expects
> --uid also be provided.
>
> Is there any way to achieve that at the moment?
> I'm on Ceph 16.2.7
>
> Regards
> Daniel Iwan
> _______________________________________________
> ceph-users mailing list -- ceph-users(a)ceph.io
> To unsubscribe send an email to ceph-users-leave(a)ceph.io
SSD drives work awfully when full.
Even if I set the DB to SSD for 4 OSDs and there are 2 SSDs, the dashboard
daemon allocates all of the SSD.
I want to partition only 70% of the SSD for DB/WAL and leave the rest free
for SSD maneuvering.
Is there a way to create an OSD by manually telling it which disks or
partitions to use for data and DB (like the way I used to do it with
ceph-deploy)?
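With ceph-volume this is still possible directly, e.g. `ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1`, pointing --block.db at a pre-made partition sized at whatever fraction of the SSD you want. Under cephadm, a drive-group OSD service spec can achieve something similar; a sketch with placeholder values:

```yaml
# Sketch of a cephadm OSD service spec (placement and sizes are
# placeholders); block_db_size caps how much SSD each OSD's DB takes,
# leaving the remainder of the device unallocated.
service_type: osd
service_id: hdd-with-ssd-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  block_db_size: '50G'
```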
--
Alfrenovsky
Hi Ceph community,
My cluster has lots of log entries for an error from ceph-osd. I am encountering the following error message in the logs:
Aug 22 00:01:28 host008 ceph-osd[3877022]: 2023-08-22T00:01:28.347-0700 7fef85251700 -1 Fail to open '/proc/3850681/cmdline' error = (2) No such file or directory
My cluster is healthy, and I am looking to gain a better understanding of this error and its implications for the system's functioning, to avoid potential issues in the future.
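error = (2) is ENOENT; the usual explanation (a guess here, worth confirming on the tracker) is a benign race: the OSD observed a PID, that process exited, and the subsequent read of /proc/<pid>/cmdline found nothing. The race is easy to reproduce on any Linux box:

```python
import subprocess

# Reproduce the race deliberately: grab a PID, let the process exit,
# then try to read its /proc/<pid>/cmdline (Linux only).
p = subprocess.Popen(["sleep", "0"])
p.wait()  # process is reaped; its /proc entry is gone

try:
    with open(f"/proc/{p.pid}/cmdline", "rb") as f:
        f.read()
    outcome = "read ok"
except FileNotFoundError as e:
    outcome = f"error = ({e.errno}) No such file or directory"

print(outcome)
```

If that is what is happening, the message is noise about a process that exited mid-scan, not a sign of OSD trouble.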
root@ceph001:~# ceph -v
ceph version 16.2.13 (b81a1d7f978c8d41cf452da7af14e190542d2ee2) pacific (stable)