On Mon, Dec 2, 2019 at 12:48 PM Marc Roos <M.Roos(a)f1-outsourcing.eu> wrote:
>
>
> Hi Ilya,
>
> >
> >
> >ISTR there were some anti-spam measures put in place. Is your account
> >waiting for manual approval? If so, David should be able to help.
>
> Yes if I remember correctly I get waiting approval when I try to log in.
>
> >>
> >>
> >>
> >> Dec 1 03:14:36 c04 kernel: ceph: build_snap_context 100020c9287
> >> ffff911a9a26bd00 fail -12
> >> Dec 1 03:14:36 c04 kernel: ceph: build_snap_context 100020c9283
> >
> >
> >It is failing to allocate memory. "low load" isn't very specific,
> >can you describe the setup and the workload in more detail?
>
> 4 nodes (osd and mon combined); the 4th node has a local cephfs
> mount, which is rsync'ing some files from VMs. By 'low load' I mean I
> have a sort of test setup that is going to production. Mostly the
> nodes are below a load of 1 (except when the concurrent rsyncs start).
>
> >How many snapshots do you have?
>
> Don't know how to count them. I have a script running on ~2000 dirs;
> if one of these dirs is not empty, it creates a snapshot. So in
> theory I could have 2000 x 7 days = 14000 snapshots.
> (btw the cephfs snapshots are in a different tree than the one rsync
> is using)
Is there a reason you are snapshotting each directory individually
instead of just snapshotting a common parent?
If you have thousands of snapshots, you may eventually hit a different
bug:
https://tracker.ceph.com/issues/21420
https://docs.ceph.com/docs/master/cephfs/experimental-features/#snapshots
Be aware that each set of 512 snapshots amplifies your writes by 4K in
terms of network consumption. With 14000 snapshots, a 4K write would
need to transfer ~109K worth of snapshot metadata to carry itself out.
Thanks,
Ilya
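The arithmetic behind that figure can be checked quickly (a rough sketch; the 4K-per-512-snapshots ratio is taken from the note above):

```python
# Back-of-the-envelope check of the snapshot write-amplification
# figure quoted above: ~4K of snapshot metadata per 512 snapshots.
def snap_metadata_kib(num_snapshots, snaps_per_chunk=512, kib_per_chunk=4):
    """Approximate KiB of snapshot metadata each write must carry."""
    return num_snapshots / snaps_per_chunk * kib_per_chunk

print(round(snap_metadata_kib(14000), 1))  # 14000 / 512 * 4 = 109.4
```

So at 14000 snapshots, every small write drags roughly 109K of metadata with it, which is why snapshotting one common parent instead of 2000 individual directories keeps the count (and the overhead) down.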
Hi,
I have a CephFS instance and I am also planning to deploy an
Object Storage interface.
My servers have 2 network interfaces each. I would like to use the
current local one to talk to Ceph clients (both CephFS and Object
Storage) and use the second one for all the Ceph processes to talk to
each other.
I'm quite sure that Ceph has support for this kind of setup but I
can't find how to do such a thing.
I already have a "public network" setting that, AFAIU, sets the
interface used by Ceph to talk to clients, but I don't know if it
applies to the CephFS MDS, to RGW, or to all of them.
And I can't see how to make the different Ceph processes talk to each
other through the other interface. I found the "mon_host" setting, but
should I change it to get what I want? And what else?
Is there some example of such a setup that I could learn from?
Or maybe someone would be kind enough to sketch a basic config to
achieve this setup?
Or maybe there is some documentation that deals with such a scenario
that I haven't found.
Well, I'm searching for more info of any kind.
Thanks in advance for your help and attention,
Rodrigo Severo
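The split described here matches Ceph's distinction between a public network and a cluster network, both set in ceph.conf. A minimal sketch might look like this (the subnets below are placeholders, not recommendations):

```ini
# Minimal ceph.conf sketch, assuming placeholder subnets.
# Clients (CephFS, RGW) reach the cluster over the public network;
# OSD replication, recovery, and heartbeat traffic use the cluster network.
[global]
public_network = 192.168.1.0/24
cluster_network = 10.0.0.0/24
```

Note that only OSD-to-OSD traffic moves onto the cluster network; the mons, MDS, and RGW daemons all listen on the public network, so mon_host must remain reachable from clients and should not be changed for this purpose.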