Hi everyone,
I'm hoping we can resolve some of these questions over email and keep our
discussion short. I also think we should schedule a call as early on Monday
as we can to try to finalize this. I'm sorry I couldn't do any of this
last week--for the last week of my trip we only had a satellite link in
the lounge area, and it was super flaky (e.g., it went out when the clouds
came in in the evening).
I'm sending an invite.. let me know if this should be moved forward or
back.
Assuming we're still targeting 3 tracks (not 4, as the draft agenda Jen
sent shows), then we have space for ~45 talks.
If we accept the tutorial (highly rated at 4.6) then that would consume 2
or 3 slots, leaving 42-43 talk slots. I'm thinking 2 slots (1.5 hours)
would be sufficient? So, 43 talks.
Lightning talks would take 1 or 2 slots (probably 2), so 41 talks.
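(Just to sanity-check the slot math, here it is as a tiny script; the counts
are the working assumptions above, nothing final:)

    # rough slot math, assuming 3 tracks and the numbers above
    total_slots = 45        # ~45 talk slots with 3 tracks
    tutorial_slots = 2      # tutorial at 1.5 hours = 2 slots
    lightning_slots = 2     # probably 2 slots of lightning talks
    remaining = total_slots - tutorial_slots - lightning_slots
    print(remaining)        # -> 41 regular talk slots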
If we follow a similar keynote format to Barcelona, then we would have:
- 1 opening keynote, what's new, etc.
- 1 invited keynote
- 4 platinum keynotes (15m each)
- 4 gold keynotes (8m each)
- CLT panel
I think the simplest thing would be to pick one of the submitted talks and
invite the speaker to present it in the keynote slot. The normal talks are 40m
and the keynote is usually a bit shorter (30m), but we can probably be
flexible. There are a few talks that have been tagged as candidates:
- Matthew Leonard @ Bloomberg (we added 6 PB of storage and no one
noticed)
- Tom Byrne (50PB of ceph adventures). His talks are consistently good
and very well received.
- BY Chen (multi-pb scale storage using ceph in taiwan computing cloud)
The other one that caught my eye is NAVER. Interesting because they are
South Korean, and it's ceph + openstack + kubernetes (checks all boxes).
I've never seen them present, though. I think Lars knows more about them?
Anyway, assuming we pick one of those, then we'd have 41 additional slots
to fill, for a total of 43 again. Right now that cutoff falls at around an
average rating of 3.6.
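(In case anyone wants to reproduce the cut line locally, this is roughly all
I'm doing -- sort a CSV export of the submissions sheet by average rating and
see where the 43rd talk lands. The filename and column names below are just
placeholders for whatever the sheet actually exports:)

    # sort a (hypothetical) CSV export of the submissions sheet by rating
    # and print the submission sitting at the cutoff (43rd place)
    import csv

    with open("submissions.csv") as f:
        rows = sorted(csv.DictReader(f),
                      key=lambda r: float(r["avg_rating"]),
                      reverse=True)

    cutoff = rows[42]   # 43rd-highest-rated submission (0-indexed)
    print(cutoff["id"], cutoff["title"], cutoff["avg_rating"])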
If we start breaking this down into tracks/topics, through row 44,
ignoring the lightning talks, then we have the below. I tagged a few of
these as candidates for moving to CDS on Tuesday... I think that 200 is a
definite, and 203 and 64 are maybes.. what do you all think? 64
especially seems like it warrants a real-time conversation rather than a preso.
203 sounds like they went and implemented something, so it may warrant a
talk, but it makes me nervous since it's not upstream. 59 (deepika's
observability talk) could also happen during CDS, but I think it might be
nicer to showcase the work with a full talk.. which makes me think li's 203
should get one too?
crimson:
110 crimson status (kefu/radek)
188 seastore (sam)
40 crimson-osd with alien from chunmei (i'd vote yes on this?)
no: 51 crimson messenger
no: 79 spdk one from intel
user:
136 all nvme (at&t)
149 we added 6 pb of ceph (bloomberg)
177 50pb of ceph (STFC)
130 staying ahead of the curve (dan/CERN)
158 multi-pb at twcc (asus and suse)
55 object at scale, devops perspective (flipkart)
76 20k rbd volumes k8s on openstack (naver)
163 rgw road to production, security challenges at workday (workday)
ops:
150 battling alert fatigue with prometheus (mnaser vexxhost)
31 ceph traps and pitfalls (xie zte)
23 1 year tuning ceph distilled (david byte suse)
community:
88 ceph in china and how to make it bigger (intel)
111 tracking cephs in the wild (lmb)
rook:
206 tutorial (msft + sk telecom)
118 rook best practices (blaine suse)
82 make storage scale with rook-ceph (seb red hat)
77 whatever can go wrong will (sagy red hat)
dev:
120 ceph on windows
[CDS] 200 rgw needs microservices (robin digitalocean)
90 optimizing ceph on arm (arm)
[CDS?] 203 *store and persistent memory (igor and tushar)
59 reworking observability in ceph (deepika red hat)
rbd:
38 disaster recovery
[CDS?] 64 qos for rbd (liwang didi)
rgw:
116 rgw multisite, octopus and beyond (yehuda and casey)
127 demystifying access mgmt (abhishek)
162 rgw with billions of objects (LINE)
115 automating data pipelines with ceph, knative, strimzi (red hat)
133 rgw capabilities and future plans (matt + uday/red hat)
cephfs:
83 samba clustering improvements (david disseldorp suse)
104 schedule snapshots to async rep (jan, suse)
rados:
100 bluestore best practices (vikyat red hat)
202 stretch clusters (greg red hat)
63 global dedup (myoungwon samsung)
151 disaggregating with nvmeof (zoltan ibm)
misc:
197 orchestrator (swagner suse)
91 dashboard (lenz suse)
I think these ones are mostly shoo-ins. If we move those 2 to
CDS and promote one user talk to a keynote, then we can accept 7 more.
If we consolidate blaine and seb's rook talks (or seb and sagy's, maybe),
then we can take 8 more. That's where I think it gets interesting, since
there are a lot of talks in the row 45-65 range (3.4-3.6 rating) where
we should exercise editorial control...
Listing them here:
187 gosbench (christopher blum red hat)
15 putting compute in your storage (federico red hat)
139 migrating to new hw without disruption (tyler brekke red hat)
152 fixing ceph 101 (mnaser vexxhost)
140 decoupling ceph (bluestore nvmf) (arun intel)
185 high perf cephfs for ebay logging (xiaoxi ebay)
13 ceph security (danny DT)
92 ceph on public clouds (josh and orit, red hat)
94 demystifying perf on k8s (aakarsh, dustin red hat)
101 large scale ceph on arm (yaowei china mobile)
103 large scale mgmt (china mobile)
[CDS?] 108 rbd snaps independent with cow/radix trees (roman suse)
97 teuthology status and future (kyr suse)
51 crimson messenger (yingxin intel)
67 crash consistent client cache (lisa intel)
135 building storage ai/ml platform (sherard red hat)
142 turbocharging rook database workloads (alex calhoun red hat)
194 object bucket claims in rook (jiffin red hat)
28 ceph in the real world, what we learned (safespring)
25 ceph in ram (diamond light source)
78 cloud functions with knative (song LINE)
If there are any talks below row 71 that anyone wants to nominate to
rescue/consider, let's add them to this list? Skimming the list, I see
multiple upstream/downstream community talks, none of which made the cut.
Maybe we can accept one or more of those as lightning talks?
I'm ignoring the lightning talks for the moment. I think we can choose
those quickly by just picking which ones (if any) we don't want. :)
Doc links, in case you need them:
https://docs.google.com/spreadsheets/d/1bYCepH57ZjhuE3WoFUNQLfNOL2h2ti6Kln0…
https://docs.google.com/spreadsheets/d/1zSK6g7eadJ6cjT7dNLnO6yRseMLuKWNcMiP…
sage
_______________________________________________
Cephalocon-seoul-2020-pc mailing list -- cephalocon-seoul-2020-pc(a)ceph.io
To unsubscribe send an email to cephalocon-seoul-2020-pc-leave(a)ceph.io