I previously carved out a set of 30GB OSDs from the extra disk space on my SSDs for the CephFS metadata pool, and my entire cluster locked up about three weeks later. Some metadata operation filled several of the 30GB OSDs to 100%, and all IO in the cluster was blocked. I worked around it with some trickery: deleting one copy of a few PGs on each full OSD (making sure I still had at least two copies of every PG), which freed enough space to backfill the pool back onto my HDDs and restore cluster functionality. I would say that trying to use that space is definitely not worth it.
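For reference, the PG-copy deletion was along these lines. This is a rough sketch rather than the exact commands from my history; the OSD ID, PG ID, pool name, and CRUSH rule name are all placeholders for your own values:

```shell
# Stop the full OSD so its object store can be opened offline.
systemctl stop ceph-osd@7

# Remove one surplus copy of a PG from this OSD to free space.
# Only do this if the PG still has at least 2 healthy copies elsewhere!
ceph-objectstore-tool \
    --data-path /var/lib/ceph/osd/ceph-7 \
    --pgid 3.1f \
    --op remove --force

systemctl start ceph-osd@7

# Point the metadata pool back at an HDD-backed CRUSH rule so it
# backfills off the tiny SSD OSDs ("replicated_hdd" is a placeholder
# for whatever rule exists on your cluster).
ceph osd pool set cephfs_metadata crush_rule replicated_hdd
```

Verify with `ceph pg ls` and `ceph -s` between steps; you want the PG to stay active (degraded is fine) the whole time, never incomplete.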
In one of my production clusters I occasionally get a health warning that an omap object in my buckets.index pool is too large. I can easily imagine that stalling the entire cluster if the index pool were on such small OSDs.
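If you hit the same warning, this is roughly how I chase it down. A sketch, not a runbook: the bucket name and shard count are placeholders, and on recent RGW releases dynamic resharding may take care of this for you:

```shell
# See which pool tripped the LARGE_OMAP_OBJECTS health warning.
ceph health detail

# Deep scrub logs "Large omap object found" with the pool and object
# names, so the cluster log identifies the offending object(s).
zgrep -i "large omap object" /var/log/ceph/ceph.log*

# If it's an RGW bucket index, spread the index across more shards.
radosgw-admin bucket reshard --bucket=my-bucket --num-shards=128
```

The warning itself is harmless, but it is a sign that a single index object is concentrating IO and omap data on one PG, which is exactly the failure mode that would hurt most on tiny OSDs.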