Definitely not. ᚏ and ᚋ are different letter characters. Even with a zero width joiner, this would still read as the equivalent of RM, which is simply incorrect. It would be like typing two v's and calling it a w. Visually it might look close enough, but it wouldn't work in, for example, a hyperlink.
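To illustrate at the codepoint level, here is a minimal Python sketch (the ZWJ trick discussed above, shown as plain text rather than rendered glyphs):

```python
import unicodedata

# U+168F OGHAM LETTER RUIS (R) and U+168B OGHAM LETTER MUIN (M)
ruis = "\u168F"
muin = "\u168B"

# Joining them with a zero width joiner (U+200D) still leaves two
# distinct letter characters in the underlying text.
fake_six_stroke = ruis + "\u200D" + muin

print([unicodedata.name(c) for c in fake_six_stroke])
# Three code points, two of them letters; no Unicode normalization
# form merges them into a single character.
assert unicodedata.normalize("NFC", fake_six_stroke) == fake_six_stroke
```

Whatever a clever font might do with that sequence visually, anything that processes the text itself (search, collation, transliteration) will see R followed by M.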
And in any case, it would only end up in a hyperlink if someone were lazy enough to autoconvert a file name written in Ogham into a hyperlink. If you build the URL manually, you don't have to call it RM.
A hyperlink was just one example, intended to allow comparison with Roman script. A more common issue arises when people get ogham tattoos, for example. It wouldn't be uncommon for someone to transliterate from Roman script to ogham using an online tool. Many other use cases also require accuracy.
But that would only become a usability issue if you had an ogam screenreader, and that's just not happening. Which means that, when reading an online or print document, even if the underlying code technically read RM, the difference wouldn't be visible to human eyes. Nobody reads hyperlinks to that extent.
I think you're underestimating how frequently ogham is used, and in what variety of ways. A screenreader is by no means the only application that can be affected by - let's call it what it is - deliberately incorrect spelling for the sake of a graphical approximation. I've encountered many.
What makes you call it defective? It was used at least three times in a single source. I suspect it was absolutely deliberate. If we were talking about a Roman script R with a suspension stroke, you would hardly call it a defective spelling just because the intention behind its usage wasn’t clear.
If you think that the six-stroke form is a distinct character from ᚏ, then propose it for encoding as a separate character. I doubt the evidence is strong enough for it to be accepted, but it has a better chance than a combining stroke character (which imo has zero chance).
If these really are just graphical variants with no difference in meaning or pronunciation, wouldn’t standardised variation sequences be the way to go? That’s how variant forms in the Myanmar, Phags-Pa, and Manichaean scripts are already being handled, and this case seems very similar to those.
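For illustration, a standardised variation sequence is nothing more than the base character followed by a variation selector in the plain text. The sequence below is purely hypothetical, since no variation sequence is currently defined for any ogham character:

```python
# Hypothetical sketch: base character + variation selector.
# No SVS is actually registered for ogham; U+FE00 is illustrative only.
ruis = "\u168F"   # OGHAM LETTER RUIS
vs1 = "\uFE00"    # VARIATION SELECTOR-1

six_stroke_request = ruis + vs1

# Two code points, but still one logical letter: a renderer without
# the sequence simply falls back to the base glyph, and searching or
# collation still sees the same base character.
print(len(six_stroke_request))
assert six_stroke_request.startswith(ruis)
```

That fallback behaviour is the appeal here: text using the variant degrades gracefully to the ordinary five-stroke R instead of silently spelling something else.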
I think this would probably require consensus among experts that the 6-stroke R is an acceptable variant of the 5-stroke R, and that their use cases are not distinct. I do think, though, that standardised variation sequences might be a good way to deal with the character variants of the attested ogham ciphers.
I agree that an SVS would be an acceptable solution, but the UTC has repeatedly stated that it is not willing to define SVSes for epigraphic or calligraphic glyph variants (they already rejected my proposal for ogham variation sequences in L2/16-110).
I think it is a reasonable and acceptable solution for representing a rarely-attested defective character form, and better than encoding a combining stroke (or four combining strokes), which would almost certainly not be accepted by the Unicode Technical Committee.
The Unicode Standard is perfectly correct to specify that ogham text *should* be rendered left-to-right, but that does not mean you cannot override the default directionality of ogham text using bidirectional control characters, with a font that supports OpenType rtla and rtlm features.
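A minimal sketch of such an override, using the RLO/PDF control pair (this only affects display order, and of course assumes a renderer and font that actually support right-to-left ogham):

```python
import unicodedata

ogham = "\u168F\u168B"   # sample ogham text (RUIS, MUIN)

# Ogham letters are strong left-to-right characters by default.
assert all(unicodedata.bidirectional(c) == "L" for c in ogham)

# Wrapping the run in U+202E RIGHT-TO-LEFT OVERRIDE ... U+202C POP
# DIRECTIONAL FORMATTING forces right-to-left display order; the
# stored text itself is unchanged.
rtl_run = "\u202E" + ogham + "\u202C"
print(rtl_run)
```

The OpenType rtla/rtlm features then let the font substitute or mirror glyph forms appropriate to a right-to-left run; without font support you would get the left-to-right glyphs merely reordered.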