by kogir on 3/24/21, 10:48 PM with 89 comments
by lifthrasiir on 3/26/21, 7:49 AM
Mainly because skin tone modifiers [1] predate the ZWJ mechanism [2]. For hair colors there were two contending proposals [3] [4], one of which doesn't use ZWJ, and the ZWJ proposal was accepted because new modifiers (as opposed to ZWJ sequences) would have required an architectural change [5]. (A small sketch of the two mechanisms follows the links.)
[1] https://www.unicode.org/L2/L2014/14213-skin-tone-mod.pdf
[2] https://www.unicode.org/L2/L2015/15029r-zwj-emoji.pdf
[3] https://www.unicode.org/L2/L2017/17082-natural-hair-color.pd...
[4] https://www.unicode.org/L2/L2017/17193-hair-colour-proposal....
[5] https://www.unicode.org/L2/L2017/17283-response-hair.pdf
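To make the distinction concrete, here's a minimal Python sketch of the two mechanisms (the specific sequences are my own examples, not taken from the proposals):

    # 1. Modifier: a bare codepoint placed directly after the base emoji.
    #    U+1F44B WAVING HAND + U+1F3FD EMOJI MODIFIER FITZPATRICK TYPE-4
    wave_medium = "\U0001F44B\U0001F3FD"

    # 2. ZWJ sequence: existing emoji glued together with U+200D ZERO WIDTH JOINER.
    #    U+1F469 WOMAN + ZWJ + U+1F9B0 RED HAIR (the accepted hair mechanism)
    red_haired_woman = "\U0001F469\u200D\U0001F9B0"

    for s in (wave_medium, red_haired_woman):
        print(s, [hex(ord(c)) for c in s])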
by devadvance on 3/26/21, 5:57 AM
Another thing worth calling out: you can get involved in emoji creation and Unicode in general. You can do this directly, or by working with groups like Emojination [0].
by rkangel on 3/26/21, 11:15 AM
> The most popular encoding we use is called Unicode, with the two most popular variations called UTF-8 and UTF-16.
Unicode is a list of codepoints - the characters talked about in the rest of the article. These live in a number space that's very big (~2^21, as discussed).
You can talk about these codepoints in the abstract as this article does, but at some point you need to put them in a computer - store them on disk or transmit them over a network connection. To do this you need a way to make a stream of bytes store a series of Unicode codepoints. This is an 'encoding'; UTF-8, UTF-16, UTF-32 etc. are different encodings.
UTF-32 is the simplest and most 'obvious' encoding to use. 32 bits is more than enough to represent every codepoint, so just use a 32-bit value to represent each codepoint, and keep them in a big array. This has a lot of value in simplicity, but it means that text ends up taking a lot of space. Most western text (e.g. this page) fits in the first 128 codepoints, so for the majority of values most of the bits will be 0.
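A quick Python illustration of that waste (bytes.hex with a separator needs Python 3.8+):

    # UTF-32: a fixed 4 bytes per codepoint, so ASCII-heavy text is mostly zeros.
    data = "Hi!".encode("utf-32-be")  # big-endian, no BOM
    print(data.hex(" "))  # 00 00 00 48 00 00 00 69 00 00 00 21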
UTF-16 is an abomination that is largely Microsoft's fault and is the default Unicode encoding on Windows. It is based on the fact that most text in most languages fits in the first 65,536 Unicode codepoints - referred to as the 'Basic Multilingual Plane'. This means that you can use a 16-bit value to represent most codepoints, so Unicode is stored as an array of 16-bit values ("wide strings" in MS APIs). Obviously not all Unicode values fit, so two UTF-16 values (a 'surrogate pair') can be used to represent a single codepoint. There are many problems with UTF-16, but my favourite is that it really helps you to have 'Unicode surprises' in your code. Code somewhere in your stack that assumes one unit per character and barfs on multi-unit values is a well-known class of bug, and with a fully variable-length encoding you find it in testing fairly often. Because UTF-16 uses a single value for the vast majority of common codepoints, it makes that worse: the multi-unit case arises so rarely that you will inevitably only discover it in production.
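That failure mode in miniature:

    # UTF-16: one 16-bit unit for BMP codepoints, a surrogate pair above U+FFFF.
    for ch in ("A", "\u00E9", "\U0001F600"):  # A, é, grinning face
        units = ch.encode("utf-16-be")
        print(f"U+{ord(ch):04X} -> {len(units) // 2} unit(s): {units.hex(' ', 2)}")
    # U+1F600 comes out as the surrogate pair D83D DE00: the rare case
    # that tends to surface only in production.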
UTF-8 is generally agreed to be the best encoding (particularly among people who don't work for Microsoft). It is a fully variable-length encoding, so a single codepoint can take 1, 2, 3 or 4 bytes. It has lots of nice properties, but one is that codepoints <= 127 encode as a single byte. This means that proper ASCII is valid UTF-8.
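The variable-length property and the ASCII compatibility, in a few lines of Python:

    # UTF-8: 1-4 bytes per codepoint; 7-bit ASCII is already valid UTF-8.
    for ch in ("A", "\u00E9", "\u20AC", "\U0001F600"):  # A, é, €, grinning face
        print(f"U+{ord(ch):04X} -> {len(ch.encode('utf-8'))} byte(s)")
    assert "plain ASCII".encode("ascii") == "plain ASCII".encode("utf-8")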
by vanderZwan on 3/26/21, 10:27 AM
Well this nerd-sniped me pretty hard
https://next.observablehq.com/@jobleonard/which-unicode-flag...
That was a fun little exercise, but enough time wasted, back to work.
by aglionby on 3/26/21, 10:40 AM
Back in 2015, Instagram did a blog post on similar challenges they came across implementing emoji hashtags [1]. Spoiler alert: they programmatically constructed a huge regex to detect them (a toy version below).
[1] https://instagram-engineering.com/emojineering-part-ii-imple...
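Roughly the shape of that approach, as a toy Python sketch (three hand-picked sequences stand in for the full emoji data files; this is not Instagram's actual code):

    import re

    # Known emoji sequences, longest first so multi-codepoint sequences
    # match before their own prefixes do.
    sequences = [
        "\U0001F469\u200D\U0001F4BB",  # woman technologist (ZWJ sequence)
        "\U0001F1FA\U0001F1F8",        # US flag (regional indicator pair)
        "\U0001F600",                  # grinning face
    ]
    emoji_re = re.compile("|".join(
        re.escape(s) for s in sorted(sequences, key=len, reverse=True)
    ))

    print(emoji_re.findall("hashtag \U0001F469\u200D\U0001F4BB life \U0001F600"))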
by peteretep on 3/26/21, 9:15 AM
> “Ü” is a single grapheme cluster, even though it’s composed of two codepoints: U+0055 UPPER-CASE U followed by U+0308 COMBINING DIAERESIS.
This would be a great opportunity to talk about normalization forms, because there's also a single-codepoint version: U+00DC LATIN CAPITAL LETTER U WITH DIAERESIS.
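For instance, in Python:

    import unicodedata

    composed = "\u00DC"            # U+00DC LATIN CAPITAL LETTER U WITH DIAERESIS
    decomposed = "\u0055\u0308"    # U+0055 + U+0308 COMBINING DIAERESIS

    print(composed == decomposed)                                # False
    print(unicodedata.normalize("NFC", decomposed) == composed)  # True
    print(unicodedata.normalize("NFD", composed) == decomposed)  # True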
by Hawzen on 3/26/21, 8:55 AM
Unicode is a character set, not an encoding. UTF-8, UTF-16, etc. are encodings of that character set.
by Robizzle01 on 3/27/21, 12:03 AM
Regarding Windows and flags, I heard it was a geopolitical issue. Basically, to support flag emoji you’d have to decide whether or not to recognize some states (e.g. Taiwan) which can anger other states. Not sure if that’s the real reason or not.
A couple of questions I still have:
1. Why make flags multiple code points when there's plenty of unused address space to assign a single code point?
2. Any entertaining backstories regarding platform-specific non-standard emoji, such as Windows' ninja cat (https://emojipedia.org/ninja-cat/)? Why would they use those code points rather than ?
3. Is it possible to modify Windows to render emoji using Apple's font (or a modified Segoe that looks like Apple's)?
4. Which emoji look the most different depending on platform? Are there any that cause miscommunication?
5. Do any glyphs render differently based on background color, e.g. dark mode?
by woko on 3/26/21, 8:18 AM
Why would 2^21 not be a multiple of 2^3?
by mannerheim on 3/26/21, 6:03 AM
Not quite true, you can get US state flags with this as well.
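Both flag mechanisms in a short Python sketch (the helper names are mine): country flags are pairs of regional indicator symbols, while subdivision flags like Scotland or US states are emoji tag sequences. Whether a given sequence actually renders as a flag is entirely up to the platform; the US state sequences are valid but not on the RGI list, which is why support is spotty.

    # Country flags: two regional indicator symbols (U+1F1E6..U+1F1FF).
    def country_flag(iso):          # e.g. "US", "JP"
        return "".join(chr(0x1F1E6 + ord(c) - ord("A")) for c in iso.upper())

    # Subdivision flags: U+1F3F4 BLACK FLAG + tag characters + U+E007F CANCEL TAG.
    def subdivision_flag(code):     # e.g. "ustx" (Texas), "gbsct" (Scotland)
        tags = "".join(chr(0xE0000 + ord(c)) for c in code)
        return "\U0001F3F4" + tags + "\U000E007F"

    print(country_flag("US"), subdivision_flag("ustx"))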
by bewuethr on 3/27/21, 6:58 PM
I have one nit about an omission: in addition to the emoji presentation selector, FE0F, which forces "presentation as emoji", there's also the text presentation selector, FE0E, which does the opposite [1].
The Emoji_Presentation property [2] determines when either is required; code points with both an emoji and a text presentation and the property set to "Yes" default to emoji presentation without a selector and require FE0E for text presentation; code points with the property set to "No" default to text presentation and require FE0F for emoji presentation.
There's a list [3] with all emoji that have two presentations, and the first three rows of the Default Style Values table [4] show which emoji default to which style. (A small demo follows the links.)
[1]: https://unicode.org/reports/tr51/#Emoji_Variation_Sequences
[2]: http://unicode.org/reports/tr51/#Emoji_Properties_and_Data_F...
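Both selectors in a quick Python demo (what you actually see depends on your font and renderer):

    # U+2764 HEAVY BLACK HEART: Emoji_Presentation=No, defaults to text style.
    print("\u2764")          # text presentation by default
    print("\u2764\uFE0F")    # FE0F forces emoji presentation
    # U+26A1 HIGH VOLTAGE: Emoji_Presentation=Yes, defaults to emoji style.
    print("\u26A1")          # emoji presentation by default
    print("\u26A1\uFE0E")    # FE0E forces text presentation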
by artur_makly on 3/26/21, 1:42 PM
And how do we as a community propose new icons, or nominate existing ones to be removed or replaced?
by ijidak on 3/26/21, 5:44 AM
Big thank you to the OP.