How do Java's 16-bit chars support Unicode?
Java's char is 16 bits, yet Unicode has far more characters - how does Java deal with that?
http://en.wikipedia.org/wiki/UTF-16
In computing, UTF-16 (16-bit UCS/Unicode Transformation Format) is a variable-length character encoding for Unicode, capable of encoding the entire Unicode repertoire. The encoding form maps each character to a sequence of 16-bit words. Characters are known as code points and the 16-bit words are known as code units. For characters in the Basic Multilingual Plane (BMP) the resulting encoding is a single 16-bit word. For characters in the other planes, the encoding will result in a pair of 16-bit words, together called a surrogate pair. All possible code points from U+0000 through U+10FFFF, except for the surrogate code points U+D800–U+DFFF (which are not characters), are uniquely mapped by UTF-16 regardless of the code point's current or future character assignment or use.
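To make the surrogate-pair mapping concrete, here is a minimal sketch of the arithmetic described above, using U+1D50A as the example code point (variable names are just for illustration):

// Surrogate-pair arithmetic for a supplementary code point
int codePoint = 0x1D50A;                        // outside the BMP
int offset = codePoint - 0x10000;               // 20-bit value 0x0D50A
char high = (char) (0xD800 + (offset >> 10));   // high surrogate: 0xD835
char low  = (char) (0xDC00 + (offset & 0x3FF)); // low surrogate:  0xDD0A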
Java Strings are UTF-16 (big endian), so a Unicode code point can be one or two chars. Under this encoding, Java represents the code point U+1D50A (MATHEMATICAL FRAKTUR CAPITAL G) using the chars 0xD835 0xDD0A (string literal "\uD835\uDD0A"). The Character class provides methods for converting to/from code points.
// Unicode code point to char array
char[] math_fraktur_cap_g = Character.toChars(0x1D50A);
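Going the other way, the String code point methods treat a surrogate pair as a single code point; a short sketch continuing the same example:

// Round trip: two chars (code units), but one code point
String g = new String(Character.toChars(0x1D50A));
System.out.println(g.length());                            // 2 (char count)
System.out.println(g.codePointCount(0, g.length()));       // 1 (code point count)
System.out.println(Integer.toHexString(g.codePointAt(0))); // 1d50a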
Java uses UTF-16 for strings, which basically means that characters are variable-width. Most fit in a single 16-bit code unit, but those outside the Basic Multilingual Plane occupy 32 bits (a surrogate pair of two 16-bit code units). The scheme is similar in spirit to UTF-8.
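Because of that variable width, iterating a string char by char can split a supplementary character into its surrogate halves; iterating by code point avoids this. A sketch using codePoints(), available since Java 8:

// Iterate by code point so surrogate pairs stay intact
"a\uD835\uDD0Ab".codePoints()
        .forEach(cp -> System.out.printf("U+%04X%n", cp));
// prints U+0061, U+1D50A, U+0062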