Encoding
Since many people make mistakes with encodings, here is a short text explaining what exists.
Introduction
Computers work on words of bits, usually 8, 16, 32 or 64 bits today. These words are meaningless to the computer itself; they can be interpreted in many ways: as numbers, as instructions, as characters, as true/false values...
An encoding is a set of rules defining how to translate a sequence of words into a sequence of characters.
Single byte encoding
As computers can easily work with groups of bits called bytes, one of the first solutions was to use a different byte value for every character. It is easy to understand and to program: every byte encodes a single character; it is space efficient (memory was really expensive); and you can measure a string by counting bytes. A byte can take 256 different values, so you can have up to 256 characters.
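A small Python sketch of the one byte = one character rule (the text and values here are only illustrative):

```python
# With a single-byte encoding, one byte is one character,
# so counting bytes counts characters.
text = "Hello!"
data = text.encode("ascii")   # encode to the single-byte ASCII encoding
print(len(data))              # 6 bytes for 6 characters
print(data[0])                # 72, the byte value assigned to 'H'
```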
When computing was mostly a Western story, ASCII was one of the first and most successful encodings. It defines 128 characters, including the upper and lower case Latin alphabet, digits, punctuation and control characters. Values above 127 are not defined (allowing extensions or control values). It is a good character set for writing English.
EBCDIC is another byte encoding, defined by IBM at about the same time, which includes mostly the same characters, but values above 127 are not free.
Since many languages use characters not included in the original ASCII set, many countries or groups of countries defined new encodings based on ASCII. The missing characters are placed in the 128 unused values (above 127). As 128 extra values are not enough for every alphabet on Earth, there are many of these encodings. Among the most successful is the ISO-8859 family, in particular ISO-8859-1, which includes characters for 29 Western European and African languages.
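The consequence is that the same byte value means a different character in each ISO-8859 variant. A quick sketch (the byte 0xE6 is just one example):

```python
# The same byte value maps to different characters
# in different ISO-8859 variants.
b = bytes([0xE6])
print(b.decode("iso-8859-1"))  # 'æ' in Latin-1 (Western European)
print(b.decode("iso-8859-2"))  # 'ć' in Latin-2 (Central European)
```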
Unicode
These encodings, where 1 byte corresponds to 1 character, are really great for English and work well for languages based on the Latin alphabet, but they are really bad for other alphabets: some use more than 256 signs.
You cannot write a text mixing characters from different encodings, and when sending a text you always have to agree on a common encoding.
Here comes Unicode. But first, learn this: Unicode is not an encoding.
Unicode is an industry standard in which every character of every real language, past or present (now over 100,000 characters), is assigned a distinct numeric value called a code point. This code point can be recorded in many ways. The first 128 code points have the same values as in the ASCII encoding.
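A code point is just a number, independent of how it is stored. A minimal illustration in Python:

```python
# A code point is a number assigned to a character.
print(ord("A"))        # 65, the same value as in ASCII
print(hex(ord("é")))   # 0xe9
print(hex(ord("€")))   # 0x20ac, beyond any single-byte range
print(chr(0x20AC))     # '€' back from its code point
```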
Do you remember that Unicode is not an encoding? OK; in fact, Unicode also defines several different encodings to store code points.
Multi-byte encodings
One word = one character was a great rule, so let's keep it. When Unicode was young, code points were all below 65536, so every character could be represented with a 2-byte (16-bit) word. This is called UCS-2. UCS-2 is deprecated.
Now code points go up to U+10FFFF, so every character can be represented with a 4-byte (32-bit) word. This is called UCS-4, or UTF-32.
For an English text, because every character is encoded with 4 bytes, the size is 4 times bigger than the same text in the ASCII encoding.
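You can see the 4x blow-up directly (the `utf-32-be` variant is used here so no byte-order mark is added):

```python
# Every character takes exactly 4 bytes in UTF-32.
text = "Hello"
print(len(text.encode("ascii")))      # 5 bytes
print(len(text.encode("utf-32-be")))  # 20 bytes: 4x the size
```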
Variable-length encodings
UCS-2 and UCS-4 are great, but they take too much space, and an old program written for 1 byte = 1 character will not work with them. We need an encoding where ASCII characters keep their byte values, but with the ability to encode any Unicode character.
So UTF-8 was created. It is a variable-length encoding: ASCII characters are the same as in an ASCII-encoded text, and other characters are encoded using 2, 3 or 4 bytes. You should use this for all your SWORD files now.
For programs written for UCS-2, there is also an extension: UTF-16 uses a single 2-byte word for characters below 65536, and a pair of words (a surrogate pair) above.
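The variable lengths of both encodings can be checked directly (the characters below are arbitrary examples):

```python
# UTF-8: ASCII stays 1 byte; other characters take 2 to 4 bytes.
print(len("A".encode("utf-8")))             # 1 byte
print(len("é".encode("utf-8")))             # 2 bytes
print(len("€".encode("utf-8")))             # 3 bytes
print(len("\U0001F600".encode("utf-8")))    # 4 bytes (a character above U+FFFF)

# UTF-16: one 2-byte word below U+10000, a surrogate pair above.
print(len("€".encode("utf-16-be")))           # 2 bytes: one word
print(len("\U0001F600".encode("utf-16-be")))  # 4 bytes: a pair of words
```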
Common mistakes
If you use the wrong encoding, some characters will be displayed incorrectly, because the computer decoded them using the wrong rules.
The two most common encodings are ISO-8859-1 and UTF-8.
A UTF-8 text decoded as ISO-8859-1 (or ISO-8859-*)
You will see two or three characters in place of each non-ASCII one, usually beginning with Ã or Â.
Et Dieu vit la lumière, qu'elle était bonne; et Dieu sépara la lumière d'avec les ténèbres.
will show as
Et Dieu vit la lumière, qu'elle était bonne; et Dieu sépara la lumière d'avec les ténèbres.
An ISO-8859-1 text decoded as UTF-8
The UTF-8 decoder will find invalid multi-byte sequences; it can report an error, or substitute question marks or squares:
Et Dieu vit la lumière, qu'elle était bonne; et Dieu sépara la lumière d'avec les ténèbres.
will show as
Et Dieu vit la lumi?re, qu'elle ?tait bonne; et Dieu s?para la lumi?re d'avec les t?n?bres.
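The reverse direction can also be reproduced: a strict UTF-8 decoder raises an error, and a lenient one substitutes the U+FFFD replacement character (often rendered as a question mark or square).

```python
# ISO-8859-1 bytes with accents are usually invalid UTF-8.
data = "lumière".encode("iso-8859-1")          # b'lumi\xe8re'
try:
    data.decode("utf-8")                       # strict decoding fails
except UnicodeDecodeError as e:
    print("invalid UTF-8:", e.reason)
print(data.decode("utf-8", errors="replace"))  # 'lumi\ufffdre'
```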
Warning: if you see squares, it can also be a font problem! This happens when you are using a font that does not define some of the characters you are using.
A Mac:Central Europe text decoded as ANSI
Example: (found in the 2TGreek module conf file)
the individual Gšttingen editions that have appeared since 1935. ... (Stuttgart: WŸrttembergische Bibelanstalt, 1935; repr. in 9th ed., 1971).
This was converted with EditPad Lite and re-encoded as UTF-8:
the individual Göttingen editions that have appeared since 1935. ... (Stuttgart: Württembergische Bibelanstalt, 1935; repr. in 9th ed., 1971).
Using a font that does not include the characters you want
If you use accented Greek characters, for example, but do not use a Unicode font that supports precomposed Greek characters (like @Arial Unicode MS), accented characters will appear as empty rectangles in your text. In BibleCS you can change the font in Options/Preferences/Display.
Normalization
For module making, it is strongly recommended that Unicode source files encoded as UTF-8 be normalized to NFC. Combining characters are permitted in source text, e.g. for diacriticals where there is no precomposed character in the Unicode Standard.[1]
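What NFC does can be seen with Python's standard `unicodedata` module: a base letter plus a combining mark is folded into the precomposed character when one exists.

```python
import unicodedata

# 'e' + combining acute accent vs the precomposed 'é':
decomposed = "e\u0301"                           # two code points
composed = unicodedata.normalize("NFC", decomposed)
print(len(decomposed), len(composed))            # 2 1
print(composed == "\u00e9")                      # True: precomposed é
```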
Searching in modules
The simple rule is that if a search request and the indexed text are not normalized the same way, there will be no hit.
Modules are prepared for search using StripFilter mechanisms. Front-ends should be sure to call SWModule::StripText() on the user input before passing it to the search method, to make sure both are normalized the same way.
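The principle can be sketched in Python with a hypothetical helper (`normalize_for_search` is purely illustrative; it is not the SWORD API, which does this via SWModule::StripText()):

```python
import unicodedata

def normalize_for_search(s):
    # Hypothetical helper: the indexed text and the query must go
    # through the same normalization, or searches silently miss.
    return unicodedata.normalize("NFC", s.casefold())

indexed = "e\u0301glise"   # decomposed (NFD-style) source text
query = "\u00e9glise"      # the user typed a precomposed é
print(query in indexed)    # False: the code points differ, no hit
print(normalize_for_search(query) in normalize_for_search(indexed))  # True
```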
Regarding rendering, a front-end should not assume that the module is encoded in a way that works for it. In our experiments, NFC was the best across the widest variety of front-ends, but no single form was best for every script, font or display engine. It would be best for each front-end to normalize the text before display; this normalization could differ from the one used for search.
The situation is made even more complicated by changes in the Unicode standard with respect to normalization. How the characters of some languages normalize to NFC has changed in later Unicode versions. One example is the Myanmar block of the BMP. This has implications for existing and requested modules for Bible translations that use the Burmese script, i.e. for the Judson 1835 translation and for the S'gaw Karen translation.[2][3]
Notes:
- ↑ Be aware, however, that many fonts in many operating systems lack such ability, and that many of our front-ends use rendering widgets which could easily barf on such functionality. Xiphos and BibleTime ought to be OK from a renderer's perspective, provided a good system font can be found, but it may be that BibleDesktop would suffer. BibleCS probably has access to good fonts, but it uses very different technologies, so check that one carefully. BPBible, especially older versions, might have a tough time with advanced character sets. Eloquent is probably fine!
- ↑ The process of converting a string to NFC or NFD requires a stage called "canonical ordering", whereby codepoints are reordered in ascending order according to their canonical combining class [ccc].
- ↑ Unicode normalization can easily break Biblical Hebrew text. See page 9 of the SBL Hebrew Font User Manual.
End-of-line characters
Some SWORD utilities may barf when used on text files with Mac-style EOLs. Most Unicode text editors include a menu option to convert EOLs to Unix or Windows style.
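If your editor cannot do the conversion, a small script can. This sketch converts both classic Mac (CR) and Windows (CRLF) endings to Unix (LF), working on bytes so the file's encoding is untouched:

```python
# Convert Mac (CR) and Windows (CRLF) line endings to Unix (LF).
# CRLF must be replaced first, or each CRLF would become two LFs.
data = b"line one\rline two\r\nline three\n"
unix = data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
print(unix)   # b'line one\nline two\nline three\n'
```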
External links
- TECkit – a Text Encoding Conversion toolkit.
- As well as being able to convert between encodings, it can also be used to convert text from one script to another, for any language that can be represented by more than one script. Example: Some languages in Central Asia can be represented in either Latin or Cyrillic script.
- Alan Wood’s Unicode Resources – Unicode and Multilingual Support in HTML, Fonts, Web Browsers and Other Applications (including text editors)
- A survey of Unicode compression – by Doug Ewell.
- What's new in Unicode 6.1? & What's new in Unicode 6.2? – by Andrew West.
- Unicode Normalization Forms – Unicode Standard Annex #15
- Unicode 8.0.0 – released 2015-06-17