
Text encoding ASCII (Windows 1251, CP866, KOI8-R) and Unicode (UTF 8, 16, 32) - how to fix the problem with garbled characters

Today we'll talk about where garbled characters (in Russian slang, "krakozyabry") come from on websites and in programs, what text encodings exist and which ones should be used. Let's take a closer look at the history of their development, starting with basic ASCII and its extended versions CP866, KOI8-R and Windows 1251, and ending with the modern Unicode Consortium encodings UTF-16 and UTF-8. To some, this information may seem unnecessary, but you would be surprised how many questions I receive specifically about these creeping garbled characters (an unreadable jumble of symbols). Now I will have the opportunity to refer everyone to the text of this article and to find my own mistakes along the way. Well, get ready to absorb the information and try to follow the flow of the story.

ASCII - basic text encoding for the Latin alphabet

The development of text encodings went hand in hand with the formation of the IT industry, and over that time they managed to undergo quite a lot of changes. Historically, it all started with EBCDIC, which could encode letters of the Latin alphabet, Arabic numerals, punctuation marks and control characters. But the real starting point for the development of modern text encodings is the famous ASCII (American Standard Code for Information Interchange). It describes the 128 characters most commonly used by English-speaking users: Latin letters, Arabic numerals and punctuation marks. These 128 characters also include service characters such as brackets, hash marks, asterisks and so on. The 128 characters of the original ASCII version became the standard: you will find them in any other encoding, and they will appear there in the same order.

But one byte of information can encode not 128 but 256 different values (two to the power of eight equals 256), so after the basic version of ASCII a whole series of extended ASCII encodings appeared, in which, besides the 128 basic characters, characters of a national alphabet (for example, Russian) could also be encoded.

It is probably worth saying a little more here about the number systems used in these descriptions. First, as you all know, a computer works only with numbers in the binary system, that is, with zeros and ones ("Boolean algebra", if anyone took it at university or school). One byte consists of eight bits, each of which represents a power of two, from two to the zero up to two to the seventh. It is not hard to see that there can be only 256 possible combinations of zeros and ones in such a construction. Converting a number from binary to decimal is quite simple: you just add up the powers of two that have ones in their positions. Take the binary number 11101001 as an example: that is 1 (two to the power of zero) plus 8 (two to the power of three) plus 32 (two to the fifth power) plus 64 (two to the sixth) plus 128 (two to the seventh), which gives 233 in decimal notation. As you can see, everything is very simple.

But if you look closely at a table of ASCII characters, you will see that the codes are written in hexadecimal notation. For example, the asterisk corresponds to the hexadecimal number 2A in ASCII. You probably know that the hexadecimal number system uses, in addition to the Arabic numerals, the Latin letters from A (meaning ten) to F (meaning fifteen). Converting a binary number to hexadecimal is done with a simple method: each byte of information is split into two halves of four bits. Each half-byte can encode only sixteen values (two to the fourth power), so it can easily be written as a single hexadecimal digit; within each half the bit positions are counted from zero again. As a result, our number 11101001 splits into 1110 and 1001 and is written as E9 in hexadecimal.
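To make this arithmetic concrete, here is a minimal Java sketch (Java simply because this is a Java blog; the class name is arbitrary) that performs the same binary, decimal and hexadecimal conversions:

public class NumberSystems {
    public static void main(String[] args) {
        int value = 0b11101001;                           // 1 + 8 + 32 + 64 + 128
        System.out.println(value);                        // 233
        System.out.println(Integer.toHexString(value));   // e9
        System.out.println(Integer.toBinaryString(0xE9)); // 11101001

        // The asterisk from the ASCII table: decimal 42, hexadecimal 2A
        System.out.println((int) '*');                    // 42
        System.out.println(Integer.toHexString('*'));     // 2a
    }
}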
I hope the course of my reasoning and the solution to this little puzzle were clear to you. Well, now let's continue and actually talk about text encodings.

Extended versions of ASCII - the CP866 and KOI8-R encodings with pseudographics

So, we started talking about ASCII, which was, so to speak, the starting point for the development of all modern encodings (Windows 1251, Unicode, UTF-8). Initially it contained only 128 characters: the Latin alphabet, Arabic numerals and a few others. But in the extended versions it became possible to use all 256 values that can be encoded in one byte of information, i.e. it became possible to add the letters of your own language to ASCII.

Here we need to digress once more to explain why text encodings are needed at all and why this is so important. The characters on your computer screen are formed on the basis of two things: sets of vector shapes (representations) of the various characters, which live in the font files installed on your computer, and a code that allows you to pull out of this set of vector shapes exactly the character that needs to be inserted in the right place. It is clear that the fonts are responsible for the vector shapes, while the operating system and the programs used in it are responsible for the encoding. That is, any text on your computer is a set of bytes, each of which encodes one single character of that very text. The program that displays this text on the screen (a text editor, a browser, etc.) reads the code of the next character while parsing the data and looks for the corresponding vector shape in the font file attached to display this text document. Everything is simple and banal.

This means that in order to encode any character we need (for example, from a national alphabet), two conditions must be met: the vector shape of this character must exist in the font used, and the character must be representable in an extended ASCII encoding in one byte. That is why a whole bunch of such options exist. Just for encoding Russian characters there are several varieties of extended ASCII. For example, CP866 appeared first: it could use characters of the Russian alphabet and was an extended version of ASCII. That is, its first half (codes whose first hexadecimal digit is 0 through 7) coincided completely with basic ASCII (128 Latin characters, digits and so on), while the second half (codes starting with 8 through F) allowed another 128 characters to be encoded: Russian letters and all sorts of pseudographics. For instance, the Cyrillic letter "М" in CP866 has the code 8C (the intersection of row 8 and column C of the hexadecimal table), which fits into one byte of information, and if a suitable font with Russian characters is available, this letter will appear in the text without any problems.

Where did such an abundance of pseudographics in CP866 come from? The point is that this encoding for Russian text was developed back in the days when graphical operating systems were not as widespread as they are now. In DOS and similar text-mode operating systems, pseudographics made it possible to at least somehow diversify the layout of texts, and therefore CP866 and all its peers from the category of extended ASCII versions abound in it.
CP866 was distributed by IBM, but besides it a number of other encodings were developed for Russian characters; for example, KOI8-R belongs to the same type (extended ASCII). The principle of its operation remains the same as that of CP866 described a little earlier: each character of text is encoded with one single byte, and the first half of the KOI8-R table coincides completely with basic ASCII. Among the peculiarities of the KOI8-R encoding it is worth noting that the Cyrillic letters in its table are not in alphabetical order, as they are in CP866. Instead, Russian letters are located in the same cells of the table as the corresponding letters of the Latin alphabet from the first part of the table. This was done for the convenience of switching from Russian to Latin characters by simply dropping one bit (two to the seventh power, i.e. 128).
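A small Java sketch makes the single-byte nature of these code pages tangible (it assumes the JDK at hand ships the IBM866, KOI8-R and windows-1251 charsets, which standard desktop JDKs do): one and the same byte stands for a different letter in each of them.

import java.nio.charset.Charset;

public class OneByteEncodings {
    public static void main(String[] args) {
        byte[] data = { (byte) 0xE9 };
        // The same single byte is a different Cyrillic letter in each code page:
        System.out.println(new String(data, Charset.forName("IBM866")));       // щ in CP866
        System.out.println(new String(data, Charset.forName("KOI8-R")));       // И in KOI8-R
        System.out.println(new String(data, Charset.forName("windows-1251"))); // й in Windows 1251
    }
}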

Windows 1251 - the modern version of ASCII and why garbled characters appear

The further development of text encodings was driven by the fact that graphical operating systems were gaining popularity and the need to use pseudographics in them gradually disappeared. As a result, a whole group of encodings arose that were, in essence, still extended versions of ASCII (one character of text is encoded with exactly one byte of information), but without the pseudographic symbols. They belong to the so-called ANSI encodings, developed by the American National Standards Institute. In common parlance, the name Cyrillic was also used for the variant with Russian language support. An example of this is Windows 1251. It compared favourably with the previously used CP866 and KOI8-R in that the place of the pseudographic symbols was taken by the missing symbols of Russian typography (except for the accent mark), as well as by characters used in Slavic languages close to Russian (Ukrainian, Belarusian, etc.).

Because of this abundance of Russian-language encodings, font and software manufacturers constantly had headaches, while you and I, dear readers, often ran into those notorious garbled characters when there was confusion about which version was used in a text. They appeared very often when sending and receiving e-mail messages, which led to the creation of very complex conversion tables that could not, in fact, solve the problem fundamentally; users often resorted to transliteration with Latin letters in their correspondence in order to avoid the notorious gibberish when using Russian encodings such as CP866, KOI8-R or Windows 1251.

In fact, the gibberish that appears instead of Russian text is the result of using the wrong encoding for the given language, one that does not match the encoding in which the text message was originally encoded. Say, if you try to display characters encoded with CP866 using the Windows 1251 code table, you will get exactly this gibberish (a meaningless set of characters), completely replacing the text of the message.

A similar situation very often arises when creating and setting up websites, forums or blogs, when text with Russian characters is mistakenly saved in an encoding other than the one used on the site by default, or in the wrong text editor, which adds garbage to the code that is invisible to the naked eye. In the end, many people got tired of this situation with its multitude of encodings and constantly creeping-out gibberish, and the prerequisites appeared for creating a new universal variation that would replace all the existing ones and finally solve the problem of unreadable texts. In addition, there was the problem of languages such as Chinese, which have far more characters than 256.
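Here is a minimal Java sketch of exactly this failure mode (the sample word is arbitrary; the same charset-availability assumption as above): the bytes of a string encoded as CP866 are decoded as if they were Windows 1251, producing the gibberish described, while decoding with the correct charset restores the text.

import java.nio.charset.Charset;

public class Mojibake {
    public static void main(String[] args) {
        Charset cp866   = Charset.forName("IBM866");
        Charset win1251 = Charset.forName("windows-1251");

        String original = "Привет";                 // the source file itself must be saved in a Unicode encoding
        byte[] encoded  = original.getBytes(cp866); // bytes as they would be written in CP866

        System.out.println(new String(encoded, win1251)); // garbled characters
        System.out.println(new String(encoded, cp866));   // Привет again
    }
}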

Unicode - universal encodings UTF 8, 16 and 32

These thousands of characters of the South-East Asian language group could not possibly be described in the single byte of information allocated for encoding characters in extended versions of ASCII. As a result, a consortium called Unicode (the Unicode Consortium) was created with the cooperation of many IT industry leaders (those who produce software, encode hardware and create fonts) who were interested in the emergence of a universal text encoding.

The first variation released under the auspices of the Unicode Consortium was UTF-32. The number in the encoding's name is the number of bits used to encode one character. 32 bits correspond to the 4 bytes of information needed to encode one single character in the new universal UTF encoding. As a result, the same file with text encoded in an extended version of ASCII and in UTF-32 will, in the latter case, be four times larger. That is bad, but now we can encode a number of characters equal to two to the thirty-second power (billions of characters, which covers any realistically needed value with a colossal margin).

But many countries with languages of the European group did not need anywhere near that number of characters; when using UTF-32 they got, for no good reason, a fourfold increase in the size of text documents and, as a result, in the volume of Internet traffic and stored data. That is a lot, and no one could afford such waste.

As Unicode developed further, UTF-16 appeared. It turned out to be so successful that it was adopted by default as the base space for all the characters we use. It uses two bytes to encode one character. Let's see what this looks like. In the Windows operating system you can follow the path "Start" - "Programs" - "Accessories" - "System Tools" - "Character Map". A table will open with the vector shapes of all the fonts installed on your system. If you select the Unicode character set in the "Advanced options", you will be able to see, for each font separately, the entire range of characters included in it. By the way, by clicking on any of them you can see its two-byte code in UTF-16 format, consisting of four hexadecimal digits.

How many characters can be encoded in UTF-16 with 16 bits? 65,536 (two to the power of sixteen), and it is this number that was adopted as the base space in Unicode. In addition, there is a way (surrogate pairs) to encode characters beyond it, which extends the total code space to a little over a million characters.

But even this successful version of the Unicode encoding did not bring much satisfaction to those who wrote, say, programs only in English, because after switching from an extended version of ASCII to UTF-16 the size of documents doubled (one byte per character in ASCII versus two bytes for the same character in UTF-16). It was precisely to satisfy everyone and everything that the Unicode Consortium decided to come up with a variable-length encoding. It was called UTF-8. Despite the eight in the name, it really does have a variable length: each character of text can be encoded into a sequence from one to four bytes long (the original design allowed up to six, but in practice, and in the current standard, UTF-8 never goes beyond four).
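In Java the char type is exactly a UTF-16 code unit, so the base space and the surrogate mechanism that extends it are easy to observe in code (a small sketch; the example characters are arbitrary):

public class Utf16Demo {
    public static void main(String[] args) {
        String basic = "\u0416";        // Cyrillic Ж, inside the 65,536-character base space
        String emoji = "\uD83D\uDE00";  // U+1F600, outside it, stored as a surrogate pair

        System.out.println(basic.length());                          // 1 code unit (2 bytes)
        System.out.println(emoji.length());                          // 2 code units (4 bytes)
        System.out.println(emoji.codePointCount(0, emoji.length())); // but only 1 actual character
        System.out.printf("%04X%n", (int) '\u0416');                 // 0416 - its code as four hex digits
    }
}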
All Latin characters in it are encoded into one byte, just as in the good old ASCII. What is remarkable is that, if only the Latin alphabet is encoded, even programs that do not understand Unicode will still read what is encoded in UTF-8. That is, the basic part of ASCII was simply carried over into this brainchild of the Unicode Consortium. Cyrillic characters in UTF-8 are encoded with two bytes and, for example, Georgian characters with three. Having created UTF-16 and UTF-8, the Unicode Consortium solved the main problem: now we have a single code space in our fonts, and font manufacturers can only fill it with vector shapes of text characters according to their strengths and capabilities. In the "Character Map" mentioned above you can see that different fonts support different numbers of characters; some Unicode-rich fonts can be quite heavy. But now they differ not in having been created for different encodings, but in how completely the font manufacturer has filled the single code space with particular vector shapes.
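The byte counts mentioned above are easy to verify from Java (a tiny sketch; the sample characters are just examples):

import java.nio.charset.StandardCharsets;

public class Utf8Lengths {
    public static void main(String[] args) {
        System.out.println("A".getBytes(StandardCharsets.UTF_8).length);            // 1 byte  - Latin
        System.out.println("\u0416".getBytes(StandardCharsets.UTF_8).length);       // 2 bytes - Cyrillic Ж
        System.out.println("\u10DA".getBytes(StandardCharsets.UTF_8).length);       // 3 bytes - Georgian ლ
        System.out.println("\uD83D\uDE00".getBytes(StandardCharsets.UTF_8).length); // 4 bytes - emoji U+1F600
    }
}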

Garbled characters instead of Russian letters - how to fix it

Let's now see how garbled characters appear instead of text or, in other words, how the correct encoding for Russian text is chosen. Actually, it is set in the program in which you create or edit that very text, or code containing text fragments. To edit and create text files I personally use what is, in my opinion, a very good HTML and PHP editor: Notepad++. It can highlight the syntax of hundreds of programming and markup languages and can also be extended with plugins.

In the top menu of Notepad++ there is an "Encodings" item, where you can convert an existing variant into the one used on your site by default. In the case of a site on Joomla 1.5 and higher, as well as a blog on WordPress, to avoid the appearance of garbled characters you should choose the option UTF-8 without BOM.

What is this BOM prefix? When the UTF-16 encoding was being developed, it was decided to attach to it the ability to write a character code both in direct byte order (for example, 0A15) and in reverse (150A). And in order for programs to understand in which order to read the codes, the BOM (Byte Order Mark, in other words a signature) was invented: a few extra bytes added to the very beginning of the document (for UTF-16 the mark takes two bytes; when a signature is written to a UTF-8 file it takes three). The UTF-8 encoding does not need a BOM at all, so adding the signature (those notorious extra bytes at the beginning of the document) simply prevents some programs from reading the code. Therefore, when saving files in UTF-8 we should always choose the option without BOM (without signature). This way you protect yourself in advance from garbled characters crawling out.

What is noteworthy is that some programs in Windows cannot do this (cannot save text in UTF-8 without a BOM), for example the notorious Windows Notepad. It saves the document in UTF-8 but still adds the signature (three extra bytes) to the beginning of it, and these bytes are always the same: EF BB BF. On servers this little thing can cause a problem: garbled characters come out. So under no circumstances use the regular Windows Notepad to edit documents on your site if you don't want gibberish to appear. I consider the already mentioned Notepad++ editor the best and simplest option; it has practically no drawbacks and consists only of advantages.

In Notepad++, when you select an encoding, you will also have the option of converting the text to the UCS-2 encoding, which is very close in nature to the Unicode standard. Notepad++ can also encode text in ANSI, which, for the Russian language, means Windows 1251, already described just above. Where does this information come from? It is written in the registry of your Windows operating system: which encoding to choose in the case of ANSI and which in the case of OEM (for the Russian language it will be CP866). If you set a different default language on your computer, these encodings will be replaced by the corresponding ones from the ANSI or OEM category for that language.
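If you have to deal with such files from code, here is a small Java sketch (the file name is hypothetical; Java 11+ is assumed) showing that the standard library writes UTF-8 without any signature, and how the EF BB BF signature can be detected and skipped when reading a file produced by an editor that adds it:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class BomDemo {
    public static void main(String[] args) throws IOException {
        Path file = Path.of("test.txt");   // hypothetical file name

        // Files.writeString produces plain UTF-8 with no signature (no BOM):
        Files.writeString(file, "Привет", StandardCharsets.UTF_8);

        byte[] bytes = Files.readAllBytes(file);
        boolean hasBom = bytes.length >= 3
                && (bytes[0] & 0xFF) == 0xEF
                && (bytes[1] & 0xFF) == 0xBB
                && (bytes[2] & 0xFF) == 0xBF;
        System.out.println("BOM present: " + hasBom); // false for the file we just wrote

        // If a signature is present, skip it before decoding so it does not end up in the text:
        int offset = hasBom ? 3 : 0;
        String text = new String(bytes, offset, bytes.length - offset, StandardCharsets.UTF_8);
        System.out.println(text);
    }
}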
After you save a document in Notepad++ in the encoding you need, or open a document from the site for editing, you will see the name of that encoding in the lower right corner of the editor. To avoid confusion, in addition to the steps described above, it is also useful to write information about this encoding into the source code header of every page of the site, so that there is no confusion on the server or on a local host. In general, all hypertext markup languages except HTML use a special xml declaration, which specifies the text encoding.
<?xml version="1.0" encoding="windows-1251"?>
Thanks to this declaration, before parsing the code the browser knows which version is being used and exactly how to interpret the character codes of that language. What is noteworthy is that if you save the document in the default Unicode, this xml declaration can be omitted (the encoding will be assumed to be UTF-8 if there is no BOM, or UTF-16 if there is one). In the case of an HTML document, the Meta element is used to specify the encoding; it is placed between the opening and closing Head tags:
<head>
...
<meta charset="utf-8">
...
</head>
This form differs noticeably from the one standardized in HTML 4.01, but it fully complies with the HTML 5 standard and will be understood correctly by any browser currently in use. In theory, the Meta element specifying the encoding of the HTML document is better placed as high as possible in the document header, so that by the time the text reaches the first character outside the basic ASCII range (which is always read correctly in any variation), the browser already has the information on how to interpret the codes of those characters.

Link to the original source: ASCII text encoding (Windows 1251, CP866, KOI8-R) and Unicode (UTF 8, 16, 32) - how to fix the problem with garbled characters