Digital computers are, at heart, electrical. They think in terms of electric charge or no electric charge: two states, on or off, up or down, hot or cold, whatever you want to call them. Humans are more complicated than this; we experience a whole range of states, from freezing to boiling, comatose to frantic, and, in math, we use ten digits rather than two.
Digital computers, recognizing only two states, charged or not charged, represent these states as zero or one (0 or 1). That's it; they're too limited to recognize anything else. The zeroes and ones that float around the computer are what we call 'bits', from 'binary digits'. So, how does a computer manage to talk intelligibly to us complicated humans? For instance, when a computer starts to count, it happens this way, with the left-hand number representing our decimal system, and the right-hand number the computer's binary system.
1=1, 2=10, 3=11, 4=100, 5=101, 6=110, 7=111, 8=1000, 9=1001, 10=1010, and so on.
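If you have Python handy, the built-in bin() function will show you the same progression (a quick sketch for checking the list above, not tied to any particular machine):

```python
# Print decimal numbers next to their binary form, matching the list above.
for n in range(1, 11):
    # bin(10) returns the string '0b1010'; slicing off '0b' leaves the digits
    print(n, "=", bin(n)[2:])
```

Running this prints 1=1, 2=10, 3=11, and so on up through 10=1010, just as listed.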
Now, this can quickly get out of hand. For example, decimal 100 is 1100100 to the computer, decimal 1,000 is 1111101000, and we're not even considering the alphabet yet. So, clever humans came up with the hexadecimal (base 16) system of notation. Harness four binary bits together, and you have a range of 0 thru 15, sixteen values in all, or exactly one hexadecimal digit. Go ahead, check it out: 0000 is decimal zero, and 1111 is decimal 15. Four bits weren't really sufficient to represent a whole lot, so computer engineers made the basic building block of the computer two sets of four bits, i.e. '0000 0000' thru '1111 1111'. I've separated the two halves for easy readability only.
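These claims are easy to verify in Python, where int() with a base of 2 parses a string of bits (again, just a sketch for checking the arithmetic):

```python
# Decimal 100 and 1,000 really do have the binary forms quoted above.
assert bin(100)[2:] == "1100100"
assert bin(1000)[2:] == "1111101000"

# Four bits span 0 thru 15 -- exactly one hexadecimal digit.
assert int("0000", 2) == 0
assert int("1111", 2) == 15

# Two four-bit halves make the basic building block: all eight bits on is 255.
assert int("11111111", 2) == 255
print("all checks pass")
```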
This gives us a range of 0 thru 255, which means that 256 different meanings can be attached to this basic building block, which we call, guess what, a 'byte'. A half-byte is sometimes called a 'nibble', and a group of bytes (often two or four) is called a 'word'. Of course, a computer word is not a real word as we know language, but bytes do form the basis for representing real words and real numbers as humans understand them.
Since there are only 26 capital and 26 small letters in the Latin alphabet, which is what we use for English, and 10 digits in the decimal counting system, 256 different combinations leave plenty of room for representing foreign words and special characters. Chinese programmers usually program in English, and the Japanese string two bytes together to represent their writing system. Graphics work differently, and are not dealt with in this article.
Now let's go back to the bits, so that we can see how these codes are made up. PCs usually use ASCII code, and mainframe computers use a code called EBCDIC, which I'll use as an example. Four bits represent sixteen states, as we've described before. We can arbitrarily attach the numbers 0 thru 9 and the letters A thru F to these states. Below is an example of what a nibble (half-byte) represents.
1111 is decimal 15, the last of our sixteen values, which we label 'F'. 1001 is decimal 9, which we label '9'. Now we must take a further step to make the final result of our string of bits intelligible. Suppose a byte contains the bit value '1111 1001', which, we've decided, is 'F9'. In EBCDIC, this code represents the decimal numeral 9; 'F0' thru 'F9' represent our ten integers. A byte containing binary value '1100 0001' represents EBCDIC 'C1' which, to the human waiting expectantly outside the computer, is translated as the letter A. EBCDIC 'C1' thru 'C9' represent letters A thru I, 'D1' thru 'D9' represent letters J thru R, and 'E2' thru 'E9' represent S thru Z. '81' thru '89' represent a thru i, and so on.
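Python's standard library happens to ship codecs for several EBCDIC code pages; cp037, one common US variant, matches the values above, so the mapping can be checked directly (other EBCDIC code pages differ in some details):

```python
# Encode a few characters with code page 037, one common EBCDIC variant,
# and show each resulting byte in hexadecimal.
for ch in "9AJSa":
    code = ch.encode("cp037")[0]       # EBCDIC uses one byte per character
    print(ch, "->", format(code, "02X"))
```

This prints 9 -> F9, A -> C1, J -> D1, S -> E2, and a -> 81, agreeing with the ranges described above.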
So, we have a progression from bits to bytes in order to represent the alphabet, numbers, and symbols that we expect from a modern computer. Like the dots and dashes of Morse code, the humble 0 and 1 can be ordered and organized into stories and articles, theses and dissertations, chatter and wisdom that we can all understand and relate to. Bits and bytes are the building blocks of computer information, and without them, our hard-working computers would do nothing but spew out an endless stream of zeroes and ones.