
Number Representations in Computer Hardware


Welcome to arguably the most fundamental component of computer design: how numbers are represented in hardware!
We all know that modern computers operate on binary numbers and are extremely efficient at doing so. But this was not always the case. What’s more, many tech giants today, including Microsoft, Nvidia, Intel, Arm, and Tesla, are revisiting how they encode numbers in hardware to squeeze out every last bit of performance.
As Shakespeare would have it, "that which we call a rose by any other name would smell just as sweet." But in the world of computers, the way we represent a number in hardware can mean the difference between a blazing fast machine and a costly $475 million bug – the Pentium FDIV bug, anyone?
But we are getting a little ahead of ourselves. In this article, we will take a look at how this all came to be.
From the early inception of binary numbers to the modern world of floating point, this seemingly simple concept can become quite complex. So, let’s start from the beginning…
When we first learn about numbers in grade school, we typically begin with natural numbers (1, 2, 3, 4, ...). Natural numbers are used in all sorts of day-to-day situations, from counting items to monetary transactions, and a multitude of ways in-between. Eventually, we learn about the concept of zero, and over time get introduced to more advanced concepts such as negative numbers, complex numbers, and algebraic variables.
The ability to perform computations on numbers expands their utility beyond just counting things. Simple transaction-based computations use addition and subtraction; multiplication and division can be leveraged to speed up basic arithmetic; and eventually, complex equations and algorithms can help solve unknowns.
Basic numbers and mathematics might be easy for a human to grasp, but how would a machine do all of this and potentially do it even faster than a human? This was precisely the question Gottfried Leibniz spent his life trying to answer back in the 1600s.
Leibniz (1646-1716) was a German polymath active in law, philosophy, mathematics, languages, science, and theology. In the field of mathematics, he is most famous for his independent invention of calculus alongside Isaac Newton. His invention of binary arithmetic and hexadecimal notation went unnoticed for centuries until it eventually laid the foundation for today’s world of digital computing and communication.
When he wasn't inventing calculus or engaged in his many intellectual endeavors, Leibniz was consumed with finding a way to perform computations quickly. He did not want to "waste" time performing "simple" operations such as addition and subtraction and was convinced that there must be a way to distill information into a very basic form for quick math.
A deeply religious man living in the Holy Roman Empire, Leibniz believed that numbers and math were divinely inspired and was determined to find a way to connect the two. In 1679, he developed a number system in a manuscript called "On the Binary Progression" to represent numbers using just 0s and 1s.
While he was able to represent numbers in a "simple" manner using binary notation, he found binary calculations to be "longer, albeit easier." Fast forward to the 20th century, and this would actually become the fundamental tenet for binary computers.
Technically speaking, Leibniz devised a way to represent any decimal number (that is, a base 10 number, which humans typically use) as a binary number (base 2), where each bit represents a power of two.
For example, the decimal number 5 can be represented in binary as 101, with the rightmost bit representing 2^0 (= 1), the middle bit representing 2^1 (= 2), and the leftmost bit representing 2^2 (= 4).
Using this formulation, you can represent any decimal number, as the table above shows. Furthermore, you can introduce a binary point (we can’t just call them decimal points now, can we?) and represent fractions.
Mathematically, this is akin to using negative exponent values. The decimal number 0.6875 can be represented in binary as 0.1011, with the rightmost bit representing 2^-4 (= 0.0625).
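To make the conversion concrete, here's a rough Python sketch of the idea (the function name is made up for this example, not taken from any library): it expands a number into binary digits for both the integer part and the fractional part.
```python
# Illustrative sketch (names made up for this example): expand a decimal
# value into powers of two, the way Leibniz's binary notation works.

def to_binary(value: float, frac_bits: int = 4) -> str:
    """Write a non-negative number as a binary string with a binary point."""
    int_part = int(value)
    frac_part = value - int_part

    # Integer part: Python's bin() does the repeated division by 2 for us.
    int_digits = bin(int_part)[2:]          # e.g. 5 -> "101"

    # Fractional part: repeatedly multiply by 2 and peel off the integer bit.
    frac_digits = ""
    for _ in range(frac_bits):
        frac_part *= 2
        bit = int(frac_part)
        frac_digits += str(bit)
        frac_part -= bit

    return int_digits + ("." + frac_digits if frac_digits else "")

print(to_binary(5, frac_bits=0))       # "101"
print(to_binary(0.6875))               # "0.1011" = 1/2 + 1/8 + 1/16
```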
Leibniz revisited binary numbers about 20 years later, in 1697, during a discussion with Duke Rudolph of Brunswick and Luneburg, who made the connection between binary numbers and the concept of creation ex nihilo, according to which all things were created from nothing by the one God.
Excited by this revelation (along with even more "proof" of the divine nature of numbers from Christian missionaries in China, who were learning about the binary nature of Yin and Yang), Leibniz spent the rest of his life working to convince the public of his discovery.
Although his theological connection never gained traction with the public, he did release many manuscripts on interesting phenomena related to using binary to represent natural numbers.
For example, Leibniz noted an interesting property of the geometric progression (e.g., 1, 2, 4, 8, 16, 32, ...) that was at the heart of binary numeration: the sum of any three consecutive terms is always divisible by 7, since 2^n + 2^(n+1) + 2^(n+2) = 7 × 2^n.
This, along with a multitude of "random" discoveries that Leibniz came across, helped convince him of the importance of binary representation, but it never actually took off as a way to do real math until the 20th century, when the digital revolution stumbled upon it.
During these years, Leibniz also explored other number formats, such as base 12 and base 16, in an effort to address the "longer, albeit easier" nature of binary. His hexadecimal notation was the first to use the letters a, b, c, d, e, and f to represent 10, 11, 12, 13, 14, and 15, a convention we still see in many applications today.
As a quick primer, our "natural" way of using numbers in everyday interactions uses base 10. This essentially means that we have 10 symbols (0, 1, 2, ..., 8, and 9), and once we run out of symbols, we reuse the symbols in the next "place" to keep counting. With this methodology, we can encode any arbitrary value using our set of predetermined symbols.
In the binary system, there are only two symbols: 0 and 1. Otherwise, the methodology is the same as in the decimal system: 0 is encoded as 0, 1 is encoded as 1, and then 2 is encoded as 10 (since we "ran out" of symbols). As Leibniz noted, this is technically very simple, but it results in more "digits" per number. Looking ahead to the invention of the transistor in the 20th century, though, the binary system naturally lends itself to the on/off nature of a switch.
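Here is a small, illustrative Python sketch of that "running out of symbols" idea, encoding a number in an arbitrary base using the digits 0-9 followed by a-f (the helper is hypothetical, written just for this example):
```python
# Sketch of positional encoding in an arbitrary base, using the digit
# symbols 0-9 followed by a-f as in Leibniz's hexadecimal.

SYMBOLS = "0123456789abcdef"

def encode(n: int, base: int) -> str:
    """Encode a non-negative integer in the given base (2 through 16)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, remainder = divmod(n, base)
        digits.append(SYMBOLS[remainder])   # reuse symbols in the next "place"
    return "".join(reversed(digits))

for value in (2, 10, 255):
    print(value, "->", encode(value, 2), "in binary,", encode(value, 16), "in hex")
# 2   -> 10 in binary, 2 in hex
# 10  -> 1010 in binary, a in hex
# 255 -> 11111111 in binary, ff in hex
```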
While the conversion of numbers between decimal and binary isn't too complex, performing binary computations (for a human) can get unwieldy and error-prone, given the many digits of the encoding format. An entire field at the intersection of math and computer science was created to better grasp the nature of computing with zeros and ones.
While Leibniz might have introduced the notion of binary numbers, George Boole (after whom Boolean algebra is named) formalized how computations can be performed using just 0s and 1s. Think of this as the "discovery" of long multiplication (for efficiency) after learning about repeated addition, allowing binary numbers to be generalized and scaled.
In 1847, Boole published a paper called "The Mathematical Analysis of Logic," describing how an ON-OFF approach can form the three most basic operations in digital logic: AND, OR, and NOT. Just these three operations provide a foundation for using binary to process information. Today, we find these three operators everywhere inside our digital machines, essentially forming the Arithmetic Logic Unit (ALU) in modern-day processors and many instructions of an Instruction Set Architecture (ISA).
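As a rough illustration of how far those three operators can go (this is just a sketch, not how any real ALU is laid out), here is a 1-bit full adder built in Python from nothing but AND, OR, and NOT:
```python
# Sketch: a 1-bit full adder out of AND, OR, and NOT alone, to illustrate
# how Boole's three operations underpin an ALU. Helper names are made up.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # XOR expressed with only AND/OR/NOT: (a AND NOT b) OR (NOT a AND b)
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    """Add three bits, returning (sum_bit, carry_out)."""
    sum_bit = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return sum_bit, carry_out

print(full_adder(1, 1, 0))  # (0, 1): 1 + 1     = binary 10
print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = binary 11
```
Chaining 8, 32, or 64 of these adders end to end gives the multi-bit adders found in real processors.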
While this is all great, binary numbers come with a fundamental limitation: how much information can they actually represent?
Let's explain this by example: if we have a single bit, representing 0 or 1, we can encode a total of two different things. That is, we can map the value "0" to one unique object and the value "1" to another. Increase the number of bits to 2, and we now have the combinations 00, 01, 10, and 11, or a total of 2^2 = 4 things that can be represented.
This pattern continues exponentially: if you have 8 bits (or a byte), you can represent up to 2^8 = 256 unique things. And of course, with 32 bits, you can represent up to 2^32 = 4,294,967,296 unique things.
What are these "things"? Well, in the field of numerics, it means you can "only" represent a little over 4 billion unique numbers with 32 bits. This limitation turns into a hardware problem, since the numbers themselves are infinite.
Thus, how do you go about representing an infinite set of numbers (including integers, fractions, negatives, and perhaps "special" numbers like infinity) efficiently in hardware? Herein lies the fundamental idea behind hardware number representations.
Numbers are infinite in nature. Mathematically speaking, this means it is impossible to represent in hardware every single number, from the largest magnitudes down to the smallest fractions. Thus, an essential question a processor designer needs to grapple with is, "Which numbers can/should the hardware support?"
From an information theory perspective, the closely related question of "How many numbers can be represented?" is tied to the number of bits available. This is a practical question for the designer to answer, one that was especially pressing in the early microprocessor days when resources were at a premium.
Going back to our example above: suppose you choose to represent numbers using 8 bits. That means you can represent up to 2^8 unique numbers, or 256 numbers. Which 256 numbers you choose to represent is a different question.
Furthermore, what do you do with the endpoints? Take the 8-bit example and map its patterns onto the fractions between 0 and 1: do you choose to represent 0 or 1? You don't have enough bit patterns for both! With 8 bits, you can represent up to 256 unique values, from 0000 0000 to 1111 1111. If you start mapping them at 0 (for 0000 0000) in steps of 1/256, then you can only go up to 255/256 = 0.99609375, and you have no spare representation left for the value "1"!
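A quick Python sketch makes the endpoint problem visible; the fixed-point mapping below (dividing each 8-bit code by 256) is just one illustrative choice, not the only possible scheme:
```python
# Sketch of the endpoint problem: spread all 256 patterns of an 8-bit
# field over the fractions between 0 and 1 by dividing each code by 256.

BITS = 8
COUNT = 2 ** BITS                 # 256 representable codes

values = [code / COUNT for code in range(COUNT)]

print(values[0])                  # 0.0        <- pattern 0000 0000
print(values[-1])                 # 0.99609375 <- pattern 1111 1111 (255/256)
print(1.0 in values)              # False: no pattern is left for exactly 1
```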
Another challenge is how to handle "weird" situations, such as division by zero. In hardware, do you want that to be represented as "infinity"? Or maybe reserve a bit pattern for "Not-a-Number" (NaN)? Which unique bit sequences do you set aside for these special cases?
(Figure: the red points are the "denormal" values in IEEE-754.)
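For the curious, here is a small Python sketch (standard library only; the helper name is made up for this example) that prints the raw 64-bit patterns IEEE-754 reserves for infinity, NaN, and the smallest subnormal ("denormal") value:
```python
# Sketch: peek at the raw bit patterns IEEE-754 double precision reserves
# for special cases. bits_of() is a made-up helper for this example.

import math
import struct

def bits_of(x: float) -> str:
    """Return the 64-bit pattern of a Python float (an IEEE-754 double)."""
    (raw,) = struct.unpack(">Q", struct.pack(">d", x))
    return format(raw, "064b")

for value in (math.inf, math.nan, 5e-324):    # 5e-324: smallest subnormal
    print(f"{value!r:>8} -> {bits_of(value)}")

# inf:    sign 0, exponent all 1s, fraction all 0s
# nan:    exponent all 1s, fraction non-zero
# 5e-324: exponent all 0s, fraction 000...001 (a subnormal, or "denormal")
```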
