The last article discussed how negative numbers are stored in computers. After reading it, you should have a basic understanding of the sign-magnitude, ones'-complement, and two's-complement representations.

Today, let's talk about why computers use two's complement to represent negative numbers.

First of all, note that sign-magnitude is the representation most convenient for people to read: given the sign-magnitude form of a number, we can work out its actual value from the sign bit and the remaining binary digits. For simplicity, I'll use a single 8-bit byte as an example:

```
//Sign-magnitude form of 1; the highest bit 0 marks a positive number
0000 0001
//Sign-magnitude form of -1; the highest bit 1 marks a negative number
1000 0001
```

As you can see, the sign-magnitude forms of 1 and -1 differ only in the sign bit. Now consider a question: what is 1 - 1?

Of course, we can compute it directly by subtraction and get 1 - 1 = 0. However, subtraction may require borrowing when a digit is too small, which is comparatively troublesome. Let's change our thinking: mathematically, 1 - 1 is equivalent to 1 + (-1). Converting subtraction into addition is much simpler, since we only need to handle carries. (In fact, a computer has only an adder and no subtractor, so subtraction is performed by the adder anyway.)
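To experiment with the bit patterns in this article, here is a minimal Python sketch of the 8-bit sign-magnitude encoding described above (the helper name `sign_magnitude` is my own, for illustration):

```python
def sign_magnitude(n: int) -> str:
    """Return the 8-bit sign-magnitude pattern of n, for |n| <= 127."""
    sign = '1' if n < 0 else '0'
    # sign bit followed by the 7-bit magnitude
    return sign + format(abs(n), '07b')

print(sign_magnitude(1))   # 00000001
print(sign_magnitude(-1))  # 10000001
```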

So, let's see what adding the sign-magnitude forms of 1 and -1 gives (letting the sign bits also participate in the operation):

```
  0000 0001
+ 1000 0001
-----------
  1000 0010
```

The result is -2, which is obviously not what we expect.
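The failure can be checked directly: treating the two sign-magnitude patterns as raw bit strings and adding them produces the pattern for -2. A quick Python check, assuming 8-bit patterns:

```python
# Add the raw 8-bit sign-magnitude patterns of +1 and -1
a = 0b00000001          # sign-magnitude +1
b = 0b10000001          # sign-magnitude -1
s = (a + b) & 0xFF      # keep only 8 bits
print(format(s, '08b'))  # 10000010, the sign-magnitude pattern of -2
```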

To solve this problem with sign-magnitude subtraction, ones' complement was introduced. Let's redo the calculation using ones' complement.

```
//Ones' complement of 1, same as its sign-magnitude form
0000 0001
//Ones' complement of -1: sign bit unchanged, all other bits inverted
1111 1110
```

Adding them gives 1111 1111, which is a ones'-complement pattern; converting it back to sign-magnitude yields 1000 0000, i.e. -0.
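This ones'-complement arithmetic can be sketched in Python as well (the helper name `ones_complement` is my own, not a library API):

```python
def ones_complement(n: int) -> int:
    """8-bit ones'-complement pattern of n, for -127 <= n <= 127."""
    # non-negative numbers keep their pattern; negatives invert all bits
    return n & 0xFF if n >= 0 else (~(-n)) & 0xFF

s = (ones_complement(1) + ones_complement(-1)) & 0xFF
print(format(s, '08b'))  # 11111111, the ones'-complement pattern of -0
```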

But there is another problem. In mathematics, zero is just zero; there is no +0 or -0. Yet by the sign-magnitude rules, the pattern of +0 is 0000 0000 and the pattern of -0 is 1000 0000. Here lies the problem: when a calculation involves 0, should we use +0 or -0? This ambiguity is why two's complement appeared: it solves the sign problem of 0.

```
//Two's complement of 1, same as its sign-magnitude form
0000 0001
//Two's complement of -1: ones' complement plus 1
1111 1111
```

Adding them gives 1 0000 0000. The carry out of the highest bit exceeds 8 bits and is simply discarded, leaving 0000 0000. This is a two's-complement pattern, and its sign-magnitude form is also 0000 0000. Isn't that exactly 0?

In this way, using the single pattern 0000 0000 for 0 resolves the +0/-0 ambiguity of sign-magnitude and unifies the binary representation of zero.
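The same carry-discarding addition is easy to reproduce in Python, where masking with `0xFF` plays the role of the hardware dropping the ninth bit (a sketch, with my own helper name):

```python
def twos_complement(n: int) -> int:
    """8-bit two's-complement pattern of n; in Python this is just n mod 256."""
    return n & 0xFF

# 1 + (-1): the carry out of bit 7 is discarded by the mask
s = (twos_complement(1) + twos_complement(-1)) & 0xFF
print(format(s, '08b'))  # 00000000, a single unambiguous zero
```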

So, there's another question: where did -0 go? In fact, the pattern 1000 0000 is reused to represent -128. Note, however, that 1000 0000 is the two's-complement pattern of -128; in 8 bits, -128 has no sign-magnitude or ones'-complement form at all.

So why use 1000 0000 for -128?

First look at the sign-magnitude, ones'-complement, and two's-complement forms of -127:

Sign-magnitude: 1111 1111

Ones' complement: 1000 0000

Two's complement: 1000 0001

We know that -127 - 1 = -128 in mathematics, so -127's two's-complement pattern minus 1 should also equal -128's two's-complement pattern, that is:

1000 0001 - 1 = 1000 0000. So 1000 0000 is the two's-complement pattern of -128.
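This step can be verified with the same 8-bit masking trick (a sketch, assuming the `& 0xFF` mask stands in for 8-bit hardware):

```python
comp_neg127 = (-127) & 0xFF            # 10000001, two's complement of -127
comp_neg128 = (comp_neg127 - 1) & 0xFF
print(format(comp_neg128, '08b'))      # 10000000
print(comp_neg128 - 256)               # -128: decoding the pattern back
```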

In an 8-bit byte, sign-magnitude can only represent values from 1111 1111 to 0111 1111, that is, -127 to 127. Two's complement, however, can represent -128 to 127, which is exactly 2^8 = 256 numbers.

Therefore, the old -0 pattern is freed up to represent one extra minimum number. In 8-bit binary it is 1000 0000 (-128); in 32 bits it is 1000 0000 0000 0000 0000 0000 0000 0000, the minimum value of an int. (32-bit values range from -2^31 to 2^31 - 1.)
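The 32-bit case can be checked by reinterpreting the raw bit pattern as a signed integer, for example with Python's standard `struct` module:

```python
import struct

# Pack the pattern 0x80000000 as an unsigned 32-bit int,
# then unpack the same bytes as a signed 32-bit int.
(n,) = struct.unpack('<i', struct.pack('<I', 0x80000000))
print(n)  # -2147483648, i.e. -2**31, the minimum 32-bit int
```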

Conclusion: two's complement solves the sign problem of 0 and unifies addition and subtraction in the computer's hardware.