To start with, I'm going to ignore negative numbers. I'll get to those later, but for now, for simplicity's sake, I'm going to assume the first number will always be larger than the second, so the answer will always be positive.

Let's start again with 1-bit numbers. 1-1=0, 1-0=1, 0-0=0, so we have this truth table:

X | Y | Z
---|---|---
0 | 0 | 0
0 | 1 | ?
1 | 0 | 1
1 | 1 | 0

The question mark denotes a value that can be either 1 or 0. We don't care about that particular value, because 0-1 would be negative, and we're ignoring negatives for now. Because of that free value, there are two functions that satisfy this truth table: X XOR Y, and X AND ¬Y.

Extending this to multiple digits is similar to extending addition to multiple bits, but instead of sometimes needing to carry, we'll sometimes need to borrow. The difference is that where a carry adds one to the next digit to the left, a borrow subtracts one from it, and you borrow in cases where the result would be less than 0, instead of greater than 1.

X | Y | B_{N-1} | Z | B_{N}
---|---|---|---|---
0 | 0 | 0 | 0 | 0
0 | 0 | 1 | 1 | 1
0 | 1 | 0 | 1 | 1
0 | 1 | 1 | 0 | 1
1 | 0 | 0 | 1 | 0
1 | 0 | 1 | 0 | 0
1 | 1 | 0 | 0 | 0
1 | 1 | 1 | 1 | 1

B_{N-1} indicates whether a borrow was required for the digit to the right of the current one, and B_{N} indicates whether the current digit requires a borrow. For the rightmost digit, B_{N-1} is 0.

So, Z = X XOR Y XOR B_{N-1}, and B_{N} = (¬X AND Y) OR (¬X AND B_{N-1}) OR (Y AND B_{N-1}).
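Those two equations are all you need to subtract bit by bit. Here's a minimal Python sketch (the function name is my own) that applies them from the rightmost digit leftward, assuming the first number is at least as large as the second so the final borrow is 0:

```python
def subtract(x_bits, y_bits):
    """Subtract two equal-length bit lists (most significant bit first),
    using Z = X XOR Y XOR B_{N-1} and the borrow equation above.
    Assumes x >= y, so the final borrow out is 0."""
    z_bits = []
    borrow = 0  # B_{N-1} is 0 for the rightmost digit
    for x, y in zip(reversed(x_bits), reversed(y_bits)):  # rightmost first
        z = x ^ y ^ borrow
        borrow = ((1 - x) & y) | ((1 - x) & borrow) | (y & borrow)
        z_bits.append(z)
    return list(reversed(z_bits))

# 13 - 6 = 7: 1101 - 0110 = 0111
print(subtract([1, 1, 0, 1], [0, 1, 1, 0]))  # [0, 1, 1, 1]
```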

Now, what about negative numbers? For that matter, how do we even represent negative numbers? Computers only have 1 and 0, there's no -.

Well, there are a number of ways you could do that. You could for example designate one bit to be a sign bit, where one value indicates positive, and the other negative. Normally, the sign bit is the leftmost digit, and normally 0 indicates positive. So, 0101 would be 5, while 1101 would be -5.

This particular method has the disadvantage of wasting a bit pattern, because 0000 and 1000 represent 0 and -0, which are equal. It also requires different algorithms for adding and subtracting negative numbers.
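As a quick sketch of this sign-bit scheme (the function name is mine), encoding a number is just setting the leftmost bit from the sign and storing the magnitude in the rest:

```python
def sign_bit_encode(n, bits=4):
    """Encode n with a sign bit: leftmost bit is 1 for negative,
    the remaining bits hold |n| in ordinary binary."""
    sign = 1 if n < 0 else 0
    magnitude = abs(n)
    assert magnitude < (1 << (bits - 1)), "magnitude doesn't fit"
    return format((sign << (bits - 1)) | magnitude, f"0{bits}b")

print(sign_bit_encode(5))   # '0101'
print(sign_bit_encode(-5))  # '1101'
```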

Wouldn't it be nice if there were a representation of negative numbers that allowed the use of the same algorithms for both positive and negative numbers? Well, there is one: it's called two's complement.

It's not very intuitive, but it's very convenient for doing these bitwise functions. Like the method I just described, the leftmost digit is a sign bit, and 0 represents positive. But instead of the rest of the bits being the same as the positive representation, you flip all the bits, and then add 1.

0101 would still be 5. To get the bits for -5, first start by replacing every 1 with a 0, and every 0 with a 1, so we get 1010. Then add 1, so we get 1011. If the rightmost digit is 1, it carries the way you would expect, so -4 ⇒ 0100 ⇒ 1011 ⇒ 1100.
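The flip-and-add-one procedure can be sketched in Python like so (the function name is my own invention):

```python
def twos_complement(bits):
    """Negate an N-bit number: flip every bit, then add 1,
    rippling the carry from the rightmost digit leftward."""
    result = [1 - b for b in bits]  # flip every bit
    carry = 1                       # then add 1
    for i in range(len(result) - 1, -1, -1):
        result[i], carry = result[i] ^ carry, result[i] & carry
    return result

print(twos_complement([0, 1, 0, 1]))  # 5  -> [1, 0, 1, 1], i.e. -5
print(twos_complement([0, 1, 0, 0]))  # 4  -> [1, 1, 0, 0], i.e. -4 (note the carry)
```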

Notice what happens when you try to represent -0 like before: 0000 ⇒ 1111 ⇒ 0000. So, there's only one possible representation of 0. 1000 would represent -8. In general, with N bits, the highest number you can represent is 2^{N-1} - 1, and the lowest is -2^{N-1}.

When you use two's complement, you can use exactly the same algorithm I described above to subtract with both positive and negative numbers. You can also add using the same algorithm I described in my previous post.
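To illustrate, here's a minimal Python sketch (names are my own) that runs the same borrow-based algorithm on 4-bit two's-complement patterns: 3 - 5 comes out as the bit pattern for -2, with no special handling of signs:

```python
def subtract_bits(x, y, n=4):
    """n-bit subtraction using the borrow equations from above.
    Works on two's-complement patterns; the final borrow out is discarded."""
    z, borrow = 0, 0
    for i in range(n):  # rightmost bit first
        xi, yi = (x >> i) & 1, (y >> i) & 1
        z |= (xi ^ yi ^ borrow) << i
        borrow = ((1 - xi) & yi) | ((1 - xi) & borrow) | (yi & borrow)
    return z

def to_signed(pattern, n=4):
    """Interpret an n-bit pattern as a two's-complement integer."""
    return pattern - (1 << n) if pattern & (1 << (n - 1)) else pattern

# 3 - 5: 0011 - 0101 = 1110, which is -2 in two's complement
print(to_signed(subtract_bits(0b0011, 0b0101)))  # -2
```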