## Thought this was cool: What are the most efficient algorithms to do basic mathematical operations (like addition, multiplication, etc) in a computer code?

The explanation below assumes you store your number in some kind of digit form; that is, in base 10 you would store 123 as [1, 2, 3].

Addition and subtraction are easy to implement. For addition you just add the digits at corresponding positions. Any time a digit sum reaches or exceeds the base, you subtract the base and carry 1 into the next higher position. Easy! Subtraction is a little trickier, since you need to implement borrowing, but it’s basically the same thing.
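
As a minimal sketch of the carrying step, here is digit-list addition in Python. The function name `add_digits` and the little-endian digit order (least significant digit first, which makes carrying simpler to write) are my own choices, not from the answer:

```python
def add_digits(a, b, base=10):
    """Add two numbers stored as little-endian digit lists.

    E.g. 123 is stored as [3, 2, 1]; the least significant digit comes first.
    """
    result = []
    carry = 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        # When the sum reaches the base, divmod subtracts the base out
        # and produces the carry for the next higher position.
        carry, digit = divmod(da + db + carry, base)
        result.append(digit)
    if carry:
        result.append(carry)
    return result

# 123 + 989 = 1112
add_digits([3, 2, 1], [9, 8, 9])  # → [2, 1, 1, 1]
```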

For multiplication, which algorithm you should use usually depends on the input sizes and on how fast your particular implementation is. The idea is that we can express an integer as a polynomial (b_n * x^n + b_{n-1} * x^(n-1) + … + b_0) where x = 2 (or whatever base you choose to represent your integer in), and then multiplying numbers becomes the same as multiplying polynomials. Multiplying polynomials, of course, takes us back to the convolution problem.
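
To make the polynomial view concrete, here is a sketch of schoolbook multiplication phrased explicitly as a convolution of digit lists followed by carry propagation. The name `mul_digits` and the little-endian layout are my own assumptions:

```python
def mul_digits(a, b, base=10):
    """Multiply two little-endian digit lists by convolving them.

    conv[k] = sum of a[i] * b[j] over all i + j == k, exactly as in
    polynomial multiplication; carries are fixed up afterwards.
    """
    conv = [0] * (len(a) + len(b) - 1)
    for i, da in enumerate(a):
        for j, db in enumerate(b):
            conv[i + j] += da * db
    # Propagate carries to bring every coefficient back under the base.
    result = []
    carry = 0
    for c in conv:
        carry, digit = divmod(c + carry, base)
        result.append(digit)
    while carry:
        carry, digit = divmod(carry, base)
        result.append(digit)
    return result

# 12 * 34 = 408
mul_digits([2, 1], [4, 3])  # → [8, 0, 4]
```

This direct convolution is O(n^2) in the number of digits, which is exactly what the faster algorithms below improve on.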

The Karatsuba algorithm was the first subquadratic algorithm for this problem. It’s pretty easy to understand and implement, so it is still worth a look today. However, it was eventually superseded by FFT-based methods. The idea is that convolution before an FFT becomes pointwise multiplication of corresponding terms afterwards. Multiplying a bunch of (small) numbers together is easy, so the real challenge is computing the FFT, and computing it accurately.
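
A minimal sketch of Karatsuba on Python integers, splitting at a power of two so the splits are cheap shifts (the function name and base case are my own choices):

```python
def karatsuba(x, y):
    """Multiply non-negative integers using three recursive
    multiplications instead of the four that naive splitting needs."""
    if x < 10 or y < 10:
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)   # x = hi_x * 2^m + lo_x
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)
    z0 = karatsuba(lo_x, lo_y)
    z2 = karatsuba(hi_x, hi_y)
    # The key trick: the middle term falls out of one product of sums.
    z1 = karatsuba(lo_x + hi_x, lo_y + hi_y) - z0 - z2
    return (z2 << (2 * m)) + (z1 << m) + z0
```

Replacing four half-size multiplications with three is what drops the complexity from O(n^2) to O(n^1.585).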

The most commonly used algorithm for large numbers is the Schönhage–Strassen algorithm, although Fürer’s algorithm is asymptotically faster. Both methods work by computing some form of FFT.
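
A toy illustration of the FFT idea, using a floating-point complex FFT over digit lists (production implementations like Schönhage–Strassen use exact number-theoretic transforms instead; the function names here are mine, and rounding the inverse transform back to integers is only safe for modest sizes):

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def multiply_fft(a, b, base=10):
    """Multiply little-endian digit lists via FFT-based convolution."""
    n = 1
    while n < len(a) + len(b):
        n *= 2
    fa = fft([complex(d) for d in a] + [0j] * (n - len(a)))
    fb = fft([complex(d) for d in b] + [0j] * (n - len(b)))
    # Convolution of digits becomes pointwise multiplication here.
    fc = [x * y for x, y in zip(fa, fb)]
    conv = [round((v / n).real) for v in fft(fc, invert=True)]
    # Carry propagation, exactly as in schoolbook multiplication.
    result, carry = [], 0
    for c in conv:
        carry, digit = divmod(c + carry, base)
        result.append(digit)
    while carry:
        carry, digit = divmod(carry, base)
        result.append(digit)
    while len(result) > 1 and result[-1] == 0:
        result.pop()  # strip high-order zeros
    return result
```

The three FFTs cost O(n log n), and the pointwise products in between are the “easy” small multiplications the answer mentions; accumulated floating-point error is the accuracy concern the answer alludes to.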

Now division is a little tricky. But luckily, division can be reduced to multiplication with the use of Barrett reduction and some cleverness. So, as it turns out, division has the same asymptotic complexity as multiplication.
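
A minimal sketch of Barrett reduction for computing x mod n: precompute an integer approximation of 1/n once, then each reduction needs only multiplications, shifts, and at most a couple of subtractions. Function names and the x < n² precondition are my framing:

```python
def barrett_setup(n):
    """Precompute the shift amount and scaled reciprocal for modulus n."""
    k = n.bit_length()
    return k, (1 << (2 * k)) // n  # m ≈ 2^(2k) / n, computed once

def barrett_mod(x, n, k, m):
    """Compute x mod n for 0 <= x < n*n without dividing by n."""
    q = (x * m) >> (2 * k)   # approximate quotient, a slight underestimate
    r = x - q * n
    while r >= n:            # at most a couple of correction steps
        r -= n
    return r

k, m = barrett_setup(97)
barrett_mod(5000, 97, k, m)  # → 53, i.e. 5000 % 97
```

The one true division happens in `barrett_setup`; after that, every reduction against the same modulus is multiplication-only, which is why division inherits multiplication's asymptotic cost.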

And exponentiation is best implemented using the standard exponentiation-by-squaring technique. The very last squaring you compute will dominate the runtime of this algorithm (assuming you’re not reducing by some modulus). While this algorithm is pretty efficient in terms of the size of your output, you should know that x^k has output size O(k log(x)).
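
A minimal sketch of exponentiation by squaring, reading the exponent's bits from least significant to most (the function name is mine):

```python
def power(x, k):
    """Compute x**k with O(log k) multiplications.

    Each loop iteration squares x; the result picks up a factor of x
    for every 1-bit in the binary expansion of k.
    """
    result = 1
    while k:
        if k & 1:
            result *= x
        x *= x
        k >>= 1
    return result

power(3, 13)  # → 1594323, i.e. 3**13
```

Only O(log k) multiplications happen, but since the operands double in size each squaring, the final squaring on numbers of size ~k log(x) dominates the runtime, as the answer notes.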

from Cosmin Negruseri on Quora: http://www.quora.com/Computer-Science/What-are-the-most-efficient-algorithms-to-do-basic-mathematical-operations-like-addition-multiplication-etc-in-a-computer-code/answer/Mark-Gordon-6
