# Ivars Peterson's MathTrek

February 12, 2007

### Divide-and-Conquer Multiplication

Most people know just one way to multiply two large numbers by hand. Typically, they learned it in elementary school. They're often surprised to find that there is a variety of ways to do multiplication, and each such algorithm has advantages and disadvantages. Moreover, grade-school multiplication can be far from the best method available in certain contexts.

Slight differences in the efficiency of multiplication algorithms can make a huge difference when calculators or computers do the work. Computers worldwide perform enormous numbers of multiplications each day. In most computers, each operation consumes mere microseconds, but multiplied by the number of computations performed, the differences in time taken can be significant. So, the general question of how quickly two n-bit numbers can be multiplied has not only great theoretical importance but also practical relevance.

Indeed, when it comes to multiplying two numbers, the best (or fastest) way to do it is often far from obvious.

One particularly intriguing and efficient multiplication algorithm was developed in the late 1950s by Anatolii Alexeevich Karatsuba, now at the Steklov Institute of Mathematics in Moscow.

Karatsuba's "divide-and-conquer" multiplication algorithm has its roots in a method that Carl Friedrich Gauss (1777–1855) introduced involving the multiplication of complex numbers.

A complex number is an expression of the form a + bi, where a and b are real numbers, and i has the property that i^2 = –1. The real number a is called the real part of the complex number, and the real number b is the imaginary part. When the imaginary part b is 0, the complex number is just the real number a.

Suppose that you want to multiply two complex numbers, a + bi and c + di. To do so, you use the following rule:

(a + bi)(c + di) = [ac – bd] + [ad + bc]i.

For example: (2 + 3i)(1 + 2i) = [2 – 6] + [4 + 3]i = (–4 + 7i).

Expressed in terms of a program, you would input a, b, c, and d and output ac – bd and ad + bc.

Computationally, multiplying two numbers is much more costly than adding them. Suppose then that multiplying two real numbers costs \$1 and adding them costs a penny. To obtain ac – bd and ad + bc requires four multiplications and two additions, for a total of \$4.02.

Is there a cheaper way to obtain the output from the input? The Gauss optimization algorithm offers an alternative approach. Here's how the computation can be done for \$3.05, with three multiplications and five additions.

x1 = a + b
x2 = c + d
x3 = x1 * x2 = ac + ad + bc + bd
x4 = a * c
x5 = b * d
x6 = x4 – x5 = ac – bd
x7 = x3 – x4 – x5 = ad + bc

So, Gauss optimization saves one multiplication out of four.
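The seven-step recipe above can be transcribed directly into Python. This is a bare sketch of the bookkeeping, not a tuned implementation; the function name is my own label for it:

```python
def gauss_complex_mult(a, b, c, d):
    """Compute (a + bi)(c + di) using 3 multiplications instead of 4."""
    x1 = a + b
    x2 = c + d
    x3 = x1 * x2        # ac + ad + bc + bd
    x4 = a * c
    x5 = b * d
    x6 = x4 - x5        # real part: ac - bd
    x7 = x3 - x4 - x5   # imaginary part: ad + bc
    return x6, x7
```

Running it on the example from the text, `gauss_complex_mult(2, 3, 1, 2)` returns `(-4, 7)`, matching (2 + 3i)(1 + 2i) = –4 + 7i.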

Karatsuba's divide-and-conquer multiplication algorithm takes advantage of this saving.

Consider a multiplication algorithm that parallels the way multiplication of complex numbers works. Roughly speaking, the idea is to divide a given multiplication problem into smaller subproblems, solve them recursively, then glue the subproblem answers together to obtain the answer to the larger problem.

Suppose that you want to compute 12345678 * 21394276.

Break each number into two 4-digit parts.

a = 1234, b = 5678, c = 2139, d = 4276

Find the products bd, bc, ad, and ac.

bd = 5678 * 4276
bc = 5678 * 2139
ad = 1234 * 4276
ac = 1234 * 2139

Break each of these numbers into 2-digit parts, and repeat the calculations with these parts. For example, break 1234 * 2139 into 12, 34, 21, and 39. Find the appropriate products.

12 * 21, 12 * 39, 34 * 21, 34 * 39

Repeat the same steps with these 2-digit numbers, breaking each one into 1-digit parts and finding the appropriate products. For example, 12 * 21 gives the multiplications 1 * 2, 1 * 1, 2 * 2, and 2 * 1.

In this divide-and-conquer algorithm, given two n-digit decimal numbers x and y, where x = a * 10^(n/2) + b and y = c * 10^(n/2) + d, you have xy = ac * 10^n + (ad + bc) * 10^(n/2) + bd.

So, for n = 2, ac = 1 * 2 = 2, ad = 1 * 1 = 1, bc = 2 * 2 = 4, bd = 2 * 1 = 2, you obtain:

12 * 21 = 2 * 10^2 + (1 + 4) * 10^1 + 2 = 252

Similarly, for 12 * 39, you get:

1 * 3 = 3, 1 * 9 = 9, 2 * 3 = 6, 2 * 9 = 18
3 * 10^2 + (9 + 6) * 10^1 + 18 = 468

You get 12 * 21 = 252, 12 * 39 = 468, 34 * 21 = 714, 34 * 39 = 1326.

This allows you to compute 1234 * 2139.

252 * 10^4 + (468 + 714) * 10^2 + 1326 = 2639526.

Similarly, you obtain:

1234 * 4276 = 5276584
5678 * 2139 = 12145242
5678 * 4276 = 24279128

Hence, 12345678 * 21394276
= 2639526 * 10^8 + (5276584 + 12145242) * 10^4 + 24279128
= 264126842539128.
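The whole divide-conquer-and-glue procedure can be sketched as a short recursive Python function. Note that this version still computes all four sub-products, so it saves no multiplications; it simply mechanizes the steps above. The function name and the assumption that n is a power of 2 are mine:

```python
def dc_mult(x, y, n):
    """Multiply two n-digit numbers by splitting each into halves
    and recursing on the four sub-products (n a power of 2)."""
    if n == 1:
        return x * y                    # single digits: multiply directly
    half = n // 2
    p = 10 ** half
    a, b = divmod(x, p)                 # x = a * 10^(n/2) + b
    c, d = divmod(y, p)                 # y = c * 10^(n/2) + d
    ac = dc_mult(a, c, half)
    ad = dc_mult(a, d, half)
    bc = dc_mult(b, c, half)
    bd = dc_mult(b, d, half)
    # glue: xy = ac * 10^n + (ad + bc) * 10^(n/2) + bd
    return ac * 10**n + (ad + bc) * p + bd
```

For instance, `dc_mult(12, 21, 2)` returns 252 and `dc_mult(12345678, 21394276, 8)` returns 264126842539128, reproducing the worked example.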

Karatsuba's insight was to apply Gauss optimization to this divide-conquer-and-glue approach, replacing one of the four multiplications at each stage with a few extra additions and subtractions. For large numbers, decimal or binary, Karatsuba's algorithm is remarkably efficient. It's the sort of thing that your computer might be doing behind the scenes to get an answer to you a split second faster.
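Here is a minimal sketch of that idea in Python, splitting decimal digits as in the example above (real implementations typically split binary representations, and the function name is my own). The Gauss trick appears in the single recursive call that yields ad + bc:

```python
def karatsuba(x, y):
    """Multiply x and y using 3 recursive multiplications per split."""
    if x < 10 or y < 10:
        return x * y                    # a single-digit factor: multiply directly
    half = max(len(str(x)), len(str(y))) // 2
    p = 10 ** half
    a, b = divmod(x, p)                 # x = a * 10^half + b
    c, d = divmod(y, p)                 # y = c * 10^half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # Gauss optimization: (a + b)(c + d) - ac - bd = ad + bc,
    # so one multiplication replaces two.
    ad_plus_bc = karatsuba(a + b, c + d) - ac - bd
    return ac * p * p + ad_plus_bc * p + bd
```

With three sub-multiplications instead of four at every level, the total work drops from about n^2 digit multiplications to about n^1.585 (log base 2 of 3), which is where the speedup over grade-school multiplication comes from.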

It's not the fastest known multiplication algorithm, but it's a significant improvement on grade-school multiplication. Indeed, properly implemented, Karatsuba multiplication beats grade-school multiplication even for 16-digit numbers and is way better for 32-digit numbers.

And we probably haven't heard the last word on multiplication. More innovations in computer arithmetic may yet lie ahead.

References:

Karatsuba, A.A. 1995. The complexity of computations. Proceedings of the Steklov Institute of Mathematics 211:169-183.

For additional information about Karatsuba multiplication, its efficiency, and various implementations, see http://mathworld.wolfram.com/KaratsubaMultiplication.html, http://www.cs.cmu.edu/~cburch/251/karat/, http://ozark.hendrix.edu/~burch/proj/karat/index.html, http://numbers.computation.free.fr/Constants/Algorithms/representation.html, http://en.wikipedia.org/wiki/Karatsuba_algorithm, and http://www.math.niu.edu/~rusin/known-math/99/karatsuba.