# Complex Multiplication

Complex numbers have been part of the C standard since C99. They were added because complex numbers are ubiquitous in mathematics and engineering. This is the case because the complex numbers are algebraically closed: even though i is just a solution of x² + 1 = 0, adjoining it to the real numbers allows one to express the solutions of every other polynomial, including those with complex coefficients. The addition has made C considerably more useful in a numerical context.

Simply by adding `#include <complex.h>`, you can use complex floating point types in C. They are as easy to use as the normal floating point types because the compiler in effect uses "operator overloading" to construct complex versions of all of the arithmetic operations. The resulting code looks like:

``````
/* Example of Complex Addition */
complex double cadd(complex double x, complex double y)
{
return x + y;
}

/* Example of Complex Multiplication */
complex double cmul(complex double x, complex double y)
{
return x * y;
}
``````

However, the above isn't the whole story. We can compile the code and see what is produced. The complex addition function is rather compact, being implemented as two ordinary floating point additions:

``````
addsd	%xmm2,%xmm0
addsd	%xmm3,%xmm1
retq
``````

Complex multiplication is a different story; here the result isn't quite so optimized:

``````
sub	$0x8,%rsp
callq	__muldc3
add	$0x8,%rsp
retq
``````

Instead of inlining the operation, the compiler calls a massive (700+ byte) function to implement complex multiplication. Why?

The obvious way to do the multiplication is as you would on pen and paper:

``````
(a+ib)(c+id) = ac - bd + i(ad + bc)
``````

The problem with the above is that floating point numbers are not the same as the real number system. Doing the above as written will give incorrect results in some cases.

Floating point numbers can represent several different kinds of value. The first, obviously, is an ordinary floating point number. The second is a number with reduced precision, called a "denormal" (or subnormal) number. The third is zero, of which there are two, positive and negative. The fourth is infinity, again positive and negative. Finally, a floating point number can be a "NaN", meaning "Not a Number". A NaN can be signed or not, and "signalling" or not.

NaNs are special. A signalling NaN raises an exception when used, transferring control to some sort of error handler. A quiet (non-signalling) NaN contaminates any other number it comes in contact with, as any arithmetic operation involving a NaN produces a NaN as the answer. In addition, it is possible (but rarely done) to hide user information within a NaN's spare bits. These abilities allow NaNs to denote missing data and the results of expressions which are either mathematically undefined or inexpressible on the real axis. It is possible to generate a NaN with simple arithmetic; some examples are `∞-∞=NaN` and `0×∞=NaN`.

Complex numbers are a little different. They have real and imaginary parts which are both floating point numbers, so any of the above possibilities can occur in each. The problem is that not all combinations have the mathematical meaning you might expect.

The first problem is with zero. Real floating point numbers have a signed zero: zero can be positive or negative. One reason for this is that some functions have different limits as you approach zero from the positive or negative direction along the real axis due to branch cuts. Another is that a negative zero gives every floating point number a multiplicative inverse: the inverse of positive infinity is positive zero, and the inverse of negative infinity is negative zero. This additional number improves the numerical behaviour of many formulae at their mathematical extremes.

The problem with translating the negative zero concept to complex numbers is that an infinite number of them would be required. One can approach zero along the real axis in only two ways, but in the complex plane any angle is possible. Thus, even though negative zeros may exist in the real and imaginary parts making up a complex number, it is not worth much effort to calculate them correctly, as their mathematical meaning is greatly diminished.

Similarly, there is a problem with infinity. In the real number case, there are two infinities, `+∞` and `-∞`. There are an infinite number in the complex case, depending on which way you are travelling in the complex plane. Unfortunately, this is again unrepresentable, so the C standard defines a complex number with either its real or imaginary part infinite to be a "complex infinity". This takes precedence over a NaN that may exist in the other component, so `∞+NaN×I` and `NaN+∞×I` are both infinite.

Thus only if both components of a complex number are NaN is it really "Not a Number". A number with only one NaN component could be an infinity, or simply only partially defined. This poses a problem for naive complex multiplication: a complex infinity containing a NaN would be wrongly converted into a complex NaN as the individual floating point multiplications and additions spread the NaN "contamination".

Due to the difficulty of getting the "simple" act of complex multiplication correct, a function implementing it explicitly is shown in Annex G.5.1.6 of the C standard:

``````
#include <math.h>
#include <complex.h>

double complex _Cmultd(double complex z, double complex w)
{
    #pragma STDC FP_CONTRACT OFF
    double a, b, c, d, ac, bd, ad, bc, x, y;
    a = creal(z); b = cimag(z);
    c = creal(w); d = cimag(w);
    ac = a * c; bd = b * d;
    ad = a * d; bc = b * c;
    x = ac - bd; y = ad + bc;

    if (isnan(x) && isnan(y))
    {
        /* Recover infinities that computed as NaN + iNaN.  */
        int recalc = 0;
        if (isinf(a) || isinf(b))
        {
            /* z is infinite.  "Box" the infinity and change NaNs
               in the other factor to 0.  */
            a = copysign(isinf(a) ? 1 : 0, a);
            b = copysign(isinf(b) ? 1 : 0, b);
            if (isnan(c)) c = copysign(0, c);
            if (isnan(d)) d = copysign(0, d);
            recalc = 1;
        }
        if (isinf(c) || isinf(d))
        {
            /* w is infinite.  "Box" the infinity and change NaNs
               in the other factor to 0.  */
            c = copysign(isinf(c) ? 1 : 0, c);
            d = copysign(isinf(d) ? 1 : 0, d);
            if (isnan(a)) a = copysign(0, a);
            if (isnan(b)) b = copysign(0, b);
            recalc = 1;
        }
        if (!recalc && (isinf(ac) || isinf(bd) || isinf(ad) || isinf(bc)))
        {
            /* Recover infinities from overflow by changing NaNs to 0.  */
            if (isnan(a)) a = copysign(0, a);
            if (isnan(b)) b = copysign(0, b);
            if (isnan(c)) c = copysign(0, c);
            if (isnan(d)) d = copysign(0, d);
            recalc = 1;
        }
        if (recalc)
        {
            x = INFINITY * (a * c - b * d);
            y = INFINITY * (a * d + b * c);
        }
    }

    return x + I * y;
}
``````

The above is extremely similar to the code within libgcc that gcc calls to implement the complex multiply operation (`__muldc3`). It first calculates the result of the multiplication in the naive way. If the result is a complex NaN, with both real and imaginary parts NaN, it recalculates more carefully; otherwise it assumes the answer is correct and returns it. Unfortunately, the above can lose precision in some cases where there is excess cancellation in the subtraction and addition forming the real and imaginary parts respectively. However, removing the cases where that matters is quite difficult to do efficiently in software. The fast path usually makes only one (failed) check against NaN, which keeps it quick.

So how are all the special cases handled by the above? The main idea of the algorithm is to "divide out" the infinity, converting a complex infinity into a pair of normal floating point numbers. Doing the arithmetic with normal numbers avoids triggering the bad cases, and the answer is multiplied by infinity at the end, restoring the complex infinity. If the finite part of the product is zero, then `0×∞=NaN` is produced instead. The first two if statements after the complex NaN check detect complex infinities and handle those cases.

The third if statement in the NaN handler is much more subtle. Note that the individual floating point multiplications making up the complex multiply can overflow, so we need to worry about infinities produced that way. It also looks possible to create an `∞-∞` type calculation that produces a NaN, and this can indeed occur. Fortunately, however, it cannot happen to both the real and imaginary parts simultaneously unless a complex infinity already exists within the original inputs. Thus that overflow case is already handled by the complex infinity checks in the first two if statements.

The third if statement actually only affects the case where one factor is partially NaN, i.e. its real or imaginary part is NaN, but not both. If the other factor does not overflow to infinity, the result is undefined in this case, so the original answer of a complex NaN was correct. However, a partial NaN multiplied by a large number that overflows to infinity will produce a complex infinity. (Think about it: no matter what value the NaN stands for, the product's norm overflows.) If you do not use partial NaNs then perhaps this check could be removed...

### Summary

Complex multiplication is much more complicated than it first appears due to the existence of NaNs within the floating point specification. The C standard specifies how it may be implemented with rather little overhead over the naive method. However, some users may not want any overhead at all, and would rather ignore the special cases triggered by complex infinities and NaNs. Such users can use compiler flags such as `-ffast-math` on gcc to remove the overhead on a per-source-file basis. This causes the multiplication to be inlined, and many more optimizations to be done.

It may be interesting to compare this to the case of integers. There are several different integer types defined by the C standard. Some of them, such as `int32_t`, specify exact bit-sizes. Others, such as `int`, do not, and simply require that the type be fast. It may be worth using a similar trick with floating point and complex floating point numbers.

Hypothetically, the current `float`, `double` and `long double`, together with their complex equivalents, could behave like `int`, `long` and `short`, where speed is most important. A new `stdfloat.h` could then add types such as `float_32`, `float_64` and `float_128` with exact (and perhaps slow) semantics. This would allow code to say explicitly where precision and exactness are required, and where they are not. Since most code doesn't care much about the last bit of numerical accuracy, this would be a speed win in the general case, while code that really does care could obtain the numerical behaviour it needs at perhaps a small cost in performance.

Barak A. Pearlmutter said...
The "right thing" would be to inline the unguarded portion of that code, and only call a subroutine when the isnan guard is true.
Michael Lehn said...
There is a typo in

(a+ib)(c+id) = ac - bd + i(bc + bd)

it should be

(a+ib)(c+id) = ac - bd + i(bc + ad)
Bob Kinley said...
Multiplication might have been easier to handle using polar coordinates,
i.e. if z1 = r1*exp(i*x1) and z2 = r2*exp(i*x2), then z1*z2 = r1*r2*exp(i*(x1+x2))