Linear Algebra and Fourier Series

Prakhar
Jun 20, 2021

You may be wondering from the title how these two different topics of math relate. If you are from an engineering background, the Fourier series is usually taught as just a tool: you are given formulas and told to find the coefficients of the different sines and cosines (or exponentials) a function is composed of. Here I will tell you a fact relating linear algebra and the Fourier series that will blow your mind (at least mine was blown away), so let us start.

We shall start with vectors and vector spaces.

If an n-dimensional vector space V has an orthonormal basis b = (b₁, b₂, …, bₙ), then any u ∈ V can be written as

u = c₁b₁ + c₂b₂ + … + cₙbₙ,

where the cᵢ = ⟨u, bᵢ⟩ are the relevant coefficients, i.e. the lengths of the projections of u onto the basis vectors bᵢ.

Now we know that the inner product (or dot product) of a vector w with a vector x is

⟨w, x⟩ = w₁x₁ + w₂x₂ + … + wₙxₙ,

and we know that for two orthonormal vectors w and x

⟨w, x⟩ = 0 and ⟨w, w⟩ = ⟨x, x⟩ = 1.
We have found a way to decompose a vector u into a sum of scaled basis vectors. Now let's try to generalize this idea to an infinite-dimensional vector space.
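Before generalizing, here is a minimal numerical sketch of this finite-dimensional picture in NumPy. The basis itself is arbitrary; I take the orthonormal columns of a QR factorization of a random matrix purely for illustration:

```python
import numpy as np

# Build an arbitrary orthonormal basis of R^3: the columns of Q from a QR
# factorization of a random matrix are orthonormal.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
basis = [Q[:, i] for i in range(3)]

u = np.array([1.0, 2.0, 3.0])

# The coefficients are just inner products: c_i = <u, b_i>.
coeffs = [np.dot(u, b_i) for b_i in basis]

# Summing the scaled basis vectors recovers u exactly.
u_rebuilt = sum(c * b_i for c, b_i in zip(coeffs, basis))
print(np.allclose(u, u_rebuilt))  # True
```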

It is easy to show that for a nonempty set X the set of functions {f ∣ f:X→R}

is a vector space under the operations

(f+g)(x):=f(x)+g(x) and (λf)(x):=λf(x).

The entire field of functional analysis concerns itself with vector spaces of functions. Take X = ℝ (or any infinite subset of it). Since the powers of x (x⁰, x¹, x², x³, etc.) are easily shown to be independent, we can always find an f, for example

f : X → ℝ, f(x) = x^(n+1),

which is linearly independent of all polynomials of lower degree. In other words, we can always increase n; there is no limit to the number of linearly independent functions in our vector space of functions. It follows that no finite collection of functions can span the whole space, and so the vector space of all functions is infinite-dimensional. That is not quite the same as talking about "components" or an "infinite number of components".
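As a quick check of that independence claim, here is a small NumPy sketch: sampling the monomials 1, x, …, xⁿ at n+1 distinct points gives a Vandermonde matrix, which has full rank exactly when no nontrivial combination of the monomials vanishes (the particular sample sizes are my own arbitrary choices):

```python
import numpy as np

# The monomials 1, x, ..., x^n sampled at n+1 distinct points form a
# Vandermonde matrix; full rank means the monomials are linearly independent.
for n in (2, 5, 10):
    x = np.linspace(0.0, 1.0, n + 1)             # n+1 distinct sample points
    V = np.vander(x, n + 1)                      # columns are x^n, ..., x^1, x^0
    print(n, np.linalg.matrix_rank(V) == n + 1)  # True for every n
```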

The dot product of finite-dimensional vectors generalizes to the inner product of functions; the sum of the discrete case is replaced by an integral (think of it as a Riemann sum):

⟨f, g⟩ = ∫ f(x) g(x) dx,

with the integral taken over one period, say [−π, π].

The functions sin x and cos x are orthogonal under this definition of the inner product, as their inner product is zero:

⟨sin x, cos x⟩ = ∫ sin(x) cos(x) dx = 0.

The orthogonality goes beyond the two functions sin(x) and cos(x) to an infinite list of sines and cosines. The list contains (1, cos(x), sin(x), cos(2x), sin(2x), cos(3x), …).

Every function in that list is orthogonal to every other function in the list.
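We can verify this claim numerically. Here is a sketch using SciPy's quad for the integrals over [−π, π]; restricting to the first five members of the list is just for brevity:

```python
import numpy as np
from scipy.integrate import quad

# The first few members of the list (1, cos x, sin x, cos 2x, sin 2x, ...).
family = {
    "1": lambda x: 1.0,
    "cos(x)": np.cos,
    "sin(x)": np.sin,
    "cos(2x)": lambda x: np.cos(2 * x),
    "sin(2x)": lambda x: np.sin(2 * x),
}

# <f, g> = integral of f(x) g(x) over [-pi, pi]; it vanishes for every
# pair of distinct functions in the family.
for name_f, f in family.items():
    for name_g, g in family.items():
        inner, _ = quad(lambda x: f(x) * g(x), -np.pi, np.pi)
        if name_f != name_g:
            assert abs(inner) < 1e-10
print("every distinct pair is orthogonal")
```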

We are ready for the climax now: watch how the Fourier series pops out of our setup.

The Fourier series of a function is its expansion into sines and cosines:

f(x) = a₀ + a₁ cos(x) + b₁ sin(x) + a₂ cos(2x) + b₂ sin(2x) + …

Now, as we know, all the sines and cosines are orthogonal to each other, and their squared vector lengths over [−π, π] are

⟨sin(nx), sin(nx)⟩ = ⟨cos(nx), cos(nx)⟩ = π for n ≥ 1.

For cos(0x) = 1,

⟨1, 1⟩ = ∫ 1 dx = 2π.

So the orthonormal basis functions (vectors in function space) are:

1/√(2π), cos(x)/√π, sin(x)/√π, cos(2x)/√π, sin(2x)/√π, …

Our function f(x) here is the sum of "projections" of the function onto the orthogonal (or orthonormal, in the case of the scaled basis functions) basis functions in the infinite-dimensional vector space of functions, exactly analogous to how a vector is the sum of its projections onto the orthonormal basis vectors in a finite-dimensional vector space!

We can find these Fourier coefficients by taking the inner product of each basis function with f; due to orthogonality, all other terms on the right disappear except the one containing the coefficient we are after:

a₀ = (1/2π) ∫ f(x) dx, aₙ = (1/π) ∫ f(x) cos(nx) dx, bₙ = (1/π) ∫ f(x) sin(nx) dx,

with all integrals over [−π, π]. This is exactly like taking the dot product (inner product) with each basis vector in the case of the finite vector u in the vector space V.
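To see the whole machinery in action, here is a sketch that computes the coefficients of a square wave by these inner products and compares them with the classical closed form bₙ = 4/(πn) for odd n. The square wave is my own choice of example; any integrable function would do:

```python
import numpy as np
from scipy.integrate import quad

# A square wave on [-pi, pi]: f(x) = -1 for x < 0, +1 for x >= 0.
f = lambda x: 1.0 if x >= 0 else -1.0

def a(n):
    # Cosine coefficients: a_0 = (1/2pi) <f, 1>, a_n = (1/pi) <f, cos(nx)>.
    val, _ = quad(lambda x: f(x) * np.cos(n * x), -np.pi, np.pi, points=[0.0])
    return val / (2 * np.pi if n == 0 else np.pi)

def b(n):
    # Sine coefficients: b_n = (1/pi) <f, sin(nx)>.
    val, _ = quad(lambda x: f(x) * np.sin(n * x), -np.pi, np.pi, points=[0.0])
    return val / np.pi

# The square wave is odd, so every a_n vanishes, and b_n = 4/(pi*n) for odd n.
for n in range(1, 6):
    expected = 4 / (np.pi * n) if n % 2 else 0.0
    print(n, round(a(n), 6), round(b(n), 6), round(expected, 6))
```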

REFERENCES

Introduction to Linear Algebra, by Gilbert Strang
