In this post, we will compute a classical determinant called the Vandermonde determinant. Though the computation of this determinant looks intractable at first, there turns out to be a beautiful formula for it, with a very neat proof. Somewhat surprisingly, the matrices involved and this result are related to the notion of Lagrange interpolating polynomials.
Main result
Take some coefficients $a_0, \dots, a_n$ (in $\mathbb{R}$ or $\mathbb{C}$, or more generally in any field). The associated Vandermonde matrix is the $(n+1) \times (n+1)$ matrix given by
$$V = V(a_0, \dots, a_n) = \begin{pmatrix} 1 & a_0 & a_0^2 & \cdots & a_0^n \\ 1 & a_1 & a_1^2 & \cdots & a_1^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & a_n & a_n^2 & \cdots & a_n^n \end{pmatrix}.$$
As we will see below, it naturally appears when we talk about polynomial interpolation, but let us just take it as a pretty object for now. There are plenty of things that we'd like to know about a matrix; one of them is its determinant. It is interesting to try and figure this out by hand first.
Use your favorite techniques to compute the determinant of $V$ when $n = 1$ and $n = 2$. Do it for $n = 3$ if you are extremely motivated.
Did I say interesting? I meant painful. One would expect that there might be a pattern for the determinant of such a matrix, but it seems quite hard to see when computing things by hand. Yet, there is indeed a very nice formula. The trick is that it is only nice when it is shown in a factorized form, whereas the usual techniques to compute determinants give it expanded. It is worth noting in passing that there is no technique that gives a factorized determinant in general, since such a technique would in particular allow one to compute eigenvalues (and thus solve polynomial equations) explicitly. Regardless, here is the formula.

Theorem I.1. If $a_0, \dots, a_n$ are elements of a field, then
$$\det V(a_0, \dots, a_n) = \prod_{0 \le i < j \le n} (a_j - a_i).$$
A few sanity checks are de rigueur here.
- If two of the coefficients $a_i$ and $a_j$ are equal, then the matrix has two equal rows, so it is not invertible, and the determinant is 0. On the other hand, the product has a factor $a_j - a_i = 0$, so it is 0 as well.
- If $n = 0$, the matrix is just the constant 1, so $\det V = 1$. On the other hand, the product is empty, so it is equal to 1. It would be fair to argue that this is just a convention though.
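These checks can also be run on a computer. Here is a quick numerical comparison in Python; the random coefficients and the snippet itself are my addition, not part of the argument:

```python
import numpy as np
from itertools import combinations

# Compare det V computed directly with the claimed product formula,
# for a random choice of coefficients a_0, ..., a_4 (so n = 4).
rng = np.random.default_rng(0)
a = rng.standard_normal(5)

V = np.vander(a, increasing=True)  # rows (1, a_i, a_i^2, ..., a_i^n)

det_direct = np.linalg.det(V)
det_formula = np.prod([a[j] - a[i] for i, j in combinations(range(5), 2)])

assert np.isclose(det_direct, det_formula)
```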
Proof
There are many proofs of the formula, one of them involving classical row reduction, but it is painful. The shortest and most pleasing one, in my opinion, uses polynomials. Consider indeed the matrix $V$, but call the last coefficient $X$ instead of $a_n$:
$$V(X) = \begin{pmatrix} 1 & a_0 & a_0^2 & \cdots & a_0^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & a_{n-1} & a_{n-1}^2 & \cdots & a_{n-1}^n \\ 1 & X & X^2 & \cdots & X^n \end{pmatrix}.$$
It changes absolutely nothing, but it changes everything! Indeed, we now see that the determinant is a function of $X$, but even more, that it is a polynomial in $X$. To see this, it suffices to expand along the last row. So let us call this polynomial $P(X) = \det V(X)$. Note the following points.
- $P$ is a polynomial of degree at most $n$.
- If $X = a_i$ for some $0 \le i \le n-1$, then the matrix has two equal rows, so its determinant is $0$, which means $P(a_i) = 0$.
- By expanding along the last row, we see that the coefficient of $X^n$ is the minor obtained by removing the last row and the last column; that is, it is the Vandermonde determinant of order $n$, namely $\det V(a_0, \dots, a_{n-1})$.
But this tells us everything about the polynomial! It has degree at most $n$ and roots $a_0, \dots, a_{n-1}$, so it can be written
$$P(X) = c \prod_{i=0}^{n-1} (X - a_i)$$
for some constant $c$. But expanding shows that $c$ is also the coefficient of $X^n$, so $c = \det V(a_0, \dots, a_{n-1})$. We conclude that
$$\det V(a_0, \dots, a_n) = P(a_n) = \det V(a_0, \dots, a_{n-1}) \prod_{i=0}^{n-1} (a_n - a_i),$$
and the formula follows by induction. Isn't this neat? The amount of computation that we had to do is essentially zero, unlike when trying “by hand”.
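For the skeptical reader, the key factorization in the proof can also be checked symbolically, say for $n = 3$. This is just a sketch using sympy, with variable names of my choosing:

```python
import sympy as sp

# Build V(X): three rows (1, a_i, a_i^2, a_i^3) and a last row (1, X, X^2, X^3).
X = sp.symbols('X')
a = sp.symbols('a0 a1 a2')
rows = [[ai**k for k in range(4)] for ai in a] + [[X**k for k in range(4)]]
P = sp.Matrix(rows).det()

# The proof claims P(X) = det V(a_0, a_1, a_2) * (X - a_0)(X - a_1)(X - a_2).
minor = sp.Matrix([[ai**k for k in range(3)] for ai in a]).det()
claimed = minor * (X - a[0]) * (X - a[1]) * (X - a[2])

print(sp.expand(P - claimed))  # prints 0
```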
Recover the value of the constant $c$ by considering the constant coefficient of the polynomial $P$ instead of the coefficient of $X^n$.
Application
On top of being a beautiful result, the formula of Theorem I.1 shows that the Vandermonde determinant is not zero when the coefficients $a_0, \dots, a_n$ are distinct. In particular, the matrix $V$ is then invertible. This means that the linear mapping that it corresponds to (in some given bases) is a bijection. Yet, if we think of the matrix as describing a map from $\mathbb{R}^{n+1}$ to $\mathbb{R}^{n+1}$ in its canonical basis, then there is nothing too exciting happening.
Let us instead think of polynomials. We all know that through two distinct points, there goes one and only one straight line (which is not vertical if the $x$-coordinates are different). More generally, through three points goes one and only one parabola (or a straight line), once again as long as the $x$-coordinates of these points are different. Even more generally, we would hope that if we take $n+1$ points $(a_0, b_0), \dots, (a_n, b_n)$ with all $a_i$ distinct, then there is a unique polynomial $P$ of degree $n$ (or less) that goes through all these points, that is
$$P(a_i) = b_i, \qquad 0 \le i \le n.$$
If we search for $P$ in the form $P(X) = c_0 + c_1 X + \cdots + c_n X^n$, then we want
$$c_0 + c_1 a_i + c_2 a_i^2 + \cdots + c_n a_i^n = b_i, \qquad 0 \le i \le n.$$
But this is a system of $n+1$ linear equations in the $n+1$ unknowns $c_0, \dots, c_n$, which, in matrix form, is exactly
$$\begin{pmatrix} 1 & a_0 & a_0^2 & \cdots & a_0^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & a_n & a_n^2 & \cdots & a_n^n \end{pmatrix} \begin{pmatrix} c_0 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} b_0 \\ \vdots \\ b_n \end{pmatrix},$$
that is,
$$V \begin{pmatrix} c_0 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} b_0 \\ \vdots \\ b_n \end{pmatrix}.$$
But since we know that $V$ is invertible, this means that this equation has a unique solution. This translates to the following result.
Assume that $a_0, \dots, a_n$ are all distinct. Then for any values $b_0, \dots, b_n$, there exists a unique polynomial $P$ of degree at most $n$ such that $P(a_i) = b_i$ for all $0 \le i \le n$. This polynomial is called the Lagrange interpolation polynomial at these points.
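Concretely, the coefficients of $P$ can be recovered by solving the Vandermonde system from above. Here is a minimal sketch in Python, with made-up nodes and values:

```python
import numpy as np

a = np.array([-1.0, 0.0, 1.0, 2.0])  # distinct nodes a_0, ..., a_3
b = np.array([3.0, 1.0, 2.0, -1.0])  # prescribed values b_0, ..., b_3

V = np.vander(a, increasing=True)    # the Vandermonde matrix of the nodes
c = np.linalg.solve(V, b)            # coefficients c_0, ..., c_3 of P

P = np.polynomial.Polynomial(c)      # coefficients in increasing powers
assert np.allclose(P(a), b)          # P(a_i) = b_i indeed holds
```

A word of warning: solving the Vandermonde system head-on like this is known to be numerically ill-conditioned when there are many nodes; in practice one rather uses the Lagrange or Newton form of the interpolating polynomial.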
Interpolation polynomials are used in a lot of different areas of math and applications, for example for approximating shapes, in prediction, signal processing, and so on. The Lagrange polynomials are the most basic ones, but suffer from some issues like Runge’s phenomenon.
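To get a feel for Runge's phenomenon, here is a small experiment of my own, using the classical example $f(x) = 1/(1 + 25x^2)$ interpolated at equispaced nodes on $[-1, 1]$:

```python
import numpy as np

def f(x):
    return 1.0 / (1.0 + 25.0 * x**2)

x_fine = np.linspace(-1.0, 1.0, 1001)

for n in (5, 10, 15, 20):
    nodes = np.linspace(-1.0, 1.0, n + 1)            # equispaced nodes
    c = np.linalg.solve(np.vander(nodes, increasing=True), f(nodes))
    P = np.polynomial.Polynomial(c)
    print(n, np.max(np.abs(P(x_fine) - f(x_fine))))  # max error grows with n
```

The maximum interpolation error gets worse as the degree increases, which is exactly the phenomenon in question.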
Of course, the previous result is purely theoretical and does not tell us what such a polynomial actually looks like. As it turns out, it is not too hard to write the interpolating polynomial explicitly. We summarize this in the following exercise.
Consider $n+1$ points $(a_0, b_0), \dots, (a_n, b_n)$ with the $a_i$ being all distinct.
- Argue that there exists at most one polynomial $P$ of degree at most $n$ such that $P(a_i) = b_i$ for all $0 \le i \le n$.
- Find this polynomial if all the $b_i$ are zero.
- Find this polynomial if $b_j = 1$ for some $j$ and $b_i = 0$ for all $i \neq j$.
- Finally, find this polynomial for general $b_0, \dots, b_n$.
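If you want to check your answers, a computer algebra system can build these interpolation polynomials directly. For instance with sympy, using sample points of my own choosing that match the third question with $j = 0$:

```python
import sympy as sp

x = sp.symbols('x')
points = [(0, 1), (1, 0), (2, 0)]  # nodes 0, 1, 2 with b_0 = 1 and b_1 = b_2 = 0
print(sp.expand(sp.interpolate(points, x)))  # x**2/2 - 3*x/2 + 1
```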