# Lagrange Interpolation

Lagrange interpolation is a method for constructing a polynomial function from a list of $$x, y$$ value pairs; the resulting polynomial passes through each point in that list. In this episode we'll see how to use Lagrange interpolation to find a polynomial approximation to another given function.

## How it Works

Let $$f_{orig}(x)$$ be our original function that we want to approximate with a polynomial using Lagrange interpolation.

We sample $$f_{orig}$$ on the interval of interest. The number of samples is $$s = n + 1$$, where $$n$$ is the degree of the interpolating polynomial. The samples can be spaced equally or unequally over $$x$$; duplicate $$x$$ values are not allowed. The result is a list of $$x, y$$ pairs $\begin{pmatrix} x_0 & y_0 \\ \vdots & \vdots \\ x_n & y_n \\ \end{pmatrix} .$

From the values $$x_i$$ we create the matrix $\mathbf{A} = \begin{pmatrix} x_0^n & x_0^{n-1} & \cdots & x_0 & 1 \\ x_1^n & x_1^{n-1} & \cdots & x_1 & 1 \\ \vdots & \vdots & & \vdots & \vdots \\ x_n^n & x_n^{n-1} & \cdots & x_n & 1 \\ \end{pmatrix} ,$ and from the values $$y_i$$ we create the vector $\mathbf{y} = \begin{pmatrix} y_0 \\ \vdots \\ y_n \\ \end{pmatrix} .$
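
For example, with $$n = 2$$ and the three samples $$(0, 0)$$, $$(1, 1)$$, $$(2, 4)$$ taken from $$f_{orig}(x) = x^2$$, this gives $\mathbf{A} = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 1 & 1 \\ 4 & 2 & 1 \\ \end{pmatrix} , \quad \mathbf{y} = \begin{pmatrix} 0 \\ 1 \\ 4 \\ \end{pmatrix} .$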

Now we have to solve the matrix equation $\mathbf{A}\boldsymbol{\alpha} = \mathbf{y} ,$ e.g. with a standard method such as Gaussian elimination or QR decomposition. The vector $$\boldsymbol{\alpha}$$ contains the unknown coefficients of our polynomial: $\boldsymbol{\alpha} = \begin{pmatrix} a_n \\ \vdots \\ a_0 \\ \end{pmatrix} .$
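
For the small example above, solving yields $\boldsymbol{\alpha} = \begin{pmatrix} 1 \\ 0 \\ 0 \\ \end{pmatrix} ,$ i.e. the interpolation recovers $$f_{orig}(x) = x^2$$ exactly, as expected for a function that is itself a polynomial of degree at most $$n$$.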

Thus the approximating function we are looking for is $p(x) = a_n x^n + a_{n-1}x^{n-1} + \cdots + a_1 x + a_0 .$
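
Once $$\boldsymbol{\alpha}$$ is known, $$p$$ still has to be evaluated at arbitrary positions. Here is a minimal sketch using Horner's scheme, assuming the coefficient ordering $$a_n, \ldots, a_0$$ from above; the helper `evaluate_polynomial` is not part of the episode's code:

```cpp
#include <Eigen/Dense>

// Evaluate p(x) = a_n x^n + ... + a_1 x + a_0 with Horner's scheme.
// alpha holds the coefficients in the order (a_n, ..., a_0).
double evaluate_polynomial(Eigen::VectorXd const &alpha, double const x)
{
    double result = 0.;
    for (int i = 0; i < alpha.size(); ++i)
        result = result * x + alpha(i);
    return result;
}
```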

## Warning

This interpolation method suffers from the problem that the matrix $$\mathbf{A}$$ (a Vandermonde matrix) may be ill-conditioned. This is then likely to lead to large errors in the numerical solution for $$\boldsymbol{\alpha}$$.
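
How badly a concrete $$\mathbf{A}$$ is conditioned can be checked numerically, for instance via the 2-norm condition number, the ratio of the largest to the smallest singular value. A sketch using Eigen's SVD; the helper `condition_number` is not part of the episode's code:

```cpp
#include <Eigen/Dense>
#include <Eigen/SVD>

// 2-norm condition number of A: sigma_max / sigma_min.
// A large value signals that solving A * alpha = y amplifies input errors.
double condition_number(Eigen::MatrixXd const &A)
{
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A);
    Eigen::VectorXd const sigma = svd.singularValues();
    return sigma(0) / sigma(sigma.size() - 1);
}
```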

Another issue arises with a growing number of samples: the polynomial function starts to oscillate between the sampled points (known as Runge's phenomenon). A possible countermeasure is to reduce the number of samples and not to use equally spaced points, but to (manually) tweak their positions until a satisfying solution is found.
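
A more systematic node placement than manual tweaking is to cluster the samples toward the interval ends, e.g. at Chebyshev nodes, which is known to damp these oscillations. A sketch of generating such nodes on $$[a, b]$$; the helper `chebyshev_nodes` is not part of the episode's code:

```cpp
#include <cmath>
#include <Eigen/Dense>

// Chebyshev nodes on [a, b]: s sample positions clustered toward the ends.
// x_i = (a + b)/2 + (b - a)/2 * cos((2i + 1) * pi / (2s)), i = 0, ..., s - 1
Eigen::VectorXd chebyshev_nodes(double const a, double const b, int const s)
{
    double const pi = std::acos(-1.);
    Eigen::VectorXd x(s);
    for (int i = 0; i < s; ++i)
        x(i) = 0.5 * (a + b) + 0.5 * (b - a) * std::cos((2 * i + 1) * pi / (2 * s));
    return x;
}
```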

## Code Example

In this example, the function we want to approximate is $f_{orig}(x) := \begin{cases} 2 - |x|, & x \in (-2, 2) \\ 0, & \text{otherwise} \\ \end{cases} .$ We take 13 equally spaced samples on the interval $$[-3, 3]$$. As can be seen when the result is plotted, the resulting polynomial oscillates heavily, especially near the boundaries of the interval.

```cpp
#include <cmath>
#include <functional>
#include <iostream>

#include <Eigen/Dense>
#include <Eigen/QR>

// Build the (s x s) Vandermonde matrix A from the sample positions x.
Eigen::MatrixXd compute_A(Eigen::VectorXd const &x)
{
    int const s = x.size();
    int const n = s - 1; // degree of the polynomial

    Eigen::MatrixXd A(s, s);

    // Column c holds x_i^(n - c); the last column is all ones (x_i^0).
    for (int c = 0; c < n; ++c)
        A.col(c) = x.array().pow(n - c).matrix();
    A.col(n) = Eigen::VectorXd::Constant(s, 1.);

    return A;
}

// Solve A * alpha = y for the polynomial coefficients alpha.
Eigen::VectorXd compute_coefficients(Eigen::MatrixXd const &data)
{
    Eigen::MatrixXd const A = compute_A(data.col(0));
    Eigen::VectorXd const y = data.col(1);
    return A.colPivHouseholderQr().solve(y); // alpha
}

// Sample f at equally spaced positions; returns a (samples x 2) matrix
// whose rows are the pairs (x_i, y_i).
Eigen::MatrixXd compute_samples(std::function<double(double)> const &f,
                                double const start, double const stepSize, int const samples)
{
    Eigen::MatrixXd data(samples, 2);
    for (int i = 0; i < samples; ++i) {
        double const x = i * stepSize + start;
        double const y = f(x);
        data.row(i) << x, y;
    }
    return data;
}

int main()
{
    auto const f_orig = [](double x) { return (-2. < x && x < 2.) ? 2. - std::abs(x) : 0.; };
    Eigen::MatrixXd const samples = compute_samples(f_orig, -3., .5, 13);
    std::cout << samples << std::endl;
    /* output (x_i, y_i):
        -3     0
      -2.5     0
        -2     0
      -1.5   0.5
        -1     1
      -0.5   1.5
         0     2
       0.5   1.5
         1     1
       1.5   0.5
         2     0
       2.5     0
         3     0
    */

    std::cout << compute_coefficients(samples) << std::endl;
    /* output (a_n ... a_0):
       0.00206937
      2.54083e-18
       -0.0479248
     -4.84711e-17
         0.405326
      7.71716e-16
         -1.55719
     -4.87826e-15
          2.80873
      8.23764e-15
         -2.61101
     -2.16536e-15
                2
    */
}
```