Weierstrass Approximation Theorem


Theorem

Let $f$ be a real function which is continuous on the closed interval $\Bbb I = \closedint a b$.


Then $f$ can be uniformly approximated on $\Bbb I$ by a polynomial function to any given degree of accuracy.


Complex Case

Let $\Bbb I = \closedint a b$ be a closed real interval.

Let $f: \Bbb I \to \C$ be a continuous complex function.

Let $\epsilon \in \R_{>0}$.


Then there exists a complex polynomial function $p : \Bbb I \to \C$ such that:

$\norm {p - f}_\infty < \epsilon$

where $\norm f_\infty$ denotes the supremum norm of $f$ on $\Bbb I$.


Proof 1

Let $f : \Bbb I = \closedint a b \to \R$ be a continuous function.

Define $x$ by rescaling the domain of $f$ to $\closedint 0 1$:

$\map x t := \map f {a + t \paren {b - a} } : \closedint 0 1 \to \R$

From now on we will work with $x : \closedint 0 1 \to \R$, which is also continuous, being the composition of $f$ with an affine mapping.
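As a numerical illustration of this rescaling (not part of the proof; the function $f$ and the endpoints are arbitrary choices for the example):

```python
import numpy as np

# Arbitrary example data: a continuous f on [a, b] = [1, 3].
a, b = 1.0, 3.0
f = np.cos

def x(t):
    """The rescaled function x : [0, 1] -> R, x(t) = f(a + t (b - a))."""
    return f(a + t * (b - a))

# Endpoints match: x(0) = f(a) and x(1) = f(b).
assert np.isclose(x(0.0), f(a)) and np.isclose(x(1.0), f(b))
```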

Let $n \in \N$.

For $t \in \closedint 0 1$ consider the Bernstein polynomial:

$\ds \map {B_n x} t = \sum_{k \mathop = 0}^n \map x {\frac k n} \binom n k t^k \paren {1 - t}^{n - k}$

For $t \in \closedint 0 1$, $0 \le k \le n$, let:

$\map {p_{n, k} } t := \dbinom n k t^k \paren {1 - t}^{n - k}$

By the binomial theorem:

$\ds \sum_{k \mathop = 0}^n \map {p_{n, k} } t = 1$
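The following sketch (an illustration only, using NumPy and SciPy) evaluates the basis polynomials $\map {p_{n, k} } t$ and checks the partition-of-unity identity above at a sample point:

```python
import numpy as np
from scipy.special import comb

def p_nk(n, k, t):
    """Bernstein basis polynomial p_{n,k}(t) = C(n,k) t^k (1-t)^(n-k)."""
    return comb(n, k) * t**k * (1 - t)**(n - k)

def bernstein(x, n, t):
    """B_n x(t) = sum_{k=0}^n x(k/n) p_{n,k}(t) for a function x on [0, 1]."""
    return sum(x(k / n) * p_nk(n, k, t) for k in range(n + 1))

# Binomial theorem: the basis polynomials sum to 1 at every t in [0, 1].
n, t = 10, 0.37
assert np.isclose(sum(p_nk(n, k, t) for k in range(n + 1)), 1.0)

# Equivalently, B_n reproduces the constant function 1 exactly.
assert np.isclose(bernstein(lambda s: 1.0, n, t), 1.0)
```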


Lemma 1

$\ds \sum_{k \mathop = 0}^n k \map {p_{n, k} } t = n t$

$\Box$


Lemma 2

$\ds \sum_{k \mathop = 0}^n \paren {k - n t}^2 \map {p_{n, k} } t = n t \paren {1 - t}$

$\Box$
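Both lemmas are easy to confirm numerically; a quick sanity check (illustration only; $n$ and $t$ are sample values):

```python
import numpy as np
from scipy.special import comb

n, t = 20, 0.3
k = np.arange(n + 1)
p = comb(n, k) * t**k * (1 - t)**(n - k)   # the basis polynomials p_{n,k}(t)

# Lemma 1: sum_k k p_{n,k}(t) = n t
assert np.isclose((k * p).sum(), n * t)

# Lemma 2: sum_k (k - n t)^2 p_{n,k}(t) = n t (1 - t)
assert np.isclose(((k - n * t) ** 2 * p).sum(), n * t * (1 - t))
```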


Now we construct the estimates.

\(\ds \sum_{k \mathop : \size {\frac k n \mathop - t} \ge \delta}^n \map {p_{n, k} } t\) \(\le\) \(\ds \sum_{k \mathop : \size {\frac k n \mathop - t} \ge \delta}^n \map {p_{n, k} } t \frac {\paren {k - n t}^2} {\delta^2 n^2}\) as $\dfrac {\paren {k - n t}^2} {\delta^2 n^2} \ge 1$
\(\ds \) \(\le\) \(\ds \frac 1 {n^2 \delta^2} \sum_{k \mathop = 0}^n \paren{k - n t}^2 \map {p_{n, k} } t\)
\(\ds \) \(=\) \(\ds \frac {t \paren {1 - t} } {n \delta^2}\) Lemma 2
\(\ds \) \(\le\) \(\ds \frac 1 {4 n \delta^2}\) $\forall t \in \closedint 0 1: \dfrac 1 4 \ge t \paren {1 - t}$

Here $\ds \sum_{k \mathop : \size {\frac k n \mathop - t} \ge \delta}^n$ denotes the summation over those values of $k \in \N$, $k \le n$, which satisfy the inequality $\size {\dfrac k n - t} \ge \delta$.
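A numerical check of this tail estimate (illustration only; $n$, $t$ and $\delta$ are arbitrary sample values):

```python
import numpy as np
from scipy.special import comb

n, t, delta = 50, 0.4, 0.1
k = np.arange(n + 1)
p = comb(n, k) * t**k * (1 - t)**(n - k)

tail = p[np.abs(k / n - t) >= delta].sum()  # sum over k with |k/n - t| >= delta
bound = 1 / (4 * n * delta**2)              # the bound 1 / (4 n delta^2) derived above
assert tail <= bound                        # here tail is about 0.19 and bound = 0.5
```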

For $\delta > 0$, let $\map {\omega_\delta} x$ denote the modulus of continuity of $x$:

$\ds \map {\omega_\delta} x := \sup_{s, t \mathop \in \closedint 0 1, \, \size {t - s} \mathop < \delta} \size {\map x s - \map x t}$

Then:

\(\ds \size {\map {B_n x} t - \map x t}\) \(=\) \(\ds \size {\map {B_n x} t - \map x t \sum_{k \mathop = 0}^n \map {p_{n, k} } t}\) as $\ds \sum_{k \mathop = 0}^n \map {p_{n, k} } t = 1$
\(\ds \) \(=\) \(\ds \size {\sum_{k \mathop = 0}^n \map x {\frac k n} \map {p_{n, k} } t - \map x t \sum_{k \mathop = 0}^n \map {p_{n, k} } t}\)
\(\ds \) \(\le\) \(\ds \sum_{k \mathop = 0}^n \size {\map x {\frac k n} - \map x t} \map {p_{n, k} } t\) as $\map {p_{n, k} } t \ge 0$
\(\ds \) \(=\) \(\ds \sum_{k \mathop : \size {\frac k n \mathop - t} < \delta}^n \size {\map x {\frac k n} - \map x t} \map {p_{n, k} } t + \sum_{k \mathop : \size {\frac k n \mathop - t} \ge \delta}^n \size {\map x {\frac k n} - \map x t} \map {p_{n, k} } t\)
\(\ds \) \(\le\) \(\ds \map {\omega_\delta} x \sum_{k \mathop : \size {\frac k n \mathop - t} < \delta}^n \map {p_{n, k} } t + 2 \norm x_\infty \frac 1 {4 n \delta^2}\) Definition of $\map {\omega_\delta} x$, $\size {\map x {\frac k n} - \map x t} \le 2 \norm x_\infty$ and the tail estimate above
\(\ds \) \(\le\) \(\ds \map {\omega_\delta} x + \frac {\norm x_\infty} {2 n \delta^2}\) as $\ds \sum_{k \mathop : \size {\frac k n \mathop - t} < \delta}^n \map {p_{n, k} } t \le 1$

where $\norm {\,\cdot \,}_\infty$ denotes the supremum norm.
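To see the estimate in action, the sketch below (illustration only; the sample function, grid resolution and parameters are arbitrary choices) computes the sup-norm error of $B_n x$ for a sample $x$ and checks it against the bound $\map {\omega_\delta} x + \dfrac {\norm x_\infty} {2 n \delta^2}$, with $\map {\omega_\delta} x$ estimated on a grid:

```python
import numpy as np
from scipy.special import comb

x = lambda t: np.sin(2 * np.pi * t)   # a sample continuous x on [0, 1]
n, delta = 200, 0.05

k = np.arange(n + 1)

def B_n(t):
    """The n-th Bernstein polynomial of x, evaluated at t."""
    p = comb(n, k) * t**k * (1 - t)**(n - k)
    return (x(k / n) * p).sum()

ts = np.linspace(0.0, 1.0, 1001)
xs = x(ts)

# Sup-norm error of the Bernstein approximation on the grid.
err = max(abs(B_n(t) - x(t)) for t in ts)

# Grid estimate of the modulus of continuity omega_delta(x).
w = int(round(delta / (ts[1] - ts[0])))   # grid steps spanning distance delta
omega = max(abs(xs[d:] - xs[:-d]).max() for d in range(1, w + 1))

bound = omega + np.abs(xs).max() / (2 * n * delta**2)
assert err <= bound   # err is about 0.02, bound about 1.3, for these parameters
```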


Let $\epsilon > 0$.

By Continuous Function on Closed Real Interval is Uniformly Continuous, $\map x t$ is uniformly continuous.

We choose $\delta > 0$ such that $\map {\omega_\delta} x < \dfrac \epsilon 2$.

Choose $n > \dfrac {\norm x_\infty} {\epsilon \delta^2}$, so that $\dfrac {\norm x_\infty} {2 n \delta^2} < \dfrac \epsilon 2$.

Then:

$\norm {B_n x - x}_\infty \le \map {\omega_\delta} x + \dfrac {\norm x_\infty} {2 n \delta^2} < \epsilon$

Undoing the rescaling, the substitution $t = \dfrac {s - a} {b - a}$ turns $B_n x$ into a polynomial in $s$ which approximates $f$ on $\Bbb I$ to the same accuracy.

$\blacksquare$


Proof 2

Without loss of generality, assume $\Bbb I = \closedint 0 1$: the general case follows by an affine change of variables, as in Proof 1.

For each $n \in \N$, let:

$\ds \map {P_n} x := \sum_{k \mathop = 0}^n \map f {\dfrac k n } \dbinom n k x^k \paren {1 - x}^{n - k}$

We shall show that $\lim_{n \to \infty} \norm { P_n - f}_\infty = 0$.


Let $\epsilon \in \R_{>0}$.

By Heine-Cantor Theorem, there is a $\delta \in \R_{>0}$ such that:

$\forall x,y \in \Bbb I : \size {x - y} \le \delta \implies \size {\map f x - \map f y} \le \epsilon $


Let $p \in \Bbb I$.

Let $Z_n$ be a random variable such that:

$\ds n Z_n \sim \Binomial n p$

where $\Binomial n p$ denotes the binomial distribution with parameters $n$ and $p$.

Observe that:

\(\ds \expect {\map f {Z_n} }\) \(=\) \(\ds \sum_{k \mathop = 0}^n \map f {\dfrac k n} \map \Pr {Z_n = \dfrac k n}\) Definition of Expectation of Discrete Random Variable
\(\ds \) \(=\) \(\ds \sum_{k \mathop = 0}^n \map f {\dfrac k n} \map \Pr {n Z_n = k}\)
\(\ds \) \(=\) \(\ds \sum_{k \mathop = 0}^n \map f {\dfrac k n} \dbinom n k p^k \paren {1 - p}^{n - k}\) Definition of Binomial Distribution
\(\text {(1)}: \quad\) \(\ds \) \(=\) \(\ds \map {P_n} p\)
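This identity can be sanity-checked by Monte Carlo simulation (illustration only; the function, parameters and sample size are arbitrary choices):

```python
import numpy as np
from scipy.special import comb

rng = np.random.default_rng(0)
f = np.exp                         # a sample continuous f on [0, 1]
n, p = 100, 0.3

# Exact value P_n(p) = sum_k f(k/n) C(n,k) p^k (1-p)^(n-k).
k = np.arange(n + 1)
P_n = (f(k / n) * comb(n, k) * p**k * (1 - p)**(n - k)).sum()

# Monte Carlo estimate of E[f(Z_n)], where n Z_n ~ Binomial(n, p).
Z = rng.binomial(n, p, size=200_000) / n
assert abs(f(Z).mean() - P_n) < 1e-2
```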

Furthermore:

\(\ds \map \Pr {\size { Z_n - p} > \delta }\) \(\le\) \(\ds \dfrac {\expect {\size {Z_n - p } ^2} } {\delta^2}\) Bienaymé-Chebyshev Inequality
\(\ds \) \(=\) \(\ds \dfrac {\expect {\size {n Z_n - n p} ^2} } {\delta^2 n^2}\) Expectation is Linear
\(\ds \) \(=\) \(\ds \dfrac {\expect {\size {n Z_n - \expect {n Z_n} } ^2} } {\delta^2 n^2}\) Expectation of Binomial Distribution
\(\ds \) \(=\) \(\ds \dfrac {\var {n Z_n} } {\delta^2 n^2}\) Definition of Variance
\(\ds \) \(=\) \(\ds \dfrac {p \paren {1 - p} } {\delta^2 n}\) Variance of Binomial Distribution
\(\ds \) \(\le\) \(\ds \dfrac 1 {4 \delta^2 n}\) Cauchy's Mean Theorem


On the other hand:

\(\ds \size {\map f {Z_n} - \map f p}\) \(\le\) \(\ds \epsilon + \size {\map f {Z_n} - \map f p } \chi_{\set {\size {Z_n - p} > \delta } }\) as $\size {Z_n - p} \le \delta \implies \size {\map f {Z_n} - \map f p} \le \epsilon$
\(\ds \) \(\le\) \(\ds \epsilon + 2 \norm f_\infty \chi_{\set {\size { Z_n - p } > \delta} }\) as $\size {\map f {Z_n} - \map f p} \le 2 \norm f_\infty$


Therefore:

\(\ds \size {\map {P_n} p - \map f p}\) \(=\) \(\ds \size {\expect {\map f { Z_n } } - \map f p}\) by $(1)$
\(\ds \) \(=\) \(\ds \size {\expect {\map f {Z_n} } - \expect {\map f p} }\) Expectation of Almost Surely Constant Random Variable
\(\ds \) \(=\) \(\ds \size {\expect {\map f {Z_n} - \map f p} }\) Expectation is Linear
\(\ds \) \(\le\) \(\ds \expect {\size {\map f {Z_n} - \map f p} }\) as $\size {\expect X} \le \expect {\size X}$
\(\ds \) \(\le\) \(\ds \epsilon + 2 \norm f_\infty \map \Pr {\size {Z_n - p} > \delta}\) by the pointwise bound above, as $\expect {\chi_A} = \map \Pr A$
\(\ds \) \(\le\) \(\ds \epsilon + \dfrac {\norm f_\infty} {2 \delta^2 n}\) by the Bienaymé-Chebyshev estimate above

Thus for all $n \in \N$ such that $n > \dfrac {\norm f_\infty} {2 \epsilon \delta^2}$ we have:

$\size {\map {P_n} p - \map f p} \le 2 \epsilon$

As the above is true for all $p \in \Bbb I$, we have:

$\forall n \in \N : n > \dfrac {\norm f_\infty} {2 \epsilon \delta^2} \implies \norm {P_n - f}_\infty \le 2 \epsilon$
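As a worked arithmetic example (with made-up numbers for $\norm f_\infty$, $\delta$ and $\epsilon$), the threshold on $n$ can be computed directly:

```python
import math

# Hypothetical values: sup norm of f, the delta from uniform continuity, epsilon.
f_sup, delta, eps = 2.0, 0.05, 0.01

# Proof 2 requires n > ||f||_inf / (2 eps delta^2) for ||P_n - f||_inf <= 2 eps.
N = math.floor(f_sup / (2 * eps * delta**2)) + 1
print(N)   # 40000 with the values above: a uniform error of at most 2 eps = 0.02
```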

$\blacksquare$


Proof 3

Let $\AA \subseteq \map C {\Bbb I, \R}$ be the set of real polynomial functions.

$\AA$ is a subalgebra of $\map C {\Bbb I, \R}$ because polynomials over $\R$ form an algebra over $\R$.

Let $I$ denote the identity mapping on $\Bbb I$, i.e.:

$\forall x \in \Bbb I : \map I x = x$

Then $I \in \AA$.

Thus $\AA$ separates the points of $\Bbb I$, since trivially:

$\forall x,y \in \Bbb I : x \ne y \implies \map I x \ne \map I y$

It is also clear that the constant function $1$ is a polynomial, so $1 \in \AA$.

Therefore the claim follows from Stone-Weierstrass Theorem.

$\blacksquare$


Also known as

The Weierstrass Approximation Theorem is also referred to as Weierstrass's Theorem, but as a number of results bear Karl Weierstrass's name, it makes sense to be more specific.


Source of Name

This entry was named for Karl Weierstrass.


Historical Note

The Weierstrass Approximation Theorem has far-reaching consequences throughout analysis.

It has also been given a significant generalisation by Marshall Harvey Stone.

