Definition:Method of Least Squares (Approximation Theory)


This page is about Method of Least Squares in the context of Approximation Theory. For other uses, see Method of Least Squares.

Definition

Let there be a set of points $\set {\tuple {x_k, y_k}: k \in \set {1, 2, \ldots, n} }$ plotted on a Cartesian $x y$ plane which correspond to measurements of a physical system.

Let it be required that a straight line be fitted to the points.


The method of least squares is a technique of producing a straight line of the form $y = m x + c$ such that:

the points $\set {\tuple {x_k', y_k'}: k \in \set {1, 2, \ldots, n} }$ are on the line $y = m x + c$
$\forall k \in \set {1, 2, \ldots, n}: x_k' = x_k$
$\ds \sum_{k \mathop = 1}^n \paren {y_k' - y_k}^2$ is minimised.


Figure: LeastSquares.png (a straight line fitted to the plotted points)
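In practice, $m$ and $c$ can be found by differentiation: setting the partial derivatives of $\ds \sum_{k \mathop = 1}^n \paren {m x_k + c - y_k}^2$ with respect to $m$ and $c$ equal to zero gives the normal equations, whose solution (a standard closed form, not part of the definition above, assuming the $x_k$ are not all equal) is:

$\ds m = \frac {n \sum_{k \mathop = 1}^n x_k y_k - \paren {\sum_{k \mathop = 1}^n x_k} \paren {\sum_{k \mathop = 1}^n y_k} } {n \sum_{k \mathop = 1}^n x_k^2 - \paren {\sum_{k \mathop = 1}^n x_k}^2}$
$\ds c = \bar y - m \bar x$

where $\bar x$ and $\bar y$ denote the arithmetic means of the $x_k$ and the $y_k$ respectively.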

Examples

Arbitrary Example

Let $B$ be a false balance.

$2$ items are weighed on $B$: first individually and then together.

The recorded weights are:

$17 \, \mathrm g$ for the first item and $25 \, \mathrm g$ for the second item
$40 \, \mathrm g$ for the two items weighed together.

Let $w_1$ and $w_2$ denote the true weights of the first and second items respectively. The least squares estimates $\hat {w_1}$ and $\hat {w_2}$ are the values of $w_1$ and $w_2$ that minimise:

$L = \paren {w_1 - 17}^2 + \paren {w_2 - 25}^2 + \paren {w_1 + w_2 - 40}^2$

Differentiating with respect to $w_1$ and $w_2$ and equating the derivatives to zero gives us:

$\ds \hat {w_1} \approx 16.33$
$\ds \hat {w_2} \approx 24.33$
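
Explicitly (a sketch of that calculation), the two partial derivatives give the normal equations:

$\ds \dfrac {\partial L} {\partial w_1} = 2 \paren {w_1 - 17} + 2 \paren {w_1 + w_2 - 40} = 0$
$\ds \dfrac {\partial L} {\partial w_2} = 2 \paren {w_2 - 25} + 2 \paren {w_1 + w_2 - 40} = 0$

that is, $2 w_1 + w_2 = 57$ and $w_1 + 2 w_2 = 65$, whose solution is $\hat {w_1} = \dfrac {49} 3$ and $\hat {w_2} = \dfrac {73} 3$.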


Historical Note

The method of least squares is usually credited to Carl Friedrich Gauss, who claimed to have been using it since $1795$; it was first published by Adrien-Marie Legendre in $1805$.

