Derivation of Hamilton-Jacobi Equation
Theorem
Let $\map S {x_0, x_1, \mathbf y} = \map S {x, \mathbf y}$ be the geodetic distance, where $x_0$ is fixed and $x_1=x$.
Let $H$ be the Hamiltonian.
Then the following equation holds:
- $\dfrac {\partial S} {\partial x} + \map H {x, \mathbf y, \nabla_{\mathbf y} S} = 0$
and is known as the Hamilton-Jacobi Equation.
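As a concrete illustration (an example added here, not given in the source), consider the arc-length functional in the plane:
- $\ds J \sqbrk y = \int_{x_0}^{x_1} \sqrt {1 + y'^2} \rd x$
Then:
- $p = \dfrac {y'} {\sqrt {1 + y'^2} }, \quad H = p y' - \sqrt {1 + y'^2} = -\sqrt {1 - p^2}$
Extremals are straight lines, so the geodetic distance from the fixed point $\tuple {x_0, y_0}$ is:
- $\map S {x, y} = \sqrt {\paren {x - x_0}^2 + \paren {y - y_0}^2}$
Direct differentiation gives $\dfrac {\partial S} {\partial x} = \dfrac {x - x_0} S$ and $\dfrac {\partial S} {\partial y} = \dfrac {y - y_0} S$, so that for $x > x_0$:
- $\dfrac {\partial S} {\partial x} + \map H {x, y, \dfrac {\partial S} {\partial y} } = \dfrac {x - x_0} S - \sqrt {1 - \dfrac {\paren {y - y_0}^2} {S^2} } = 0$
as the Hamilton-Jacobi Equation requires.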
Proof
Consider the increment $\Delta S$:
- $\Delta S = \map S {x + \Delta x, \mathbf y + \Delta \mathbf y} - \map S {x, \mathbf y}$
Note that the change $\Delta \mathbf y$ of the function $\mathbf y$ depends on the manner in which $\Delta x$ is chosen, through the definition of geodetic distance.
For sufficiently smooth $S$, that is, for $S$ continuously differentiable in $x$ and $\mathbf y$, $\size {\Delta \mathbf y} \to 0$ as $\size {\Delta x} \to 0$.
By definition of differential, $\Delta S$ can be written as:
- $\map {\Delta S} {x, \mathbf y; \Delta x, \Delta \mathbf y} = \map {\d S} {x, \mathbf y; \Delta x, \Delta \mathbf y} + \epsilon \Delta x + \boldsymbol \epsilon \cdot \Delta \boldsymbol y$
where $\epsilon \to 0$ as $\size {\Delta x} \to 0$, and $\boldsymbol \epsilon \to \mathbf 0$ as $\size {\Delta \mathbf y} \to 0$.
By definition of the geodetic distance,
- $\Delta S = J \sqbrk {\gamma^*} - J \sqbrk \gamma$
where $\gamma$ and $\gamma^*$ are extremal curves, connecting the fixed initial point with the points $\tuple {x, \mathbf y}$ and $\tuple {x + \Delta x, \mathbf y + \Delta \mathbf y}$ respectively.
By definition of increment of functional:
- $J \sqbrk {\gamma^*} - J \sqbrk \gamma = \Delta J \sqbrk {\gamma; \Delta \gamma}$
where $\Delta \gamma = \gamma^* - \gamma$.
A differentiable $J$ can be expressed as:
- $\Delta J \sqbrk {\gamma; \Delta \gamma} = \delta J \sqbrk {\gamma; \Delta \gamma} + \epsilon_\gamma \cdot \size {\Delta \gamma}$
where $\epsilon_\gamma \to 0$ as $\size {\Delta \gamma} \to 0$, and $\size {\Delta \gamma} \to 0$ as $\size {\Delta x} \to 0$ for sufficiently smooth $S$.
To summarise:
- $\Delta \map S {x, \mathbf y; \Delta x, \Delta \mathbf y} = \Delta J \sqbrk {\gamma; \Delta \gamma}$
Both sides contain terms linear in $\size {\Delta x}$, $\size {\Delta \mathbf y}$ and $\size {\Delta \gamma}$, as well as terms of higher order.
Since both sides are equal, and the decomposition of an increment into a principal linear part and higher order terms is unique, the higher order terms on both sides coincide.
Hence, the principal parts match:
- $\d S = \delta J$
The variation of $J$ along the extremal $\gamma$, with a variable end point, is expressible as:
- $\ds \delta J = \sum_{i \mathop = 1}^n p_i \Delta y_i - H \Delta x$
while the differential of $S$ is
- $\ds \d S = \frac {\partial S} {\partial x} \Delta x + \sum_{i \mathop = 1}^n \frac {\partial S} {\partial y_i} \Delta y_i$
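The expression for $\delta J$ above follows from the general formula for the variation of a functional with a movable end point (a standard step, sketched here for completeness). With the initial end point fixed, and the integral term vanishing because $\gamma$ is an extremal:
- $\ds \delta J = \sum_{i \mathop = 1}^n F_{y_i'} \Delta y_i + \paren {F - \sum_{i \mathop = 1}^n y_i' F_{y_i'} } \Delta x$
evaluated at the end point $\tuple {x, \mathbf y}$, which upon setting $p_i = F_{y_i'}$ and $\ds H = \sum_{i \mathop = 1}^n p_i y_i' - F$ becomes:
- $\ds \delta J = \sum_{i \mathop = 1}^n p_i \Delta y_i - H \Delta x$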
Equivalently:
- $\ds \paren {\frac {\partial S} {\partial x} + H} \Delta x + \sum_{i \mathop = 1}^n \paren {\frac {\partial S} {\partial y_i} - p_i} \Delta y_i = 0$
Since the end point is free, $\Delta x$ and $\Delta y_i$ are independent variables.
Hence the equation can hold for arbitrary increments only if all the coefficients of $\Delta x$ and $\Delta y_i$ vanish simultaneously:
- $\dfrac {\partial S} {\partial x} = -H, \quad \dfrac {\partial S} {\partial y_i} = p_i$
Since $H = \map H {x, \mathbf y, \mathbf p}$, substituting $p_i = \dfrac {\partial S} {\partial y_i}$, that is $\mathbf p = \nabla_{\mathbf y} S$, into the first relation yields the Hamilton-Jacobi Equation.
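The result can be sanity-checked numerically. The sketch below (an illustration added here, not part of the source) uses the arc-length functional $J \sqbrk y = \int \sqrt {1 + y'^2} \rd x$, whose extremals are straight lines, so the geodetic distance from the origin is $\map S {x, y} = \sqrt {x^2 + y^2}$ and the Hamiltonian is $\map H {x, y, p} = -\sqrt {1 - p^2}$.

```python
import math

# Geodetic distance from the origin for the arc-length functional:
# extremals are straight lines, so S is the Euclidean distance.
def S(x, y):
    return math.hypot(x, y)

def hj_residual(x, y, h=1e-6):
    """Residual of S_x + H(x, y, S_y) = 0, with H(x, y, p) = -sqrt(1 - p^2),
    using central finite differences for the partial derivatives of S."""
    S_x = (S(x + h, y) - S(x - h, y)) / (2 * h)
    S_y = (S(x, y + h) - S(x, y - h)) / (2 * h)
    return S_x - math.sqrt(1.0 - S_y ** 2)

# The residual should vanish (up to finite-difference error) for x > 0.
print(abs(hj_residual(1.0, 0.5)))
print(abs(hj_residual(3.0, -2.0)))
```

Both printed residuals are at the level of finite-difference noise, consistent with $S$ satisfying the Hamilton-Jacobi Equation for this functional.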
Source of Name
This entry was named for William Rowan Hamilton and Carl Gustav Jacob Jacobi.
Sources
- 1963: I.M. Gelfand and S.V. Fomin: Calculus of Variations ... (previous) ... (next): $\S 4.23$: The Hamilton-Jacobi Equation. Jacobi's Theorem