User:J D Bowen/Math725 HW4

1) Let $V, W \ $ be finite-dimensional vector spaces. We aim to show that the following three statements are equivalent:

a) $\text{dim}(V)\geq \text{dim}(W) \ $
b) There is a surjective linear map $V \to W \ $.
c) There is an injective linear map $W \to V \ $.


(a)$\implies \ $ (b):

Suppose we have $\text{dim}(V)\geq \text{dim}(W) \ $. There exist bases for these spaces, $\left\{{\vec{v_1}, \dots, \vec{v_m}}\right\}\subset V \ $ and $\left\{{\vec{w_1}, \dots, \vec{w_n} }\right\} \subset W \ $, where $m=\text{dim}(V), n=\text{dim}(W) \ $. We have $m\geq n \ $.

Define a linear transformation $\phi:V\to W \ $ by $\sum_{i=1}^m a_i\vec{v_i} \mapsto \sum_{i=1}^n a_i\vec{w_i} \ $, which makes sense because $m\geq n \ $. Any $\vec{w}\in W \ $ can be written as $\vec{w}=\sum_{i=1}^n a_i\vec{w_i} \ $, and then $\phi \left({\sum_{i=1}^n a_i\vec{v_i} }\right)=\vec{w} \ $, so every vector in $W \ $ has a preimage in $V \ $. Hence $\phi \ $ is a surjection.
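For a concrete illustration (a choice of spaces made here only as a sanity check, not part of the problem), take $V=\mathbb{R}^3, W=\mathbb{R}^2 \ $ with their standard bases. The map above becomes

$\phi(a_1\vec{e_1}+a_2\vec{e_2}+a_3\vec{e_3}) = a_1\vec{e_1}+a_2\vec{e_2} \ $,

the projection onto the first two coordinates, and any $a_1\vec{e_1}+a_2\vec{e_2}\in\mathbb{R}^2 \ $ is the image of $a_1\vec{e_1}+a_2\vec{e_2}+0\vec{e_3} \ $.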


(b)$\implies \ $ (a):

Suppose there exists at least one surjective linear map $\phi:V\to W \ $. Let $\left\{{\vec{v_1}, \dots, \vec{v_m}}\right\}\subset V \ $ be any basis for $V \ $; then every vector in $V \ $ can be written as $\vec{v} = \sum_{i=1}^m a_i\vec{v_i} \ $ for some constants $a_i \ $.

Since $\phi \ $ is linear, we have

$\phi(\vec{v}) = \sum_{i=1}^m a_i\phi(\vec{v_i}) \ $.

Since $\phi \ $ is surjective, for every $\vec{w}\in W \ $, there is at least one $\vec{v}\in V \ $ such that $\phi(\vec{v})=\vec{w} \ $. Therefore, for any $\vec{w}\in W \ $, there are constants $a_i \ $ such that

$\vec{w}=\sum_{i=1}^m a_i\phi(\vec{v_i}) \ $.

Hence the $m \ $ vectors $\phi(\vec{v_i}) \ $ form a spanning set for $W \ $. Since any spanning set for $W \ $ has at least $\text{dim}(W) \ $ elements, this implies $\text{dim}(W)\leq m=\text{dim}(V) \ $.
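As a quick check with the projection $\phi:\mathbb{R}^3\to\mathbb{R}^2 \ $ from the illustration above: the images of the standard basis vectors are $\phi(\vec{e_1})=\vec{e_1}, \phi(\vec{e_2})=\vec{e_2}, \phi(\vec{e_3})=\vec{0} \ $, which indeed span $\mathbb{R}^2 \ $, and $\text{dim}(\mathbb{R}^2)=2\leq 3=\text{dim}(\mathbb{R}^3) \ $.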



(a)$\implies \ $ (c):

Suppose we have $\text{dim}(V)\geq \text{dim}(W) \ $. There exist bases for these spaces, $\left\{{\vec{v_1}, \dots, \vec{v_m}}\right\}\subset V \ $ and $\left\{{\vec{w_1}, \dots, \vec{w_n} }\right\} \subset W \ $, where $m=\text{dim}(V), n=\text{dim}(W) \ $. We have $m\geq n \ $.

Define a linear transformation $\phi:W\to V \ $ by $\sum_{i=1}^n a_i\vec{w_i} \mapsto \sum_{i=1}^n a_i\vec{v_i} \ $. Every $\vec{v}\in V \ $ has at most one preimage: exactly one if $\vec{v} \ $ lies in the span of $\vec{v_1},\dots,\vec{v_n} \ $, and none otherwise, since the coefficients of $\vec{v} \ $ with respect to the basis $\left\{{\vec{v_1}, \dots, \vec{v_m}}\right\} \ $ are unique. Hence $\phi \ $ is an injection.
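Concretely (again with the illustrative choice $V=\mathbb{R}^3, W=\mathbb{R}^2 \ $ and standard bases), $\phi \ $ becomes the inclusion

$\phi(a_1\vec{e_1}+a_2\vec{e_2}) = a_1\vec{e_1}+a_2\vec{e_2}+0\vec{e_3} \ $,

and two distinct vectors of $\mathbb{R}^2 \ $ clearly have distinct images in $\mathbb{R}^3 \ $.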



(c)$\implies \ $(a):

Suppose there exists at least one injective linear map $\phi:W\to V \ $. Let a basis for $W \ $ be $\left\{{\vec{w_1}, \dots, \vec{w_n} }\right\} \subset W \ $.

We have $\sum_{i=1}^n a_i\vec{w_i}=\vec{0} \implies (\forall i, \ a_i=0) \ $.

Since $\phi \ $ is linear, we have two important facts:

1) $\phi(\vec{0_W})=\vec{0_V} \ $
2) $\phi \left({\sum_{i=1}^n a_i\vec{w_i} }\right) = \sum_{i=1}^n a_i\phi(\vec{w_i}) \ $

Now, since $\phi \ $ is an injection, for $\vec{u},\vec{v}\in W, \ \vec{u}\neq\vec{v} \ $, we have $\phi(\vec{u})\neq\phi(\vec{v}) \ $. This, combined with fact 1, means that the only vector which maps to $\vec{0_V} \ $ is $\vec{0_W} \ $.

Now we can use fact 2 to check that the images of the basis vectors of $W \ $ are linearly independent in $V \ $:

$\sum_{i=1}^n a_i\phi(\vec{w_i}) = \vec{0_V} \implies \phi \left({\sum_{i=1}^n a_i\vec{w_i} }\right) = \vec{0_V} \implies \sum_{i=1}^n a_i\vec{w_i}=\vec{0_W} \implies (\forall i, \ a_i=0) $

Therefore, the $\phi(\vec{w_i}) \ $ are linearly independent in $V \ $, which means $\text{dim}(V)\geq n = \text{dim}(W) \ $.


work space

2) Let $U, W \ $ be subspaces of a vector space $V \ $. Define $r \ $ as the map taking any map from $V\to W \ $ to its restriction $U\to W \ $.

a) Let $\phi,\psi \ $ be any two linear maps $V\to W \ $. Then $\forall \vec{u}\in U, a,b\in\mathbb{C} \ $, we have

$r(a\phi+b\psi)(\vec{u}) = (a\phi+b\psi)(\vec{u})=a\phi(\vec{u})+b\psi(\vec{u})=a (r(\phi))(\vec{u})+b(r(\psi))(\vec{u}) = \left({a\, r(\phi)+b\, r(\psi)}\right)(\vec{u}) \ $,

since $\phi,\psi \ $ are linear and their restrictions agree with them on $U \ $. Since $r(\phi),r(\psi) \ $ are defined only on $U \ $, vectors $\vec{u}\in U \ $ are the only inputs we need to consider, and the equality above shows $r(a\phi+b\psi)=a\, r(\phi)+b\, r(\psi) \ $. Hence, $r \ $ is linear.
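As a small check (with spaces and maps chosen here only for illustration): let $V=W=\mathbb{R}^2 \ $, $U=\operatorname{span}(\vec{e_1}) \ $, $\phi=\text{id} \ $, and $\psi(x\vec{e_1}+y\vec{e_2})=y\vec{e_1}+x\vec{e_2} \ $. Then for $\vec{u}=c\vec{e_1}\in U \ $,

$r(\phi+\psi)(c\vec{e_1}) = (\phi+\psi)(c\vec{e_1}) = c\vec{e_1}+c\vec{e_2} = r(\phi)(c\vec{e_1})+r(\psi)(c\vec{e_1}) \ $,

as expected.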


b) Let $\sigma:U\to W \ $ be a linear map.


c)

end workspace

3) Let $S,T \ $ be linear operators on a vector space $V \ $.


a) Let $\vec{x}\in\text{ker}(T) \ $, that is, $T(\vec{x})=\vec{0} \ $. Then $ST(\vec{x})=S(T(\vec{x})) = S(\vec{0}) \ $.

But $S \ $ is linear, so $S(\vec{y})=S(\vec{0}+\vec{y})=S(\vec{0})+S(\vec{y}) \implies S(\vec{0})=\vec{0} \ $.

Hence $T(\vec{x})=\vec{0}\implies ST(\vec{x})=\vec{0} \ $, so $\text{ker}(T)\subseteq \text{ker}(ST) \ $.
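As a quick check with concrete maps (the same pair used in part (d) below, chosen here only for illustration): if $T \ $ is the projection $x\vec{e}_1+y\vec{e_2}\mapsto x\vec{e}_1 \ $ and $S \ $ is the rotation $x\vec{e}_1+y\vec{e_2}\mapsto -y\vec{e}_1+x\vec{e_2} \ $, then $\text{ker}(T)=\left\{{c\vec{e_2}:c\in\mathbb{R}}\right\} \ $ and $ST(c\vec{e_2})=S(\vec{0})=\vec{0} \ $, so indeed $\text{ker}(T)\subseteq\text{ker}(ST) \ $.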


b) Let $T:\mathbb{R}^2 \to \mathbb{R}^2 \ $ be the linear map defined by $x\vec{e}_1+y\vec{e_2}\mapsto -y\vec{e}_1+x\vec{e_2} \ $, and let $S:\mathbb{R}^2\to\mathbb{R}^2 \ $ be defined by $x\vec{e}_1+y\vec{e_2}\mapsto x\vec{e}_1 \ $. Then $ST(x\vec{e}_1+y\vec{e_2})=S(-y\vec{e}_1+x\vec{e_2})=-y\vec{e}_1 \ $, so $\text{ker}(S)=\left\{{ c\vec{e_2}:c\in\mathbb{R} }\right\} \ $ while $\text{ker}(ST)=\left\{{c\vec{e_1}:c\in\mathbb{R}}\right\} \ $, and so $\text{ker}(S)\not\subseteq\text{ker}(ST) \ $.



c) Let $\vec{y}\in\text{Im}(ST) \ $. Then there exists $\vec{x}\in V \ $ such that $ST(\vec{x})=\vec{y} \ $. Observe that

$ST(\vec{x})=S( T(\vec{x})) \ $,

so $\vec{y}=S(T(\vec{x})) \ $ is the image under $S \ $ of the vector $T(\vec{x}) \ $. Hence, $\vec{y}\in\text{Im}(S) \ $.

Since $\vec{y} \ $ was arbitrary, we have $\text{Im}(ST)\subseteq\text{Im}(S) \ $.
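As a quick check with the projection $T \ $ and rotation $S \ $ of part (d) below (again purely illustrative): $ST(x\vec{e}_1+y\vec{e_2})=S(x\vec{e}_1)=x\vec{e_2} \ $, so $\text{Im}(ST)=\left\{{c\vec{e_2}:c\in\mathbb{R}}\right\} \ $, while $S \ $ is a rotation and hence surjective, so $\text{Im}(S)=\mathbb{R}^2\supseteq\text{Im}(ST) \ $.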


d) Let $T:\mathbb{R}^2 \to \mathbb{R}^2 \ $ be the linear map defined by $x\vec{e}_1+y\vec{e_2}\mapsto x\vec{e}_1 \ $, and let $S:\mathbb{R}^2\to\mathbb{R}^2 \ $ be defined by $x\vec{e}_1+y\vec{e_2}\mapsto -y\vec{e}_1+x\vec{e_2} \ $. Then $ST(x\vec{e}_1+y\vec{e_2})=S(x\vec{e}_1)=x\vec{e_2} \ $, so $\text{Im}(ST) = \left\{{ c\vec{e_2}:c\in\mathbb{R}}\right\} \ $ while $\text{Im}(T)=\left\{{ c\vec{e_1}:c\in\mathbb{R}}\right\} \ $, and we note $\text{Im}(ST)\not\subseteq\text{Im}(T) \ $.




4) Let $\mathbf{P} \ $ be the space of polynomials over a field $\mathbf{F} \ $.

a) Let $P,Q \in \mathbf{P} \ $ be defined by $P=p_m x^m + \dots +p_0, Q = q_n x^n + \dots + q_0 \ $.

Setting $p_i=0 \ $ for $i>m \ $ and $q_i=0 \ $ for $i>n \ $, observe that for the differential operator $d_x \ $,

$d_x(aP+bQ)= d_x \left({ \sum_{i=0}^{\text{max}(m,n)} (ap_i+bq_i)x^i }\right) = \sum_{i=0}^{\text{max}(m,n)}d_x\left({ (ap_i+bq_i)x^i }\right) = \sum_{i=1}^{\text{max}(m,n)} i\left({ (ap_i+bq_i)x^{i-1} }\right)$

$=\sum_{i=1}^{\text{max}(m,n)} \left({ ap_iix^{i-1}+bq_iix^{i-1} }\right) = \sum_{i=1}^m ap_iix^{i-1} + \sum_{i=1}^n bq_iix^{i-1} = a\sum_{i=1}^m p_iix^{i-1} + b\sum_{i=1}^n q_iix^{i-1}= ad_x(P)+bd_x(Q) \ $

So, the differential operator is linear. Note that the two polynomials $x\mapsto 1, x\mapsto 2 \ $ both have derivative $x\mapsto 0 \ $, and so the differential operator is not injective. Note also that for any polynomial $P = p_m x^m + \dots +p_0 \ $, there is another polynomial $Q= (p_m/(m+1))x^{m+1} + \dots + p_0 x \ $ such that $d_x Q = P \ $. Since $\mathbf{F}= \mathbb{R} \ $ or $\mathbb{C} \ $, we can be sure that $p_k/(k+1) \ $ is an element of $\mathbf{F} \ $. Hence the differential operator is surjective.
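For a concrete instance of both claims: $d_x(x^2+1)=2x=d_x(x^2+2) \ $, so two distinct polynomials share the same derivative, while for $P=2x \ $ the polynomial $Q=x^2 \ $ satisfies $d_xQ=P \ $.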



b) Fix $a\in\mathbf{F} \ $ and let $I_a:\mathbf{P}\to\mathbf{P} \ $ be

$f\mapsto \int_a^x f \ $

Observe that for field elements $r,s \ $, and polynomials $P, Q \ $, we have

$I_a(rP+sQ)=\int_a^x (rP+sQ)\,dt= \int_a^x rP\,dt+\int_a^x sQ\,dt = r\int_a^xP\,dt+s\int_a^xQ\,dt = rI_a(P)+sI_a(Q) \ $,

so $I_a \ $ is linear.
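For instance, with the illustrative choice $a=0 \ $: $I_0(2x)=x^2 \ $ and $I_0(3)=3x \ $, and indeed $I_0(2x+3)=x^2+3x=I_0(2x)+I_0(3) \ $.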


c) If we let $P=\sum_{i=0}^n p_i x^i \ $ be any polynomial, then note that

$\frac{d}{dx} \int_a^x P\,dt = \frac{d}{dx} \int_a^x \left({ \sum_{i=0}^n p_i t^i }\right) dt= \frac{d}{dx} \left({ \sum_{i=0}^n \frac{p_i}{i+1} \left({x^{i+1}-a^{i+1}}\right) }\right) = \sum_{i=0}^n p_i x^i= P \ $

(the constant terms $-p_ia^{i+1}/(i+1) \ $ differentiate to zero),

and so $I_a \ $ is a right inverse for $d_x \ $. Note, however, that for that same polynomial,

$\int_a^x \frac{d}{dt} P\,dt = \int_a^x\frac{d}{dt} \left({ p_0+p_1t+\dots+p_nt^n }\right) dt= \int_a^x \left({ p_1+2p_2t + \dots +np_nt^{n-1} }\right) dt$

$=\left[{ p_1t + p_2t^2+\dots+p_nt^n }\right]^x_a = p_1x + p_2x^2+\dots+p_nx^n + C \ $

where $C=-(p_1a+\dots+p_na^n)=p_0-P(a) \ $ is a constant. Hence $I_a(d_xP)=P-P(a) \ $, which equals $P \ $ if and only if $P(a)=0 \ $. Therefore, it is not possible for any one $I_a \ $ to be a left inverse of the differential operator for all polynomials.
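A concrete instance of the failure: take the constant polynomial $P=1 \ $. Then $d_xP=0 \ $, so $I_a(d_xP)=\int_a^x 0\,dt=0\neq P \ $, no matter which $a \ $ is chosen.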