User:J D Bowen/Math725 HW7

From ProofWiki

1) Let $(\mathbb{F}^n, S) \ $ be $\mathbb{F}^n \ $ with the standard basis, and let $(\mathbb{F}^n, B) \ $ be the same vector space with basis $B \ $. We know that $N(S)=B \iff \ $ the columns of $N \ $ are the vectors of $B \ $.

Then $T = \ $ multiplication by $M \ $, $N:(\mathbb{F}^n, B)\to (\mathbb{F}^n, S), \ T:(\mathbb{F}^n, S)\to (\mathbb{F}^n, S) \ $ means

$\mathfrak{M}_B^B (T) = N^{-1}MN \ $.
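As a quick numerical sanity check (an illustration, not part of the argument), here is a small Python sketch with a hand-picked $M \ $ and $N \ $: since $T \ $ is multiplication by $M \ $, the matrix of $T \ $ in the basis $B \ $ computed as $N^{-1}MN \ $ must satisfy $N\cdot(N^{-1}MN)=MN \ $, i.e. applying $T \ $ to a basis vector of $B \ $ and reading the result in $B \ $-coordinates agrees with the corresponding column.

```python
# The matrices M, N below are arbitrary choices for the demonstration.

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

M = [[2.0, 1.0], [0.0, 3.0]]        # T = multiplication by M (standard basis)
N = [[1.0, 1.0], [1.0, -1.0]]       # columns of N are the basis vectors of B

T_B = matmul(inv2(N), matmul(M, N))  # matrix of T in the basis B

# Check: N * (N^{-1} M N) = M N, i.e. column j of T_B holds the
# B-coordinates of T applied to the j-th basis vector of B.
assert matmul(N, T_B) == matmul(M, N)
```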


2)

a) Let $A \ $ be an $n\times n \ $ diagonal matrix over $\mathbb{F} \ $. This represents a linear operator on $\mathbb{F}^n \ $. Since it is diagonal, there exist constants $c_i \ $ such that $a_{ij}=c_i \delta_{ij} \ $, and the eigenvalues are precisely the $c_i \ $, since they are the roots of $\prod_i (c_i-\lambda)=0 \ $.

Let $\vec{e}_i = (0,\dots,\underbrace{1}_{i^{th} \ \text{position}},0,\dots,0)^t \ $ be the $i^{th} \ $ standard basis vector. Then note that

$A\vec{e}_i=(0,\dots,c_i,0,\dots,0)^t= c_i\vec{e}_i \ $

and so the $\vec{e}_i \ $ are eigenvectors (and, being nonzero even when $c_i=0 \ $, they are genuine eigenvectors for every $i \ $). Since they are linearly independent, and since there are precisely $n \ $ of them, they span $\mathbb{F}^n \ $, the domain.
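The computation above can be checked mechanically. The sketch below builds a diagonal matrix from hand-picked entries $c_i \ $ (one of them deliberately $0 \ $) and verifies $A\vec{e}_i = c_i\vec{e}_i \ $ for each standard basis vector.

```python
# Diagonal entries chosen arbitrarily for the demo; c[1] = 0 on purpose,
# to show e_i is still an eigenvector even when c_i = 0.
c = [5.0, 0.0, -2.0]
n = len(c)
A = [[c[i] if i == j else 0.0 for j in range(n)] for i in range(n)]

def matvec(A, v):
    """Matrix-vector product for nested-list matrices."""
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

for i in range(n):
    e_i = [1.0 if j == i else 0.0 for j in range(n)]
    assert matvec(A, e_i) == [c[i] * x for x in e_i]   # A e_i = c_i e_i
```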


b)

We have the $\Leftarrow \ $ direction from (a). For the other direction, let $V^n \ $ be an $n \ $-dimensional vector space and suppose $T:V^n\to V^n \ $ is a diagonalizable operator. Then there is a set $B=\left\{{\vec{v}_1, \dots, \vec{v}_m}\right\} \ $ of eigenvectors of $T \ $ that spans the domain $V^n \ $. Discarding dependent vectors if necessary, we may take $B \ $ to be linearly independent as well; a linearly independent spanning set is a basis, so $B \ $ is a basis of eigenvectors for $V^n \ $ (and $m=n \ $). Then $\mathfrak{M}_B^B (T) \ $ is just $m_{ij}=c_i \delta_{ij} \ $, where $T\vec{v}_i=c_i\vec{v}_i \ $. This is a diagonal form for the operator.
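The diagonal form can be exhibited concretely. In the sketch below, the matrix $M \ $ and its eigenvectors are hand-picked for the example (eigenvalues $5 \ $ and $2 \ $); changing to the eigenbasis produces a diagonal matrix with the eigenvalues on the diagonal.

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M = [[4.0, 1.0], [2.0, 3.0]]
# Columns of N are eigenvectors: M (1,1)^t = 5 (1,1)^t, M (1,-2)^t = 2 (1,-2)^t.
N = [[1.0, 1.0], [1.0, -2.0]]
det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
N_inv = [[N[1][1] / det, -N[0][1] / det], [-N[1][0] / det, N[0][0] / det]]

D = matmul(N_inv, matmul(M, N))     # matrix of T in the eigenbasis

# Off-diagonal entries vanish (up to rounding); the diagonal holds 5 and 2.
assert abs(D[0][1]) < 1e-9 and abs(D[1][0]) < 1e-9
assert abs(D[0][0] - 5.0) < 1e-9 and abs(D[1][1] - 2.0) < 1e-9
```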

c)

Let $E_i =\text{span}(\vec{v}_i) \ $ be the eigenspace associated with $\vec{v}_i \ $. Since $\Sigma E_i = \Sigma \text{span}(\vec{v}_i) = \text{span}(\vec{v}_1,\dots,\vec{v}_m) \ $, we have $\Sigma E_i = V \iff V=\text{span}(\vec{v}_1,\dots,\vec{v}_m) \ $, and by part (b), we have $T \ \text{diagonalizable} \iff V=\text{span}(\vec{v}_1,\dots,\vec{v}_m) \ $, so the theorem follows.

d) Consider the operator $T \ $ given by the matrix $\begin{pmatrix} a & b \\ 0 & c \end{pmatrix} \ $. If $b=0 \ $ the matrix is diagonal and we may apply part (a). So assume $b\neq 0 \ $.

Suppose $a\neq c \ $. Then the eigenvalues are precisely $a,c \ $. Some eigenvectors, then, are

$\vec{v}=(1,0) \ $ and $\vec{u}=(1,(c-a)/b) \ $

Since $c\neq a \ $, the second term of $\vec{u} \ $ is not zero, and so the two vectors are linearly independent and thus span the space; hence, the matrix is diagonalizable.

Now suppose that the matrix is diagonalizable. Observe that as before the eigenvalues are $a,c \ $, and so for the first eigenvector $\vec{v}=(v_1,v_2) \ $, we have

$av_1+bv_2=av_1 \implies bv_2=0 \implies v_2 =0 \ $ (since $b\neq 0 \ $) $\implies (1,0) \ \text{is an eigenvector} \ $.

For the second eigenvector, we have

$au_1+bu_2=cu_1 \implies bu_2=(c-a)u_1 \ $.

Since we know $\vec{u} \ $ is linearly independent from $\vec{v}=(1,0) \ $, after scaling we must have either $\vec{u}=(0,x) \ $ or $\vec{u}=(1,x) \ $ with $x\neq 0 \ $. If we suppose $u_1=0 \ $, then $bu_2=0 \ $ with $u_2\neq 0 \ $, forcing $b=0 \ $. If we suppose $u_1=1 \ $, then we have $u_2=(c-a)/b\neq 0 \implies c\neq a \ $. Therefore $T \ $ diagonalizable implies $a\neq c \ \lor \ b=0 \ $.
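The biconditional can also be checked by brute force over small integer entries. The sketch below uses the fact that for a $2\times 2 \ $ triangular matrix the characteristic polynomial splits with roots $a,c \ $, so the matrix is diagonalizable exactly when the geometric multiplicities of the distinct eigenvalues sum to $2 \ $.

```python
def nullity2(M):
    """Dimension of the kernel of a 2x2 matrix with exact (integer) entries."""
    a, b, c, d = M[0][0], M[0][1], M[1][0], M[1][1]
    if a == b == c == d == 0:
        return 2                       # zero matrix: kernel is everything
    if a * d - b * c == 0:
        return 1                       # singular but nonzero: rank 1
    return 0                           # invertible: trivial kernel

def is_diagonalizable(a, b, c):
    """Diagonalizability of [[a, b], [0, c]] via geometric multiplicities."""
    total = 0
    for lam in set([a, c]):            # distinct eigenvalues only
        total += nullity2([[a - lam, b], [0, c - lam]])
    return total == 2

# Exhaustive check of (d) on a small integer grid:
for a in range(-3, 4):
    for b in range(-3, 4):
        for c in range(-3, 4):
            assert is_diagonalizable(a, b, c) == (a != c or b == 0)
```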


3) Let $M, N, D \ $ be $n\times n \ $ matrices, with $D \ $ diagonal. Then we can write $d_{ij}=d_j \delta_{ij} \ $.

Suppose $MN=ND \ $. For each column $\vec{n}_j \ $ of $N \ $ and each $d_j \ $, this is equivalent to stating $M\vec{n}_j = d_j\vec{n}_j \ $, which is equivalent to stating that the $\vec{n}_j \ $ are eigenvectors of $M \ $ with eigenvalue $d_j \ $.

Next observe that if $N \ $ is invertible, then the kernel of $N \ $ is trivial, so the $n \ $ columns of $N \ $ are linearly independent and hence span the space. Since the columns are eigenvectors of $M \ $, the eigenvectors of $M \ $ span the space, and so $M \ $ is diagonalizable.
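The column-by-column reading of $MN=ND \ $ can be verified on a small example. With $D \ $ diagonal, column $j \ $ of $ND \ $ is $d_j \ $ times column $j \ $ of $N \ $, so $MN=ND \ $ says exactly $M\vec{n}_j=d_j\vec{n}_j \ $. The matrices below are hand-picked for the demonstration.

```python
def matmul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M = [[4, 1], [2, 3]]
N = [[1, 1], [1, -2]]                  # columns are eigenvectors of M
D = [[5, 0], [0, 2]]                   # matching eigenvalues on the diagonal

assert matmul(M, N) == matmul(N, D)    # MN = ND

# Equivalently, column j of N is an eigenvector with eigenvalue d_j:
for j in range(2):
    n_j = [N[0][j], N[1][j]]
    Mn_j = [sum(M[i][k] * n_j[k] for k in range(2)) for i in range(2)]
    assert Mn_j == [D[j][j] * x for x in n_j]
```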



4) Suppose $\pi_1 : V\to W_1, \pi_2:V\to W_2 \ $ are surjective linear maps with the same kernel. Observe $\text{dim}(\text{ker}(\pi_1))+\text{dim}(W_1)=\text{dim}(V)=\text{dim}(\text{ker}(\pi_2))+\text{dim}(W_2) \implies \text{dim}(W_1)=\text{dim}(W_2) \ $.

Define $T:W_1\to W_2 \ $ by $T(x)=\pi_2(v) \ $ for any $v\in \pi_1^{-1}(x) \ $; this is well defined since any two such preimages differ by an element of $\text{ker}(\pi_1)=\text{ker}(\pi_2) \ $. Then $(T\circ \pi_1)(x)= \pi_2(x) \ $, i.e. $T\circ\pi_1=\pi_2 \ $.

Since vector spaces are groups under addition, the first isomorphism theorem guarantees that $T \ $ is a bijection.
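A toy instance of this construction (built by hand for illustration): take $V=\mathbb{R}^2 \ $, $\pi_1(x,y)=x \ $ and $\pi_2(x,y)=3x \ $. Both maps are surjective onto $\mathbb{R} \ $ and share the kernel $\{(0,y)\} \ $; the sketch defines $T \ $ by picking a preimage under $\pi_1 \ $ and applying $\pi_2 \ $, then checks $T\circ\pi_1=\pi_2 \ $ on samples, including two preimages of the same point.

```python
def pi_1(v):
    return v[0]           # projection onto the first coordinate

def pi_2(v):
    return 3 * v[0]       # a different surjection with the same kernel

def T(w):
    v = (w, 0.0)          # one preimage of w under pi_1
    return pi_2(v)        # well defined: the kernels of pi_1, pi_2 agree

# T o pi_1 = pi_2 on sample vectors; (1,0) and (1,7) are two preimages
# of the same w, differing by a kernel element, and give the same value.
for v in [(1.0, 0.0), (1.0, 7.0), (-2.5, 3.0), (0.0, 4.0)]:
    assert T(pi_1(v)) == pi_2(v)
```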