Math 309B/C Winter 2012, Homework 2
Due January 18
Also, I have a few examples of how to sketch solutions to some
first-order equations that should help you with some of the
problems below.
- Suppose that $x^{(1)},...,x^{(n)}$ are solutions to a 1st
order homogeneous system of $n$ equations $x'=Ax$ on the
interval $(\alpha,\beta)$ (where $x(t)$ is an $n$-dimensional
vector function and $A(t)$ is an $n\times n$ matrix function).
Show that $W(t)=W(x^{(1)},...,x^{(n)})(t)\neq 0$ for all $t\in
(\alpha,\beta)$ or $W(t)=0$ for all $t\in (\alpha,\beta)$.
Hint: Remember that if $W(t_{0})=0$ for some $t_{0}$, then
the vectors $x^{(1)}(t_{0}),...,x^{(n)}(t_{0})$ are linearly
dependent, i.e., there is a linear combination
\[c_{1}x^{(1)}(t_{0})+\cdots + c_{n}x^{(n)}(t_{0})=0,\] where
not all the $c_{j}$ are zero. Let's say it's $c_{1}$, so
$c_{1}\neq 0$. Then $x^{(1)}(t_{0})$ can be written as a linear
combination of $x^{(2)}(t_{0}),...,x^{(n)}(t_{0})$ (write down
for yourself what this would look like; you just solve the above
equation for $x^{(1)}(t_{0})$). Why will this same linear
combination hold for all $t\in (\alpha,\beta)$ and not
just $t_{0}$? (This uses a theorem we used in class last time.)
Why then does this show that $W(t)=0$ for all $t$?
Answer: I'll give a slightly different proof than the one I
hinted at, but it's pretty much the same. Suppose $W(t_0)=0$ for
some $t_{0}$. Then $\det(x^{(1)}(t_0),...,x^{(n)}(t_0))=0$
(since this is what the Wronskian is). Hence,
$x^{(1)}(t_0),...,x^{(n)}(t_0)$ are a linearly dependent set of
vectors, so there are constants $c_1,...,c_n$ (not all zero)
such that \[c_1 x^{(1)}(t_0)+...+c_n x^{(n)}(t_0)=0.\] Define \[
f(t)=c_1 x^{(1)}(t)+...+c_n x^{(n)}(t).\] Then by the above
formula, $f(t_0)=0$ and $f$ solves $x'=Ax$ (since it is a linear
combination of solutions). Hence, it solves the initial value
problem \[\left\{ \begin{array}{l} x'=Ax \\
x(t_0)=0. \end{array}\right.\] But $x=0$ also solves this same
initial value problem, and by the uniqueness of solutions to
initial value problems, these two functions $f$ and $0$ must be
the same for all $t$, i.e. \[0=f(t)= c_1 x^{(1)}(t)+...+c_n
x^{(n)}(t)\] for all $t$, so $x^{(1)}(t),...,x^{(n)}(t)$ are
linearly dependent for all $t$, and hence $W(t)=0$ for all $t$.
Now suppose $W(t_0)\neq 0$ for some $t_0$. Then $W(t)\neq 0$ for
all $t$, since if $W(t_1)=0$ for some other number $t_1$, then
by the previous work $W(t)=0$ for all $t$, which is impossible
since $W(t_0)\neq 0$. Hence, $W(t)$ is either always zero or
always nonzero.
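Side remark: if you have seen Abel's formula for systems, it gives a
one-line alternative argument: the Wronskian satisfies
$W'(t)=\operatorname{tr}A(t)\,W(t)$, so
\[W(t)=W(t_{0})\exp\left(\int_{t_{0}}^{t}\operatorname{tr}A(s)\,ds\right),\]
and since the exponential factor never vanishes, $W$ is identically
zero or never zero according to whether $W(t_{0})$ is zero or not.
(We may not have stated this formula in class yet, so treat this as
optional.)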
- Let $x'=Ax$ be a 1st order homogeneous system of linear
equations defined in $(\alpha,\beta)$ and $t_{0}\in
(\alpha,\beta)$. Suppose $x^{(1)},...,x^{(n)}$ are solutions
satisfying the initial conditions
$x^{(1)}(t_{0})=e_{1},...,x^{(n)}(t_{0})=e_{n}$, where
$e_{1},...,e_{n}$ are the standard basis vectors. Show that
these form a fundamental set of solutions. (Hint: remember what
we concluded at the end of Wednesday's lecture is necessary for
some functions to form a fundamental set of solutions.)
Answer: Recall that $x^{(1)},...,x^{(n)}$ form a
fundamental set of solutions if and only if their Wronskian is
always nonzero. By a theorem (or really, by the previous problem), we
only need to check that $W(t)\neq 0$ somewhere (note, you are
allowed to use this result on future homeworks or exam problems,
unless I'm asking you to prove it). So let's compute $W(t_0)$,
\[W(t_0)=\det(x^{(1)}(t_0)|\cdots | x^{(n)}(t_0)) =
\det(e_1|\cdots |e_n)=\det I=1\neq 0.\]
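If you want a concrete (purely illustrative) example with $n=2$, take
the constant matrix $A=\left(\begin{array}{cc} 0 & 1 \\ 0 & 0
\end{array}\right)$ and $t_0=0$. The solutions with $x^{(1)}(0)=e_1$
and $x^{(2)}(0)=e_2$ are \[x^{(1)}(t)=\left(\begin{array}{c} 1 \\ 0
\end{array}\right), \qquad x^{(2)}(t)=\left(\begin{array}{c} t \\ 1
\end{array}\right)\] (check that each satisfies $x'=Ax$), and indeed
\[W(t)=\det\left(\begin{array}{cc} 1 & t \\ 0 & 1
\end{array}\right)=1\neq 0\] for all $t$.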
- Draw trajectories of solutions for the following equations:
- $x'=\left(\begin{array}{cc}3 & -2 \\ 2 & -2
\end{array}\right)x$.
Answer: This matrix has eigenvectors
$v_1=\left(\begin{array}{c} 1 \\ 2 \end{array}\right)$ and
$v_2=\left(\begin{array}{c} 2 \\ 1 \end{array}\right)$ with
eigenvalues $-1$ and $2$ respectively. Hence, the general
solution is \[x(t)= c_1 \left(\begin{array}{c} 1 \\ 2
\end{array}\right) e^{-t} + c_2 \left(\begin{array}{c} 2 \\
1 \end{array}\right)e^{2t}.\] To plot, notice that as
$t\rightarrow\infty$, the first term decays to zero and
the solution looks approximately like $c_2
\left(\begin{array}{c} 2 \\ 1 \end{array}\right)e^{2t}$,
i.e. it's moving parallel to $v_2$; as
$t\rightarrow-\infty$, the opposite is happening: it is
moving parallel with $v_1$. In either case, because the
eigenvalues have opposite signs, the solutions are unstable in
both directions, i.e. they move away from the origin
as $t\rightarrow\pm\infty$, so typical trajectories sweep in
along the $v_1$ direction and back out along the $v_2$
direction, a saddle picture (sketch omitted).
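In case it helps with the problems you still have to do: the
eigenvalues above come from the characteristic polynomial,
\[\det(A-\lambda I)=\det\left(\begin{array}{cc} 3-\lambda & -2 \\ 2 &
-2-\lambda
\end{array}\right)=(3-\lambda)(-2-\lambda)+4=\lambda^{2}-\lambda-2=(\lambda-2)(\lambda+1),\]
whose roots are $\lambda=2$ and $\lambda=-1$; the eigenvectors are
then found by solving $(A-\lambda I)v=0$.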
- $x'=\left(\begin{array}{cc}1 & -2 \\ 3 & -4
\end{array}\right)x$.
- $x'=\left(\begin{array}{cc}1 & 1 \\ 4 & -2
\end{array}\right)x$
- $x'=\left(\begin{array}{cc}2 & -1 \\ 3 & -2
\end{array}\right)x$
- $x'=\left(\begin{array}{cc}-2 & 1 \\ 1 & -2
\end{array}\right)x$
Answer:
This matrix has eigenvectors $v_1=\left(\begin{array}{c} 1
\\ 1 \end{array}\right)$ and $v_2=\left(\begin{array}{c} 1
\\ -1 \end{array}\right)$ with eigenvalues $-1$ and $-3$
respectively. Hence, the general solution is \[x(t)= c_1
\left(\begin{array}{c} 1 \\ 1 \end{array}\right) e^{-t} +
c_2 \left(\begin{array}{c} 1 \\ -1
\end{array}\right)e^{-3t}.\] Note that all solutions are
decaying since both eigenvalues are negative, so all
solutions approach the origin as $t\rightarrow\infty$. Since
the first term of the general solution decays much more
slowly than the second, the solution gradually becomes more
parallel with $v_1$ as $t\rightarrow\infty$. As
$t\rightarrow-\infty$, the opposite happens: the solution
becomes more parallel with $v_2$, since the second term of
the general solution is much larger for those values of $t$.
Hence, typical trajectories curve in toward the origin,
coming in tangent to the $v_1$ line (sketch omitted).
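As a quick check of the eigenvalues here:
\[\det(A-\lambda
I)=(-2-\lambda)^{2}-1=\lambda^{2}+4\lambda+3=(\lambda+1)(\lambda+3),\]
with roots $\lambda=-1$ and $\lambda=-3$, as claimed.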
- $x'=\left(\begin{array}{cc}4 & -3 \\ 8 & -6
\end{array}\right)x$.
Answer: This matrix has eigenvectors
$v_1=\left(\begin{array}{c} 3 \\ 4 \end{array}\right)$ and
$v_2=\left(\begin{array}{c} 1 \\ 2 \end{array}\right)$ with
eigenvalues $0$ and $-2$ respectively. Hence, the general
solution is \[x(t)= c_1 \left(\begin{array}{c} 3 \\ 4
\end{array}\right) + c_2 \left(\begin{array}{c} 1 \\ 2
\end{array}\right)e^{-2t}.\] The sketches of solutions to
this should look like lines parallel to $v_2$ descending
straight onto the line through the origin parallel to $v_1$.
The reason is that as $t\rightarrow\infty$, $x(t)\rightarrow
c_1 \left(\begin{array}{c} 3 \\ 4 \end{array}\right)$, so
our solution converges to some point on the $v_1$ line. Note
that $x'$ is always just a multiple of
$\left(\begin{array}{c} 1 \\ 2 \end{array}\right)$ (since
$Av_1=0$), hence the direction of motion is always parallel
to this vector, and this is enough to justify the
description.
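To double-check the eigenvalues in this one:
\[\det(A-\lambda
I)=(4-\lambda)(-6-\lambda)+24=\lambda^{2}+2\lambda=\lambda(\lambda+2),\]
so the eigenvalues are $0$ and $-2$.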
- $x'=\left(\begin{array}{cc}3 & 6 \\ -1 & -2
\end{array}\right)x$
- Find the general solutions to the following systems. If there
are any initial conditions, solve the initial value problem and
describe the behavior of the solution as $t\rightarrow\infty$.
- $x'=\left(\begin{array}{ccc} 1 & 1 & 2 \\ 1 &
2 & 1 \\ 2 & 1 & 1 \end{array}\right)x$
Answer: The general solution is \[x(t)=c_1
\left(\begin{array}{c} 1 \\ 1 \\ 1\end{array}\right)e^{4t}
+c_2 \left(\begin{array}{c} 1 \\ -2 \\
1\end{array}\right)e^{t} +c_3 \left(\begin{array}{c} 1 \\ 0
\\ -1\end{array}\right)e^{-t}.\] If a solution begins on the
line through the origin parallel to $\left(\begin{array}{c} 1 \\ 0 \\
-1\end{array}\right)$ (so that $c_1=c_2=0$), then it
converges to the origin as $t\rightarrow \infty$. Otherwise,
the solution diverges to infinity: if $c_1\neq 0$, the
$e^{4t}$ term dominates and the solution becomes parallel to
\[\left(\begin{array}{c} 1 \\ 1 \\ 1\end{array}\right),\]
while if $c_1=0$ but $c_2\neq 0$, it becomes parallel to
\[\left(\begin{array}{c} 1 \\ -2 \\ 1\end{array}\right).\]
Notice that the direction it takes depends on the initial
conditions (i.e. $c_1$ and $c_2$).
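You can verify eigenpairs quickly by multiplying them out; for
instance, for the first and third ones,
\[A\left(\begin{array}{c} 1 \\ 1 \\
1\end{array}\right)=\left(\begin{array}{c} 4 \\ 4 \\
4\end{array}\right)=4\left(\begin{array}{c} 1 \\ 1 \\
1\end{array}\right), \qquad A\left(\begin{array}{c} 1 \\ 0 \\
-1\end{array}\right)=\left(\begin{array}{c} -1 \\ 0 \\
1\end{array}\right)=-\left(\begin{array}{c} 1 \\ 0 \\
-1\end{array}\right),\] where $A$ is the matrix of this system.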
- $x'=\left(\begin{array}{ccc} 3 & 2 & 4 \\ 2 &
0 & 2 \\ 4 & 2 & 3 \end{array}\right)x$
- $x'=\left(\begin{array}{ccc} 1 & -1 & 4 \\ 3
& 2 & -1 \\ 2 & 1 & -1 \end{array}\right)x$
- $x'=\left(\begin{array}{ccc} 1 & 1 & 2 \\ 0 &
2 & 2 \\ -1 & 1 & 3 \end{array}\right)x$,
$x(0)=\left(\begin{array}{c} 2 \\ 0 \\ 1
\end{array}\right)$. Answer: The general solution
is \[x(t)=c_1\left(\begin{array}{c} 0 \\ -2 \\
1\end{array}\right)e^{t} +c_2 \left(\begin{array}{c} 1 \\ 1
\\ 0\end{array}\right)e^{2t} +c_3\left(\begin{array}{c} 2 \\
2 \\ 1\end{array}\right)e^{3t}.\] To solve the initial value
problem, set \[\left(\begin{array}{c} 2 \\ 0 \\ 1
\end{array}\right)=x(0)=c_1\left(\begin{array}{c} 0 \\ -2 \\
1\end{array}\right) +c_2 \left(\begin{array}{c} 1 \\ 1 \\
0\end{array}\right) +c_3\left(\begin{array}{c} 2 \\ 2 \\
1\end{array}\right) = \left(\begin{array}{ccc} 0 & 1 & 2 \\
-2 & 1 & 2 \\ 1 & 0 & 1 \end{array}\right)
\left(\begin{array}{c} c_1 \\ c_2 \\
c_3\end{array}\right),\] and now all you need to do is solve
this matrix equation. The solution is
\[\left(\begin{array}{c} c_1 \\ c_2 \\
c_3\end{array}\right)=\left(\begin{array}{c} 1 \\ 2 \\
0\end{array}\right),\] and hence the solution to this
initial value problem is \[ x(t)= \left(\begin{array}{c} 0
\\ -2 \\ 1\end{array}\right)e^{t} +2 \left(\begin{array}{c}
1 \\ 1 \\ 0\end{array}\right)e^{2t}.\] As
$t\rightarrow\infty$, the $e^{2t}$ term dominates, so the
solution diverges to infinity, becoming more and more
parallel to $\left(\begin{array}{c} 1 \\ 1 \\
0\end{array}\right)$.
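It's worth plugging in $t=0$ to confirm:
\[x(0)=\left(\begin{array}{c} 0 \\ -2 \\
1\end{array}\right)+2\left(\begin{array}{c} 1 \\ 1 \\
0\end{array}\right)=\left(\begin{array}{c} 2 \\ 0 \\
1\end{array}\right),\] which matches the given initial
condition.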
- $x'=\left(\begin{array}{ccc} 0 & 0 & -1 \\ 2
& 0 & 0 \\ -1 & 2 & 4 \end{array}\right)x$,
$x(0)=\left(\begin{array}{c} 7 \\ 5 \\ 5
\end{array}\right)$.