
Notes on Differential Equations

By Una Ada, April 27, 2018

Background

There are ordinary differential equations and partial differential equations, which differ in that the former is taken with respect to (wrt) a single variable whereas the latter involves partial derivatives of a multivariate function, whether with respect to several variables or just one of them. The general form of a linear ordinary differential equation is

\[a_n(x)\frac{d^ny}{dx^n}+a_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}}+\cdots+a_1(x) \frac{dy}{dx}+a_0(x)y=F(x),\tag1\]

and any differential equation that does not meet this form is considered to be nonlinear.

Solutions & Initial Value Problems

The general form of an nth-order differential equation where $x$ is the independent and $y$ is the dependent variable is

\[F\left(x,y,\frac{dy}{dx},\frac{d^2y}{dx^2},\ldots,\frac{d^ny}{dx^n}\right)=0, \tag2\label2\]

or

\[\frac{d^ny}{dx^n}=F\left(x,y,\frac{dy}{dx},\frac{d^2y}{dx^2},\ldots, \frac{d^{n-1}y}{dx^{n-1}}\right).\tag3\label3\]

A function $\phi(x)$ that, when substituted for $y$ in either $\eqref{2}$ or $\eqref{3}$, satisfies the equation for all $x$ on some interval $I$ is called an explicit solution to the equation on $I$. That sounds really cool, but at the moment it doesn’t really mean anything.
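To make it mean something, here’s a quick sanity check of my own (not an example from the book): $\phi(x)=e^{2x}$ is an explicit solution to $dy/dx-2y=0$ on $(-\infty,\infty)$, since substituting it for $y$ gives

\[\frac{d\phi}{dx}-2\phi=2e^{2x}-2e^{2x}=0\]

for every $x$.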

On the other hand, a relation $G(x,y)=0$ is said to be an implicit solution to $\eqref{3}$ on the interval $I$ if it implicitly defines one or more explicit solutions there. Solving an equation often produces a whole family of such solutions because of the arbitrary constant that integration introduces.

An initial value problem (IVP) for an nth-order differential equation ($\eqref{3}$) will ask you to find a solution on an interval $I$ that satisfies

\[y(x_0)=y_0,\frac{dy}{dx}(x_0)=y_1,\ldots,\frac{d^{n-1}y}{dx^{n-1}}(x_0)= y_{n-1},\tag4\]

where $x_0\in I$ and $y_0,y_1,\ldots,y_{n-1}$ are given constants. To determine the existence and uniqueness of a solution we use Picard’s Theorem:

Consider the initial value problem $dy/dx=f(x,y)$, $y(x_0)=y_0$. If $f$ and $\partial f/\partial y$ are continuous functions in some rectangle $R= \{ (x,y)\mid a\lt x\lt b, c\lt y\lt d \}$ that contains the point $(x_0,y_0)$, then the IVP has a unique solution $\phi(x)$ in some interval $x_0-\delta\lt x\lt x_0+\delta$, where $\delta\gt0$.
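As a quick illustration of why the hypotheses matter (my own example): the IVP $dy/dx=3y^{2/3}$, $y(0)=0$ has at least two solutions, $y\equiv0$ and $y=x^3$. Picard’s Theorem doesn’t guarantee uniqueness here because $\partial f/\partial y=2y^{-1/3}$ blows up at $y=0$, so it isn’t continuous on any rectangle containing $(0,0)$.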

The Approximation Method of Euler

The approximation method of Euler is super tedious to do by hand and so should be completely avoided unless you’re using it for numerical analysis / programming. To approximate the solution $\phi(x)$ of $dy/dx=f(x,y)$, $y(x_0)=y_0$, start at $(x_0,y_0)$ and repeat the following for each step $n$:

\[\begin{align} x_{n+1}&=x_n+h,\tag5 \\ y_{n+1}&=y_n+h\cdot f(x_n,y_n),\tag6\end{align}\]

where $h$ is the step size.
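Since this is really a programming method anyway, here’s a minimal sketch of steps (5) and (6) in Python. The function name euler and the sample IVP $dy/dx=x+y$, $y(0)=1$ are my own choices for illustration, not anything from these notes.

```python
def euler(f, x0, y0, h, steps):
    """Approximate the solution of dy/dx = f(x, y), y(x0) = y0."""
    x, y = x0, y0
    points = [(x, y)]
    for _ in range(steps):
        y = y + h * f(x, y)  # (6): y_{n+1} = y_n + h * f(x_n, y_n)
        x = x + h            # (5): x_{n+1} = x_n + h
        points.append((x, y))
    return points

# Example: roughly approximate phi(1) for dy/dx = x + y, y(0) = 1, h = 0.1.
print(euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 10)[-1])
```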

Separable Equations

A differential equation is separable when it can be expressed as

\[\frac{dy}{dx}=f(x)g(y),\tag7\]

which you would solve as

\[\int\frac1{g(y)}dy=\int f(x)dx,\tag8\]

and, writing $H(y)$ for an antiderivative of $h(y)=1/g(y)$ and $G(x)$ for an antiderivative of $f(x)$, this would be expressed as

\[H(y)=G(x)+C.\tag9\]
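Quick worked example of my own: $dy/dx=xy$ is separable with $f(x)=x$ and $g(y)=y$, so

\[\int\frac{dy}y=\int x\,dx\implies\ln|y|=\frac{x^2}2+C\implies y=Ce^{x^2/2},\]

where, as usual, we keep recycling the name $C$ for whatever constant falls out.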

Linear Equations

The general equation for a linear first-order differential equation is

\[a_1(x)\frac{d}{dx}y(x)+a_0(x)y(x)=b(x).\tag{10}\label{10}\]

We find two scenarios in which the solution to $\eqref{10}$ is simple. First when $a_0(x)\equiv0$:

\[\begin{align}a_1(x)\frac{d}{dx}y(x)&=b(x),\tag{11a}\\ y(x)&=\int\frac{b(x)}{a_1(x)}dx+C.\tag{11b}\end{align}\]

Second, the slightly less straightforward solution for when $a_0(x)=a_1'(x)$, which would mean that the LHS can be written as the derivative of a product:

\[\begin{align} \frac{d}{dx}\left[a_1(x)\cdot y(x)\right]&=b(x),\tag{12a}\label{12}\\ y(x)&=\frac1{a_1(x)}\left[\int b(x)dx+C\right].\tag{12b}\label{12b} \end{align}\]

Of course, it is rarely so convenient as to be in this form on its own, so we need to find a way to rewrite the equation so it fits the criteria of the second solution. We do this by multiplying by some magical function $\mu(x)$ (an integrating factor). The first step to achieving this is to write the equation in standard form:

\[\frac{dy}{dx}+P(x)\cdot y=Q(x),\tag{13}\label{13}\]

where $P(x)=a_0(x)/a_1(x)$ and $Q(x)=b(x)/a_1(x)$. From here we can determine that we need $\mu(x)$ to be some function such that

\[\mu(x)\cdot\frac{dy}{dx}+\mu(x)\cdot P(x)\cdot y=\mu(x)\cdot Q(x)\tag{14}\]

is the derivative of the product $\mu(x)\cdot y$:

\[\begin{align}\mu(x)\cdot\frac{dy}{dx}+\mu(x)\cdot P(x)\cdot y&=\frac{d}{dx} \left[\mu(x)\cdot y\right]\tag{15a}\\ &=\mu(x)\cdot\frac{dy}{dx}+\mu'(x)\cdot y\tag{15b}\\ \mu(x)\cdot P(x)&=\mu'(x).\tag{15c}\label{15c}\end{align}\]

Obviously, there’s only one good solution to $\eqref{15c}$:

\[\mu(x)=e^{\int P(x)dx}.\tag{16}\]

Somehow (since $\eqref{13}$ was created by dividing $\eqref{10}$ by $a_1(x)$, and $\mu(x)$ was calculated to be a function that, when $\eqref{13}$ is multiplied by it, satisfies $\eqref{12}$, $\mu(x)$ can be substituted for $a_1(x)$ in $\eqref{12b}$) this allows us to find the general solution:

\[y(x)=\frac1{\mu(x)}\left[\int\mu(x)Q(x)dx+C\right].\tag{17}\]
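Worked example of my own: for $dy/dx+2y=e^{-x}$ we have $P(x)=2$ and $Q(x)=e^{-x}$, so $\mu(x)=e^{\int2dx}=e^{2x}$ and

\[y(x)=\frac1{e^{2x}}\left[\int e^{2x}e^{-x}dx+C\right]=e^{-x}+Ce^{-2x}.\]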

Exact Equations

The differential form

\[M(x,y)dx+N(x,y)dy\tag{18}\]

is said to be exact in some rectangle $R$ if there is a function $F(x,y)$ such that

\[\frac{\partial}{\partial x}F(x,y)=M(x,y)\tag{19}\]

and

\[\frac{\partial}{\partial y}F(x,y)=N(x,y)\tag{20}\]

for all $(x,y)$ in $R$. If the aforementioned form is exact then

\[M(x,y)dx+N(x,y)dy=0\tag{21}\]

is called an exact equation. Exactness can be tested by using the equation

\[\frac\partial{\partial y}M(x,y)=\frac\partial{\partial x}N(x,y).\tag{22}\]

Method for solving differential equations that are exact:

  1. If $Mdx+Ndy=0$ is exact then $\partial F/\partial x=M$ which can be integrated wrt $x$ yielding

    \[F(x,y)=\int M(x,y)dx+g(y).\tag{23}\label{23}\]
  2. Differentiate $\eqref{23}$ wrt $y$ finding

    \[\begin{align}N(x,y)&=\frac\partial{\partial y}\int M(x,y)dx+g'(y)\tag{24a} \\ g'(y)&=N(x,y)-\frac\partial{\partial y}\int M(x,y)dx.\tag{24b}\end{align}\]
  3. Integrate $g'(y)$ to obtain $g(y)$ and substitute it into $\eqref{23}$ to find $F(x,y)$.

  4. The solution is given implicitly by

    \[F(x,y)=C.\tag{25}\]

This can also be done using $N(x,y)$ and an integral wrt $y$ in step 1 instead of $M(x,y)$ and wrt $x$.
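Here’s the method run on an example of my own choosing: $(2xy+1)dx+(x^2+3y^2)dy=0$. It’s exact since $\partial M/\partial y=2x=\partial N/\partial x$. Step 1 gives $F(x,y)=\int(2xy+1)dx+g(y)=x^2y+x+g(y)$; step 2 gives $g'(y)=N(x,y)-x^2=3y^2$; step 3 gives $g(y)=y^3$; so step 4 says the solution is

\[x^2y+x+y^3=C.\]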

The Mass-Spring Oscillator

The differential equation that describes the motion of a mass on a spring is

\[my''+by'+ky=F_\text{ext}(t),\tag{26}\]

where $m\gt0$ is the mass, $b\ge0$ is the damping coefficient, and $k\gt0$ is the spring constant (stiffness) of the spring. When $b=0$ and $F_\text{ext}(t)\equiv0$ there is a solution of the form $y(t)=\cos(\omega t)$, where $\omega=\sqrt{k/m}$ is the angular frequency.
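For example (mine, with made-up numbers): take $m=1$, $k=4$, and $b=0$ with $F_\text{ext}\equiv0$. Then the equation is $y''+4y=0$, and $y(t)=\cos(2t)$ works since $y''=-4\cos(2t)=-4y$, matching $\omega=\sqrt{k/m}=2$.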

Homogeneous Linear Equations: The General Solution

The next few sections are all about second-order linear differential equations, starting with homogeneous ones with constant coefficients, which are of the form

\[ay''+by'+cy=0.\tag{27}\]

By looking at this we should be able to tell that the second derivative needs to be expressible as a linear combination of the first derivative and the function itself. This suggests we try to find solutions of the form $e^{rt}$. By substituting this into the general form we find

\[\begin{align} 0&=ar^2e^{rt}+bre^{rt}+ce^{rt}\tag{28a} \\ &=e^{rt}(ar^2+br+c)\tag{28b}\label{28b} \\ 0&=ar^2+br+c.\tag{28c}\label{28c} \end{align}\]

Since $e^{rt}\neq0$ we can divide by it in $\eqref{28b}$. $\eqref{28c}$ is called the auxiliary or characteristic equation.

The trivial solution ($y(t)\equiv0$) is always a solution. If we have any pair of solutions $y_1$ and $y_2$ we can construct an infinite number of linear combinations that are also solutions (the proof of this is just typical superposition stuff, so trust me and/or try it out for yourself):

\[y(t)=c_1y_1(t)+c_2y_2(t).\tag{29}\]

The two degrees of freedom ($c_1$ and $c_2$) imply that two conditions can be imposed, such as $y(0)$ and $y'(0)$ in an initial value problem. This will lead to systems of linear equations, so get ready for that.

There’s also an existence and uniqueness result here. For any real numbers $a\neq0$, $b$, $c$, $t_0$, $Y_0$, and $Y_1$, there exists a unique solution to the initial value problem

\[ay''+by'+cy=0, y(t_0)=Y_0, y'(t_0)=Y_1,\tag{30}\]

which is valid for all $t$ in $(-\infty,\infty)$. A particularly important case is that of $Y_0=Y_1=0$, when $y(t)$ must be identically zero ($y(t)\equiv0$). Really important things happen if $y_1$ and $y_2$ are linearly dependent: then $y_2(t)=k\cdot y_1(t)$ and so

\[c_1y_1(t)+c_2y_2(t)=(c_1+k\cdot c_2)y_1(t)=C\cdot y_1(t),\tag{31}\]

which is only one constant ($C$) and thus only one degree of freedom. As such we need linearly independent functions for $y_1$ and $y_2$, so definitely not just like constant multiples of each other. Then we’re going to need a way to tell if things are linearly independent or not.

For any real numbers $a\neq0$, $b$, and $c$, if $y_1$ and $y_2$ are any two solutions to the differential equation $ay''+by'+cy=0$ in $(-\infty,\infty)$ and the equality

\[y_1(\tau)y_2'(\tau)-y_2(\tau)y_1'(\tau)=0\tag{32}\label{32}\]

holds at even a single point $\tau$, then $y_1$ and $y_2$ are linearly dependent in $(-\infty,\infty)$. Fun fact, the LHS in $\eqref{32}$ is called the Wronskian of $y_1$ and $y_2$ at $\tau$, which is written like this:

\[W[y_1,y_2](\tau)=\begin{vmatrix}y_1(\tau)&y_2(\tau)\\y_1'(\tau)&y_2'(\tau) \end{vmatrix}=y_1(\tau)y_2'(\tau)-y_2(\tau)y_1'(\tau).\tag{33}\]
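To see all of this machinery at once (example of my own): for $y''-y'-2y=0$ the auxiliary equation is $r^2-r-2=(r-2)(r+1)=0$, so $y_1=e^{2t}$ and $y_2=e^{-t}$. Their Wronskian is

\[W[e^{2t},e^{-t}](t)=e^{2t}(-e^{-t})-e^{-t}(2e^{2t})=-3e^t\neq0,\]

so they’re linearly independent and the general solution is $y(t)=c_1e^{2t}+c_2e^{-t}$.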

Auxiliary Equations with Complex Roots

When $b^2-4ac\lt0$, the roots of the auxiliary equation are complex:

\[r=\alpha\pm\beta i, \alpha=-\frac b{2a}, \beta=\frac{\sqrt{4ac-b^2}} {2a},\tag{34}\]

so we end up with $e^{rt}=e^{(\alpha+\beta i)t}=e^{\alpha t}e^{i\beta t}$. We know what $e^{\alpha t}$ is because it matches the form $e^{at}$ but what about the $e^{i\beta t}$ term? Well, we’re going to do some fun math:

\[\begin{align}e^{i\theta}&=1+i\theta+\frac{(i\theta)^2}{2!}+\cdots+ \frac{(i\theta)^n}{n!}+\cdots\tag{35a}\\ &=1+i\theta-\frac{\theta^2}{2!}-\frac{i\theta^3}{3!}+\frac{\theta^4}{4!} +\frac{i\theta^5}{5!}+\cdots\tag{35b}\\ &=\left(1-\frac{\theta^2}{2!}+\frac{\theta^4}{4!}+\cdots\right)+ i\left(\theta-\frac{\theta^3}{3!}+\frac{\theta^5}{5!}+\cdots\right)\tag{35c}\\ e^{i\theta}&=\cos(\theta)+i\sin(\theta)\tag{35d}\end{align}\]

This is known as Euler’s Formula because as mathematicians it is our job to suck Euler’s dick 24/7 and name everything he touched after him. Plugging this back into the general solution $y=c_1e^{r_1t}+c_2e^{r_2t}$ with $r=\alpha\pm\beta i$ we get

\[y(t)=c_1e^{\alpha t}(\cos(\beta t)+i\sin(\beta t))+c_2e^{\alpha t} (\cos(\beta t)-i\sin(\beta t)),\tag{36}\]

which still has $i$ so it isn’t a “real” solution. So, imposing societal standards on it through some mathematical wizardry I’m not 100% certain about (the real and imaginary parts of a complex-valued solution are themselves solutions, thanks to superposition), we find the new form:

\[y(t)=c_1e^{\alpha t}\cos(\beta t)+c_2e^{\alpha t}\sin(\beta t).\tag{37}\]
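For example (mine again): $y''+2y'+5y=0$ has auxiliary equation $r^2+2r+5=0$ with roots $r=-1\pm2i$, so $\alpha=-1$, $\beta=2$ and

\[y(t)=c_1e^{-t}\cos(2t)+c_2e^{-t}\sin(2t).\]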

Nonhomogeneous Equations: The Method of Undetermined Coefficients

Both the book and my class went through this whole bit they called “judicious guessing” which is just glorified trial and error to find some method by which solutions could be found. I’ll be skipping that because lmao fuck off.

To find a particular solution to the differential equation

\[ay''+by'+cy=Ct^me^{rt},\tag{38}\]

where $m$ is a non-negative integer, we use the form

\[y_p(t)=t^s\left(A_mt^m+\cdots+A_1t+A_0\right)e^{rt},\tag{39}\]

where $s$ is $0$ if $r$ is not a root, $1$ if $r$ is a simple root, and $2$ if $r$ is a double root of the auxiliary equation. To find a particular solution to the differential equation

\[ay''+by'+cy=Ct^me^{\alpha t}\cos(\beta t)\text{ or }Ct^me^{\alpha t} \sin(\beta t)\tag{40}\]

for $\beta\neq0$, we use the form

\[y_p(t)=t^s\left(A_mt^m+\cdots+A_1t+A_0\right)e^{\alpha t}\cos(\beta t) +t^s\left(B_mt^m+\cdots+B_1t+B_0\right) e^{\alpha t}\sin(\beta t),\tag{41}\label{41}\]

where $s$ is $0$ if $\alpha+\beta i$ is not a root and $1$ if $\alpha+\beta i$ is a root of the auxiliary equation.
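A quick example of my own to make the $s$ business concrete: for $y''-y=te^t$ the auxiliary equation $r^2-1=0$ has roots $\pm1$, so $r=1$ is a simple root, giving $s=1$ and $m=1$. The guess is $y_p(t)=t(A_1t+A_0)e^t$, and substituting and matching coefficients gives $A_1=\frac14$ and $A_0=-\frac14$, so

\[y_p(t)=\left(\frac{t^2}4-\frac t4\right)e^t.\]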

The Superposition Principle

The superposition principle goes like: Let $y_1$ be a solution to

\[ay''+by'+cy=f_1(t)\tag{42}\]

and $y_2$ be a solution to

\[ay''+by'+cy=f_2(t).\tag{43}\]

Then, for any constants $k_1$ and $k_2$, the function $k_1y_1+k_2y_2$ is a solution to

\[ay''+by'+cy=k_1f_1(t)+k_2f_2(t).\tag{44}\]

Onto the topic of existence and uniqueness: for any real numbers $a\neq0$, $b$, $c$, $t_0$, $Y_0$, and $Y_1$, suppose $y_p(t)$ is a particular solution to $ay''+by'+cy=f(t)$ in an interval $I$ where $t_0\in I$ and that $y_1$ and $y_2$ are linearly independent solutions to the corresponding homogeneous equation in $I$; then there exists a unique solution in $I$ to the initial value problem

\[ay''+by'+cy=f(t),y(t_0)=Y_0,y'(t_0)=Y_1\tag{45}\]

and said solution is given by the form

\[y(t)=y_p(t)+c_1y_1(t)+c_2y_2(t)\tag{46}\]

for the appropriate choice of constants $c_1$ and $c_2$. Now, the superposition principle and the method of undetermined coefficients make sweet, sweet love and birth a hybrid, which looks something like this:

\[ay''+by'+cy=P_m(t)e^{rt},\tag{47}\]

where $P_m$ is a polynomial of degree $m$. The solution then is of the form

\[y_p(t)=t^s\left(A_mt^m+\cdots+A_1t+A_0\right)e^{rt},\tag{48}\]

where $s$ is determined by the same method as previously. It could also look something like this:

\[ay''+by'+cy=P_m(t)e^{\alpha t}\cos(\beta t)+Q_n(t)e^{\alpha t} \sin(\beta t),\beta\neq0,\tag{49}\]

where $P_m(t)$ and $Q_n(t)$ are polynomials of degree $m$ and $n$ respectively. The solution is then of the form

\[y_p(t)=t^s\left(A_kt^k+\cdots+A_1t+A_0\right)e^{\alpha t}\cos(\beta t) +t^s\left(B_kt^k+\cdots+B_1t+B_0\right)e^{\alpha t}\sin(\beta t),\tag{50}\]

where $k$ is the larger of $m$ and $n$, and $s$ is defined by the conditions given for $\eqref{41}$.
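Putting the last two sections together on an example of my own: for $y''+y=t+\sin(2t)$, split the forcing into $f_1=t$ and $f_2=\sin(2t)$. Undetermined coefficients gives the particular solutions $y_{p_1}=t$ and $y_{p_2}=-\frac13\sin(2t)$, so by superposition the general solution is

\[y(t)=c_1\cos t+c_2\sin t+t-\frac13\sin(2t).\]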

Variable Coefficient Equations

Solving an equation of the form

\[a_2(t)y''+a_1(t)y'+a_0(t)y=f(t),\tag{51}\]

we typically divide both sides by $a_2(t)$ to achieve the standard form

\[y''+p(t)y'+q(t)y=g(t),y(t_0)=Y_0,y'(t_0)=Y_1,\tag{52}\]

where $p(t)=a_1(t)/a_2(t)$, $q(t)=a_0(t)/a_2(t)$, $g(t)=f(t)/a_2(t)$, and $Y_0$ and $Y_1$ are some constants. Some theorem simply called “Theorem 5” gives a way to test for existence and uniqueness here:

Suppose $p(t)$, $q(t)$, and $g(t)$ are continuous on the interval $(a,b)$ which contains the point $t_0$, then for any choice of initial values $Y_0$ and $Y_1$ there exists a unique solution $y(t)$ on the same interval $(a,b)$ to the initial value problem.

A linear second-order differential equation that can be expressed in the form

\[at^2y''+bty'+cy=f(t),\tag{53}\]

where $a$, $b$, and $c$ are constants, is called a Cauchy-Euler equation. To solve these we must acquire the characteristic equation; for this we simply substitute $t^r$ for $y$ in the homogeneous equation ($f(t)=0$), which gives us

\[ar^2+(b-a)r+c=0,\tag{54}\]

from which we should hopefully be able to derive the values $r_1$ and $r_2$ such that

\[y_1=t^{r_1},y_2=t^{r_2}.\tag{55}\]

If $r$ is in the form $\alpha\pm\beta i$ then we’ve got complex roots and the solutions will be of the form

\[y_1=t^\alpha\cos(\beta\ln t),y_2=t^\alpha\sin(\beta\ln t),\tag{56}\]

and if $r$ is a double root the solutions will be of the form

\[y_1=t^r,y_2=t^r\ln t.\tag{57}\]
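Example of my own: $t^2y''+7ty'+5y=0$ has $a=1$, $b=7$, $c=5$, so the characteristic equation is $r^2+6r+5=(r+1)(r+5)=0$, giving $r_1=-1$ and $r_2=-5$, and thus $y_1=t^{-1}$ and $y_2=t^{-5}$.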

Assuming $y_1$ and $y_2$ are linearly independent, we can declare the linear combinations $y_h=c_1y_1+c_2y_2$ and $y_p=v_1y_1+v_2y_2$ (this is the method of variation of parameters). Assuming some interval $I$ where $p$, $q$, and $g$ are continuous, by substituting $y_p$ we can find (I’m sparing the actual derivation of this because I don’t care):

\[\begin{align}y_1v_1'+y_2v_2'&=0\tag{58a}\\ y_1'v_1'+y_2'v_2'&=g,\tag{58b}\end{align}\]

where we specifically chose the first equation so we could avoid $v’’$ being involved in the equation. There’s a shortcut for finding these solutions that we weren’t allowed to use in class but this isn’t class so here it is:

\[\begin{align}v_1(t)&=\int\frac{-g(t)y_2(t)}{W[y_1,y_2](t)}dt,\tag{59a}\\ v_2(t)&=\int\frac{g(t)y_1(t)}{W[y_1,y_2](t)}dt,\tag{59b}\end{align}\]

and in case we only know one solution $y_1$ and need to find another (reduction of order):

\[y_2(t)=y_1(t)\int\frac{e^{-\int p(t)dt}}{y_1(t)^2}dt.\tag{60}\]
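For instance (my example): $y_1=t$ solves $t^2y''-2ty'+2y=0$, which in standard form has $p(t)=-2/t$, so

\[y_2(t)=t\int\frac{e^{-\int(-2/t)dt}}{t^2}dt=t\int\frac{t^2}{t^2}dt=t^2.\]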

Introduction: The Taylor Polynomial Series

Quick review, this is a Taylor Series:

\[\sum_{j=0}^\infty\frac{f^{(j)}(x_0)}{j!}(x-x_0)^j,\tag{61}\]

and a Taylor Polynomial is just the first $n$ terms of this.

Power Series and Analytic Functions

A power series about the point $x_0$ is an expression of the form

\[f=\sum_{n=0}^\infty a_n(x-x_0)^n=a_0+a_1(x-x_0)+a_2(x-x_0)^2+\cdots.\tag{62}\]

I’m assuming you’ve taken at least Calculus II, so I’m just going to skip to more relevant information. If the series has a positive radius of convergence $\rho$, then $f$ is differentiable in the interval $|x-x_0|\lt\rho$ and termwise differentiation gives us

\[f'=\sum na_n(x-x_0)^{n-1}, |x-x_0|<\rho,\tag{63}\]

and termwise integration would give us

\[\int f\,dx=\sum(n+1)^{-1}a_n(x-x_0)^{n+1}+C, |x-x_0|<\rho.\tag{64}\]

For reference, the index of the summation is a dummy variable with no bearing outside of the context of the summation, so we can shift it around by constants and maybe variables if you know what you’re doing to clean up the summation a bit.

Power Series Solutions to Linear Differential Equations

A point $x_0$ is called an ordinary point of the equation $a_2(x)y''+a_1(x)y'+a_0(x)y=0$ if both $p=a_1/a_2$ and $q=a_0/a_2$ are analytic at $x_0$. Otherwise we would call it a singular point. We’re not too concerned about the exact definition of analytic here, so we’ll just use defined and continuous.

There’s this thing called a recurrence relation. By plugging the generic power series into the differential equation, setting it equal to $\sum0\cdot x^n$ (the series equivalent of just $0$), and shifting all the series to have the generic term $x^n$, you’ll get some equation in $a_n$, $a_{n+1}$, and/or $a_{n+2}$, which will allow you to find some relation between $a_{n+2}$ or $a_{n+1}$ and $a_n$. By solving for several terms using this relation you can find some pattern by which to define $a_n$, which can then be plugged into $y=\sum a_nx^n$ to find the actual solution.
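To make that less abstract, here’s a small example of my own: for $y''+y=0$, plug in $y=\sum a_nx^n$ and shift the $y''$ series so everything has the generic term $x^n$:

\[\sum_{n=0}^\infty\left[(n+2)(n+1)a_{n+2}+a_n\right]x^n=0\implies a_{n+2}=\frac{-a_n}{(n+2)(n+1)}.\]

Running the recurrence from $a_0$ builds the series for $a_0\cos x$, and running it from $a_1$ builds $a_1\sin x$, which is reassuring since we already know those are the solutions.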

When your coefficients are functions, you basically just multiply the function out termwise against the series and simplify from there or whatever.

Definition of the Laplace Transform

The Laplace Transform of $f$ is the function $F$ defined as

\[\mathcal{L}\{f(t)\}(s)=F(s):=\int_0^\infty e^{-st}f(t)dt.\tag{65}\]

Sometimes we’ll have to solve these by hand, over odd intervals, for weird functions, especially at first. Eventually we can just use a Laplace Transform Table (there isn’t one on this page, so just Google for it, I suppose).
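Here’s what that by-hand computation looks like for a function of my choosing, $f(t)=e^{at}$:

\[\mathcal{L}\{e^{at}\}(s)=\int_0^\infty e^{-st}e^{at}dt=\int_0^\infty e^{-(s-a)t}dt=\frac1{s-a},\quad s\gt a.\]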

This transform is linear, so we can say that

\[\mathcal{L}\{f_1+f_2\}=\mathcal{L}\{f_1\}+\mathcal{L}\{f_2\},\tag{66}\]

and

\[\mathcal{L}\{c\cdot f\}=c\cdot\mathcal{L}\{f\}.\tag{67}\]

The Laplace Transform of $f$ exists when $f$ is piecewise continuous on $[0,\infty)$. The transform is generally pretty useful in simplifying some differential equations into easier algebraic equations.

Properties of the Laplace Transform

A translation in $s$ looks like

\[\mathcal{L}\{e^{at}f(t)\}(s)=F(s-a).\tag{68}\]

The Laplace Transform of the derivative of $f$ (assuming $f$ is of exponential order $a$), for the first and $n$th derivatives:

\[\begin{align} \mathcal{L}\{f'(t)\}(s)&=s\cdot\mathcal{L}\{f(t)\}(s)-f(0), \tag{69a}\label{69} \\ \mathcal{L}\{f^{(n)}(t)\}(s)&=s^n\cdot\mathcal{L}\{f(t)\}(s)-s^{n-1}\cdot f(0)- s^{n-2}\cdot f'(0)-\cdots-1\cdot f^{(n-1)}(0).\tag{69b} \end{align}\]

For a function of the form $t^nf(t)$ we can use the formula

\[\mathcal{L}\{t^nf(t)\}(s)=(-1)^n\frac{d^n}{ds^n}F(s).\tag{70}\]

The Inverse Laplace Transform is the opposite of the Laplace Transform such that when $\mathcal{L}\{f\}=F$ there exists the inverse $\mathcal{L}^{-1} \{F\}=f$. This is also linear. Keep in mind the method of partial fractions for this. For solving IVPs with the Laplace Transform, apply the transform to both sides of the equation, solve for $Y(s)$ ($\mathcal{L}\{y(t)\}=Y(s)$), then apply the inverse transform. Also remember to use $\eqref{69}$ (nice) for a lot of problems.
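For a tiny worked IVP of my own: $y''+y=0$, $y(0)=0$, $y'(0)=1$. Applying (69b) to both sides gives $s^2Y(s)-s\cdot0-1+Y(s)=0$, so $Y(s)=1/(s^2+1)$, and the table entry $\mathcal{L}\{\sin t\}(s)=1/(s^2+1)$ gives $y(t)=\sin t$.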