
Notes on Differential Equations

By Una Ada, April 27, 2018

Background

There are ordinary differential equations and partial differential equations, which differ in that the former is with respect to (wrt) a single variable, whereas the latter may be with respect to multiple variables, or a single variable of a multivariate function. The general form of a linear ordinary differential equation is

(1) $a_n(x)\frac{d^ny}{dx^n} + a_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}} + \cdots + a_1(x)\frac{dy}{dx} + a_0(x)y = F(x),$

and any differential equation that does not meet this form is considered to be nonlinear.

Solutions & Initial Value Problems

The general form of an nth-order differential equation, where x is the independent and y the dependent variable, is

(2) $F\left(x, y, \frac{dy}{dx}, \frac{d^2y}{dx^2}, \ldots, \frac{d^ny}{dx^n}\right) = 0,$

or

(3) $\frac{d^ny}{dx^n} = F\left(x, y, \frac{dy}{dx}, \frac{d^2y}{dx^2}, \ldots, \frac{d^{n-1}y}{dx^{n-1}}\right).$

We then define some function ϕ(x) that, when substituted for y in (2) or (3), satisfies the equation for all x on some interval I; such a ϕ(x) is called an explicit solution to the equation on I. That sounds really cool, but at the moment it doesn’t really mean anything.

On the other hand, a relation G(x,y)=0 is said to be an implicit solution to (3) on the interval I if it defines one or more explicit solutions there. This often defines multiple solutions because of the constant term that is created through integration.

An initial value problem (IVP) for an nth-order differential equation (3) will ask you to find a solution on the interval I that satisfies

(4) $y(x_0) = y_0, \quad \frac{dy}{dx}(x_0) = y_1, \quad \ldots, \quad \frac{d^{n-1}y}{dx^{n-1}}(x_0) = y_{n-1},$

where $x_0 \in I$ and $y_0, y_1, \ldots, y_{n-1}$ are given constants. [11] To determine the existence and uniqueness of a solution we use Picard’s Theorem:

Consider the initial value problem $dy/dx = f(x,y)$ and $y(x_0) = y_0$. If $f$ and $\partial f/\partial y$ are continuous functions in some rectangle $R = \{(x,y) : a < x < b,\ c < y < d\}$ that contains the point $(x_0, y_0)$, then the IVP has a unique solution $\phi(x)$ in some interval $x_0 - \delta < x < x_0 + \delta$, where $\delta > 0$.

The Approximation Method of Euler

The approximation method of Euler is super tedious to do by hand and so should be completely avoided unless you’re using it for numerical analysis / programming. To approximate ϕ(x) repeat the following for all steps n:

(5) $x_{n+1} = x_n + h,$ (6) $y_{n+1} = y_n + h\,f(x_n, y_n),$

where h is the step size.
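Since it’s really only worth doing on a computer anyway, here’s a minimal sketch of the method in Python; the function name and the sample IVP ($dy/dx = x + y$ with $y(0) = 1$) are my own, purely for illustration.

    def euler(f, x0, y0, h, steps):
        """Approximate the solution of dy/dx = f(x, y), y(x0) = y0."""
        x, y = x0, y0
        points = [(x, y)]
        for _ in range(steps):
            y = y + h * f(x, y)   # (6): y_{n+1} = y_n + h f(x_n, y_n)
            x = x + h             # (5): x_{n+1} = x_n + h
            points.append((x, y))
        return points

    # Example: dy/dx = x + y, y(0) = 1; estimate y(1) with h = 0.1
    print(euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 10)[-1])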

Separable Equations

A differential equation is separable when it can be expressed as

(7) $\frac{dy}{dx} = f(x)\,g(y),$

which you would solve as

(8) $\int \frac{1}{g(y)}\,dy = \int f(x)\,dx,$

and given some function h(y)=1/g(y), with H(y) an antiderivative of h(y) and G(x) an antiderivative of f(x), this would be expressed as

(9) $H(y) = G(x) + C.$
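As a quick worked example of my own (not from the original notes), take $\frac{dy}{dx} = xy$, so $f(x) = x$ and $g(y) = y$:

$\int \frac{1}{y}\,dy = \int x\,dx \quad\Longrightarrow\quad \ln|y| = \frac{x^2}{2} + C \quad\Longrightarrow\quad y = C_1 e^{x^2/2}.$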

Linear Equations

The general equation for a linear first-order differential equation is

(10) $a_1(x)\frac{d}{dx}y(x) + a_0(x)\,y(x) = b(x).$

We find two scenarios in which the solution to (10) is simple. First, when $a_0(x) \equiv 0$:

(11a) $a_1(x)\frac{d}{dx}y(x) = b(x),$ (11b) $y(x) = \int \frac{b(x)}{a_1(x)}\,dx + C.$

Second, the slightly less straightforward solution for when $a_0(x) = a_1'(x)$, which would mean that the LHS can be written as the derivative of a product:

(12a) $\frac{d}{dx}\left[a_1(x)\,y(x)\right] = b(x),$ (12b) $y(x) = \frac{1}{a_1(x)}\left[\int b(x)\,dx + C\right].$

Of course, it is rarely so convenient as to be in this form on its own, so we need to find a way to rewrite the equation so it fits the criteria of the second solution. We do this by multiplying by some magical function μ(x). The first step to achieving this is to write the equation in standard form:

(13) $\frac{dy}{dx} + P(x)\,y = Q(x),$

where P(x)=a0(x)/a1(x) and Q(x)=b(x)/a1(x). From here we can determine that we need μ(x) to be some function such that

(14) $\mu(x)\frac{dy}{dx} + \mu(x)P(x)\,y = \mu(x)\,Q(x)$

is the derivative of the product μ(x)y:

(15a) $\mu(x)\frac{dy}{dx} + \mu(x)P(x)\,y = \frac{d}{dx}\left[\mu(x)\,y\right]$
(15b) $= \mu(x)\frac{dy}{dx} + \mu'(x)\,y$
(15c) $\mu(x)P(x) = \mu'(x).$

Obviously, there’s only one good solution to (15c):

(16) $\mu(x) = e^{\int P(x)\,dx}.$

Somehow (since (13) was created by dividing (10) by $a_1(x)$, and $\mu(x)$ was calculated to be a function that, when multiplied by (13), would satisfy (12a), $\mu(x)$ can be substituted for $a_1(x)$ in (12b)) this allows us to find the general solution:

(17) $y(x) = \frac{1}{\mu(x)}\left[\int \mu(x)\,Q(x)\,dx + C\right].$
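A quick worked example of my own: take $\frac{dy}{dx} + 2y = 3$, so $P(x) = 2$ and $Q(x) = 3$. Then

$\mu(x) = e^{\int 2\,dx} = e^{2x}, \qquad y(x) = \frac{1}{e^{2x}}\left[\int 3e^{2x}\,dx + C\right] = \frac{3}{2} + Ce^{-2x}.$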

Exact Equations

The differential form

(18) $M(x,y)\,dx + N(x,y)\,dy$

is said to be exact in some rectangle R if there is a function F(x,y) such that

(19) $\frac{\partial}{\partial x}F(x,y) = M(x,y)$

and

(20) $\frac{\partial}{\partial y}F(x,y) = N(x,y)$

for all (x,y) in R. If the aforementioned form is exact then

(21) $M(x,y)\,dx + N(x,y)\,dy = 0$

is called an exact equation. Exactness can be tested by using the equation

(22) $\frac{\partial}{\partial y}M(x,y) = \frac{\partial}{\partial x}N(x,y).$

Method for solving differential equations that are exact:

  1. If $M\,dx + N\,dy = 0$ is exact then $\partial F/\partial x = M$, which can be integrated wrt x, yielding

    (23) $F(x,y) = \int M(x,y)\,dx + g(y).$
  2. Differentiate (23) wrt y finding

    (24a) $N(x,y) = \frac{\partial}{\partial y}\int M(x,y)\,dx + g'(y)$, (24b) $g'(y) = N(x,y) - \frac{\partial}{\partial y}\int M(x,y)\,dx.$
  3. Integrate $g'(y)$ to obtain $g(y)$ and substitute it into (23) to find F(x,y).

  4. The solution is given implicitly by

    (25) $F(x,y) = C.$

This can also be done using N(x,y) and an integral wrt y in step 1 instead of M(x,y) and wrt x.
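Here’s a small worked example of my own: take $(2xy + 1)\,dx + x^2\,dy = 0$. It’s exact, since $\frac{\partial}{\partial y}(2xy + 1) = 2x = \frac{\partial}{\partial x}x^2$. Following the steps, $F(x,y) = \int (2xy + 1)\,dx + g(y) = x^2y + x + g(y)$; differentiating wrt y gives $x^2 + g'(y) = x^2$, so $g'(y) = 0$ and we can take $g(y) = 0$. The solution is then given implicitly by $x^2y + x = C$.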

The Mass-Spring Oscillator

The differential equation that describes the motion of a mass on a spring is

(26) $m\,y'' + b\,y' + k\,y = F_{\text{ext}}(t),$

where $m > 0$ is the mass of the mass, $b \ge 0$ is the damping coefficient of the spring, and $k$ is the spring constant (stiffness) of the spring. When $b = 0$ and $F_{\text{ext}} = 0$ there is a solution in the form $y(t) = \cos(\omega t)$, where $\omega$ is the angular frequency defined by $\omega = \sqrt{k/m}$.

Homogeneous Linear Equations: The General Solution

The next few sections are all about second order differential equations, meaning that they are of the form

(27) $a\,y'' + b\,y' + c\,y = 0.$

By looking at this we should be able to tell that the second derivative needs to be expressible as a linear combination of the first and zeroth. This suggests we try to find solutions of the form $e^{rt}$. By substituting this into the general form we find

(28a) $0 = ar^2e^{rt} + bre^{rt} + ce^{rt}$
(28b) $= e^{rt}(ar^2 + br + c)$
(28c) $0 = ar^2 + br + c.$

Since $e^{rt} \ne 0$ we can divide by it in (28b). (28c) is called the auxiliary or characteristic equation.

The trivial solution ($y(t) \equiv 0$) is always a solution. If we have any pair of solutions $y_1$ and $y_2$ we can construct an infinite number of linear combinations that are also solutions (the proof of this is just typical superposition stuff, so trust me and/or try it out for yourself):

(29) $y(t) = c_1y_1(t) + c_2y_2(t).$

The two degrees of freedom ($c_1$ and $c_2$) imply that two conditions can be imposed, such as $y(0)$ and $y'(0)$ in an initial value problem. This will lead to systems of linear equations, so get ready for that.
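As a worked example of my own, take $y'' - 3y' + 2y = 0$ with $y(0) = 1$ and $y'(0) = 0$. The auxiliary equation $r^2 - 3r + 2 = (r - 1)(r - 2) = 0$ gives $r = 1, 2$, so $y(t) = c_1e^t + c_2e^{2t}$. The initial conditions give the system $c_1 + c_2 = 1$ and $c_1 + 2c_2 = 0$, so $c_1 = 2$, $c_2 = -1$, and $y(t) = 2e^t - e^{2t}$.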

This can also be used to prove uniqueness. For any real numbers $a \ne 0$, $b$, $c$, $t_0$, $Y_0$, and $Y_1$, there exists a unique solution to the initial value problem

(30) $a\,y'' + b\,y' + c\,y = 0, \quad y(t_0) = Y_0, \quad y'(t_0) = Y_1,$

which is valid for all t in $(-\infty, \infty)$. A particularly important case is that of $Y_0 = Y_1 = 0$, when $y(t)$ must be identically zero ($y(t) \equiv 0$). Really important things happen if $y_1$ and $y_2$ are linearly dependent: then $y_2(t) = ky_1(t)$ and so

(31) $c_1y_1(t) + c_2y_2(t) = (c_1 + kc_2)\,y_1(t) = Cy_1(t),$

which is only one constant (C) and thus only one degree of freedom. As such we need linearly independent functions for y1 and y2, so definitely not just like constant multiples of each other. Then we’re going to need a way to tell if things are linearly independent or not.

For any real numbers $a \ne 0$, $b$, and $c$, if $y_1$ and $y_2$ are any two solutions to the differential equation $a\,y'' + b\,y' + c\,y = 0$ on $(-\infty, \infty)$ and the equality

(32) $y_1(\tau)\,y_2'(\tau) - y_2(\tau)\,y_1'(\tau) = 0$

holds at some point τ, then $y_1$ and $y_2$ are linearly dependent on $(-\infty, \infty)$. Fun fact, the LHS in (32) is called the Wronskian of $y_1$ and $y_2$ at τ, which is written like this:

(33) $W[y_1, y_2](\tau) = \begin{vmatrix} y_1(\tau) & y_2(\tau) \\ y_1'(\tau) & y_2'(\tau) \end{vmatrix} = y_1(\tau)\,y_2'(\tau) - y_2(\tau)\,y_1'(\tau).$
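For instance (my example, not from the notes): for $y_1 = e^t$ and $y_2 = e^{2t}$, $W[e^t, e^{2t}](\tau) = e^{\tau} \cdot 2e^{2\tau} - e^{2\tau} \cdot e^{\tau} = e^{3\tau} \ne 0$, so they are linearly independent.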

Auxiliary Equations with Complex Roots

When $b^2 - 4ac < 0$ then the roots to the auxiliary equation are complex:

(34) $r = \alpha \pm \beta i, \quad \alpha = -\frac{b}{2a}, \quad \beta = \frac{\sqrt{4ac - b^2}}{2a},$

so we end up with $e^{rt} = e^{(\alpha + \beta i)t} = e^{\alpha t}e^{i\beta t}$. We know what $e^{\alpha t}$ is because it matches the form $e^{at}$, but what about the $e^{i\beta t}$ term? Well, we’re going to do some fun math:

(35a) $e^{i\theta} = 1 + i\theta + \frac{(i\theta)^2}{2!} + \cdots + \frac{(i\theta)^n}{n!} + \cdots$
(35b) $= 1 + i\theta - \frac{\theta^2}{2!} - \frac{i\theta^3}{3!} + \frac{\theta^4}{4!} + \frac{i\theta^5}{5!} - \cdots$
(35c) $= \left(1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \cdots\right) + i\left(\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots\right)$
(35d) $e^{i\theta} = \cos(\theta) + i\sin(\theta)$

This is known as Euler’s Formula because as mathematicians it is our job to suck Euler’s dick 24/7 and name everything he touched after him. Plugging this back into the general solution we get

(36) $y(t) = c_1e^{\alpha t}(\cos(\beta t) + i\sin(\beta t)) + c_2e^{\alpha t}(\cos(\beta t) - i\sin(\beta t)),$

which still has i in it, so it isn’t a “real” solution. So, imposing societal standards on it through some mathematical wizardry I’m not 100% certain about, we find the new form:

(37) $y(t) = c_1e^{\alpha t}\cos(\beta t) + c_2e^{\alpha t}\sin(\beta t).$
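A quick worked example of my own: $y'' + 2y' + 5y = 0$ has auxiliary equation $r^2 + 2r + 5 = 0$ with roots $r = -1 \pm 2i$, so $\alpha = -1$, $\beta = 2$, and $y(t) = c_1e^{-t}\cos(2t) + c_2e^{-t}\sin(2t)$.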

Nonhomogeneous Equations: The Method of Undetermined Coefficients

Both the book and my class went through this whole bit they called “judicious guessing” which is just glorified trial and error to find some method by which solutions could be found. I’ll be skipping that because lmao fuck off.

To find a particular solution to the differential equation

(38) $a\,y'' + b\,y' + c\,y = Ct^me^{rt},$

where m is a non-negative integer, we use the form

(39) $y_p(t) = t^s\left(A_mt^m + \cdots + A_1t + A_0\right)e^{rt},$

where s is 0 if r is not a root, 1 if r is a simple root, and 2 if r is a double root of the auxiliary equation. To find a particular solution to the differential equation

(40) $a\,y'' + b\,y' + c\,y = Ct^me^{\alpha t}\cos(\beta t) \text{ or } Ct^me^{\alpha t}\sin(\beta t)$

for $\beta \ne 0$, we use the form

(41) $y_p(t) = t^s\left(A_mt^m + \cdots + A_1t + A_0\right)e^{\alpha t}\cos(\beta t) + t^s\left(B_mt^m + \cdots + B_1t + B_0\right)e^{\alpha t}\sin(\beta t),$

where s is 0 if α+βi is not a root and 1 if α+βi is a root of the auxiliary equation.
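As a worked example of my own, take $y'' - y = te^t$. Here $m = 1$ and $r = 1$ is a simple root of $r^2 - 1 = 0$, so $s = 1$ and $y_p(t) = t(A_1t + A_0)e^t$; substituting and matching coefficients gives $A_1 = \frac{1}{4}$ and $A_0 = -\frac{1}{4}$, so $y_p(t) = \left(\frac{t^2}{4} - \frac{t}{4}\right)e^t$.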

The Superposition Principle

The superposition principle goes like: Let $y_1$ be a solution to

(42) $a\,y'' + b\,y' + c\,y = f_1(t)$

and y2 be the solution to

(43) $a\,y'' + b\,y' + c\,y = f_2(t).$

Then, for any constants k1 and k2, the function k1y1+k2y2 is a solution to

(44) $a\,y'' + b\,y' + c\,y = k_1f_1(t) + k_2f_2(t).$

Onto the topic of existence and uniqueness: for any real numbers $a \ne 0$, $b$, $c$, $t_0$, $Y_0$, and $Y_1$, suppose $y_p(t)$ is a particular solution in an interval I where $t_0 \in I$ and that $y_1$ and $y_2$ are linearly independent solutions to the homogeneous equation in I, then there exists a unique solution in I to the initial value problem

(45) $a\,y'' + b\,y' + c\,y = f(t), \quad y(t_0) = Y_0, \quad y'(t_0) = Y_1$

and said solution is given by the form

(46) $y(t) = y_p(t) + c_1y_1(t) + c_2y_2(t)$

for the appropriate choice of constants $c_1$ and $c_2$. Now, the superposition principle and the method of undetermined coefficients make sweet, sweet love and birth a hybrid, which looks something like this:

(47) $a\,y'' + b\,y' + c\,y = P_m(t)\,e^{rt},$

where Pm is a polynomial of degree m. The solution then is of the form

(48) $y_p(t) = t^s\left(A_mt^m + \cdots + A_1t + A_0\right)e^{rt},$

where s is determined by the same method as previously. It could also look something like this:

(49) $a\,y'' + b\,y' + c\,y = P_m(t)\,e^{\alpha t}\cos(\beta t) + Q_n(t)\,e^{\alpha t}\sin(\beta t), \quad \beta \ne 0,$

where Pm(t) and Qn(t) are polynomials of degree m and n respectively. The solution is then of the form

(50) $y_p(t) = t^s\left(A_kt^k + \cdots + A_1t + A_0\right)e^{\alpha t}\cos(\beta t) + t^s\left(B_kt^k + \cdots + B_1t + B_0\right)e^{\alpha t}\sin(\beta t),$

where k is the larger of m and n, and s is defined by the conditions given for (41).
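Putting these together with an example of my own: for $y'' - y = t + e^t$, handle the two forcing terms separately and add the results. For $t$ (here $r = 0$ is not a root, so $s = 0$) the guess $At + B$ gives $y_{p1} = -t$; for $e^t$ ($r = 1$ is a simple root, so $s = 1$) the guess $Ate^t$ gives $y_{p2} = \frac{t}{2}e^t$. By superposition, $y_p = -t + \frac{t}{2}e^t$.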

Variable Coefficient Equations

Solving an equation of the form

(51) $a_2(t)\,y'' + a_1(t)\,y' + a_0(t)\,y = f(t),$

we typically divide both sides by $a_2$ to achieve the standard form

(52) $y'' + p(t)\,y' + q(t)\,y = g(t), \quad y(t_0) = Y_0, \quad y'(t_0) = Y_1,$

where p(t)=a1(t)/a2(t), q(t)=a0(t)/a2(t), g(t)=f(t)/a2(t), and Y0 and Y1 are some constants. Some theorem simply called “Theorem 5” gives a way to test for existence and uniqueness here:

Suppose $p(t)$, $q(t)$, and $g(t)$ are continuous on an interval $(a,b)$ that contains the point $t_0$; then for any choice of initial values $Y_0$ and $Y_1$ there exists a unique solution $y(t)$ on the same interval $(a,b)$ to the initial value problem.

A linear second-order differential equation that can be expressed in the form

(53) $at^2y'' + bty' + cy = f(t),$

where a, b, and c are constants, is called a Cauchy-Euler equation. To solve these we must acquire the characteristic equation; for this we simply substitute $t^r$ for y, which gives us

(54) $ar^2 + (b - a)r + c = 0,$

from which we should hopefully be able to derive the values r1 and r2 such that

(55) $y_1 = t^{r_1}, \quad y_2 = t^{r_2}.$

If r is in the form α±βi then we’ve got complex roots and the solutions will be of the form

(56) $y_1 = t^{\alpha}\cos(\beta\ln t), \quad y_2 = t^{\alpha}\sin(\beta\ln t),$

and if r is a double root the solutions will be of the form

(57) $y_1 = t^r, \quad y_2 = t^r\ln t.$
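A worked example of my own: $t^2y'' - 2y = 0$ has $a = 1$, $b = 0$, $c = -2$, so (54) becomes $r^2 - r - 2 = (r - 2)(r + 1) = 0$, giving $r_1 = 2$ and $r_2 = -1$, and thus $y_1 = t^2$ and $y_2 = t^{-1}$.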

Assuming $y_1$ and $y_2$ are linearly independent, then we can declare the linear combinations $y_h = c_1y_1 + c_2y_2$ and $y_p = v_1y_1 + v_2y_2$. Assuming some interval I where p, q, and g are continuous, by substituting $y_p$ we can find (I’m sparing the actual derivation of this because I don’t care):

(58a) $y_1v_1' + y_2v_2' = 0$
(58b) $y_1'v_1' + y_2'v_2' = g,$

where we specifically chose the first equation so we could avoid $v''$ terms being involved in the equation. There’s a shortcut for finding these solutions that we weren’t allowed to use in class, but this isn’t class, so here it is:

(59a) $v_1(t) = \int \frac{-g(t)\,y_2(t)}{W[y_1, y_2](t)}\,dt,$ (59b) $v_2(t) = \int \frac{g(t)\,y_1(t)}{W[y_1, y_2](t)}\,dt,$

and in case we need to find another solution:

(60) $y_2(t) = y_1(t)\int \frac{e^{-\int p(t)\,dt}}{y_1(t)^2}\,dt.$
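As a worked example of my own, take $y'' + y = \sec t$, with $y_1 = \cos t$, $y_2 = \sin t$, and $W[y_1, y_2](t) = 1$. Then $v_1(t) = \int \frac{-\sec t \sin t}{1}\,dt = \ln|\cos t|$ and $v_2(t) = \int \frac{\sec t \cos t}{1}\,dt = t$, so $y_p = \cos t\ln|\cos t| + t\sin t$.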

Introduction: The Taylor Polynomial Series

Quick review, this is a Taylor Series:

(61) $\sum_{j=0}^{\infty}\frac{f^{(j)}(x_0)}{j!}(x - x_0)^j,$

and a Taylor Polynomial is just the first n terms of this.

Power Series and Analytic Functions

A power series about the point x0 is an expression of the form

(62) $f = \sum_{n=0}^{\infty}a_n(x - x_0)^n = a_0 + a_1(x - x_0) + a_2(x - x_0)^2 + \cdots.$

I’m assuming you’ve taken at least Calculus II, so I’m just going to skip to more relevant information. If the series has a positive radius of convergence ρ, then f is differentiable in the interval $|x - x_0| < \rho$ and termwise differentiation gives us

(63) $f' = \sum_{n=1}^{\infty}na_n(x - x_0)^{n-1}, \quad |x - x_0| < \rho,$

and termwise integration would give us

(64) $\int f\,dx = \sum_{n=0}^{\infty}\frac{a_n}{n+1}(x - x_0)^{n+1} + C, \quad |x - x_0| < \rho.$

For reference, the index of the summation is a dummy variable with no bearing outside of the context of the summation, so we can shift it around by constants (and maybe variables, if you know what you’re doing) to clean up the summation a bit.

Power Series Solutions to Linear Differential Equations

A point x0 is called an ordinary point of the equation if both p=a1/a2 and q=a0/a2 are analytic at x0. Otherwise we would call it a singular point. We’re not too concerned about the exact definition of analytic here, so we’ll just use defined and continuous.

There’s this thing called a recurrence relation. By plugging the generic power series into the differential equation, setting it equal to $\sum 0\,x^n$ (the series equivalent of just 0), and shifting all the series to have the generic term $x^n$, you’ll get some equation in $a_n$, $a_{n+1}$, and/or $a_{n+2}$, which will allow you to find some relation between $a_{n+2}$ (or $a_{n+1}$) and $a_n$. By solving for several terms using this relation you can find some pattern by which to define $a_n$, which can then be plugged into $y = \sum a_nx^n$ to find the actual solution.
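For a quick worked example of my own, take $y'' + y = 0$ with $y = \sum a_nx^n$. Shifting the index so that $y'' = \sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2}x^n$ and collecting the $x^n$ terms gives $(n+2)(n+1)a_{n+2} + a_n = 0$, i.e. the recurrence relation $a_{n+2} = \frac{-a_n}{(n+2)(n+1)}$, which builds the series for $\cos x$ and $\sin x$ out of $a_0$ and $a_1$.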

When your coefficients are functions, you basically just multiply the function out termwise and you can simplify from there or whatever.

Definition of the Laplace Transform

The Laplace Transform of f is the function F defined as

(65) $\mathcal{L}\{f(t)\}(s) = F(s) := \int_0^{\infty}e^{-st}f(t)\,dt.$

Sometimes we’ll have to solve these by hand, over odd intervals, for weird functions, especially at first. Eventually we can just use a Laplace Transform Table (there isn’t one on this page, so just Google for it, I suppose).
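As a worked example of my own, computing $\mathcal{L}\{e^{at}\}$ straight from the definition: $\mathcal{L}\{e^{at}\}(s) = \int_0^{\infty}e^{-st}e^{at}\,dt = \int_0^{\infty}e^{-(s-a)t}\,dt = \frac{1}{s-a}$ for $s > a$.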

This transform is linear, so we can say that

(66) $\mathcal{L}\{f_1 + f_2\} = \mathcal{L}\{f_1\} + \mathcal{L}\{f_2\},$

and

(67) $\mathcal{L}\{cf\} = c\,\mathcal{L}\{f\}.$

The Laplace Transform of f exists when f is piecewise continuous on $[0, \infty)$ and of exponential order. The transform is generally pretty useful in simplifying some differential equations into easier algebraic equations.

Properties of the Laplace Transform

A translation in s looks like

(68) $\mathcal{L}\{e^{at}f(t)\}(s) = F(s - a).$

The Laplace Transform for the derivative of f (assuming f is of exponential order a) for the first and nth derivatives:

(69a) $\mathcal{L}\{f'(t)\}(s) = s\,\mathcal{L}\{f(t)\}(s) - f(0),$
(69b) $\mathcal{L}\{f^{(n)}(t)\}(s) = s^n\mathcal{L}\{f(t)\}(s) - s^{n-1}f(0) - s^{n-2}f'(0) - \cdots - f^{(n-1)}(0).$

For an equation of the form $t^nf(t)$ we can use the formula

(70) $\mathcal{L}\{t^nf(t)\}(s) = (-1)^n\frac{d^n}{ds^n}F(s).$
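For instance (my example), starting from $\mathcal{L}\{t\}(s) = 1/s^2$, the translation property (68) gives $\mathcal{L}\{te^{at}\}(s) = \frac{1}{(s-a)^2}$; the same result follows from (70) with $n = 1$ applied to $F(s) = \frac{1}{s-a}$.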

The Inverse Laplace Transform is the opposite of the Laplace Transform, such that when $\mathcal{L}\{f\} = F$ there exists the inverse $\mathcal{L}^{-1}\{F\} = f$. This is also linear. Keep in mind the method of partial fractions for this. For solving IVPs with the Laplace Transform: apply the transform to both sides of the equation, solve for $Y(s)$ (where $\mathcal{L}\{y(t)\} = Y(s)$), then apply the inverse transform. Also remember to use (69a) (nice) for a lot of problems.
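A quick worked IVP of my own: $y'' + y = 0$ with $y(0) = 0$ and $y'(0) = 1$. Transforming both sides with (69b) gives $s^2Y(s) - s\,y(0) - y'(0) + Y(s) = 0$, so $(s^2 + 1)Y(s) = 1$, $Y(s) = \frac{1}{s^2 + 1}$, and applying the inverse transform gives $y(t) = \sin t$.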