Background
There are ordinary differential equations and partial differential
equations, which differ in that the former is with respect to (wrt) a single
variable, whereas the latter may involve multiple variables or a single variable
of a multivariate function. The general form is
and any differential equation that does not meet this form is considered to be
nonlinear.
Solutions & Initial Value Problems
The general form of a function where is the independent and is the
dependent variable is
or
We then define some function that when substituted for in either
or for all on some interval is called an
explicit solution to the equation on . That sounds really cool, but at
the moment it doesn’t really mean anything.
On the other hand, a relation is said to be an implicit
solution to on the interval if it defines one or more explicit
solutions. This often defines multiple solutions because of the constant term
that is created through integration.
An initial value problem (IVP) for an nth-order differential
equation () will ask you to find a solution on the interval that
satisfies
where and are given constants. [11] To
determine the existence and uniqueness of a solution we use Picard’s
Theorem:
Consider the initial value problem and . If
and are continuous functions in some rectangle that contains the point
, then the IVP has a unique solution in some interval
, where .
The Approximation Method of Euler
The approximation method of Euler is super tedious to do by hand and so
should be completely avoided unless you’re using it for numerical analysis /
programming. To approximate repeat the following for all steps :
where is the step size.
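Since this is really only worth doing in code anyway, here's a minimal sketch of the update rule in Python (the function name and the example ODE are my own, not from the notes):

```python
# Euler's method for y' = f(x, y), y(x0) = y0, with step size h.
def euler(f, x0, y0, h, steps):
    x, y = x0, y0
    points = [(x, y)]
    for _ in range(steps):
        y = y + h * f(x, y)   # y_{n+1} = y_n + h * f(x_n, y_n)
        x = x + h             # x_{n+1} = x_n + h
        points.append((x, y))
    return points

# Example: y' = y with y(0) = 1, whose exact solution is e^x.
print(euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)[-1])   # roughly (1.0, 2.5937) vs. e = 2.7183
```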
Separable Equations
A differential equation is separable when it can be expressed as
which you would solve as
and given some function this would be expressed as
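As a sanity check you can throw a separable example at SymPy; the equation dy/dx = x·y below is my own pick, not one from the notes:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx = x*y separates into dy/y = x dx, so ln|y| = x**2/2 + C.
ode = sp.Eq(y(x).diff(x), x * y(x))
print(sp.dsolve(ode))   # Eq(y(x), C1*exp(x**2/2))
```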
Linear Equations
The general equation for a linear first-order differential equation is
We find two scenarios in which the solution to is simple. First
when :
Second, the slightly less straightforward solution for when ,
which would mean that the LHS can be written as the derivative of a product:
Of course, it is rarely so convenient as to be in this form on its own, so
we need to find a way to rewrite the equation so it fits the criteria of the
second solution. We do this by multiplying by some magical function (an
integrating factor). The first step to achieving this is to write the equation
in standard form:
where and . From here we can determine
that we need to be some function such that
is the derivative of the product :
Obviously, there’s only one good solution to :
Somehow (since was created by dividing by
and was calculated to be a function that when multiplied by
would satisfy , can be substituted for
in ) this allows us to find the general solution:
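Here's the whole integrating-factor recipe run on a small example of my own, y' + 2y = x, just to make the steps concrete:

```python
import sympy as sp

x, C = sp.symbols('x C')

P, Q = 2, x                                 # standard form y' + P(x)*y = Q(x)
mu = sp.exp(sp.integrate(P, x))             # integrating factor mu = exp(∫P dx) = exp(2*x)
y_general = (sp.integrate(mu * Q, x) + C) / mu
print(sp.expand(y_general))                 # C*exp(-2*x) + x/2 - 1/4
```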
Exact Equations
The differential form
is said to be exact in some rectangle if there is a function
such that
and
for all in . If the aforementioned form is exact then
is called an exact equation. Exactness can be tested by using the equation
Method for solving differential equations that are exact:
- If is exact then which can be integrated wrt yielding
- Differentiate wrt finding
- Integrate to obtain and substitute it into to find .
- The solution is given implicitly by
This can also be done using and an integral wrt in step 1 instead
of and wrt .
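Here's the method run on a small exact equation I made up, (2xy + 1)dx + (x² + 3y²)dy = 0, with SymPy doing the calculus:

```python
import sympy as sp

x, y = sp.symbols('x y')
M = 2*x*y + 1
N = x**2 + 3*y**2

# Exactness test: ∂M/∂y should equal ∂N/∂x.
print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0)   # True

F = sp.integrate(M, x)              # step 1: F = ∫M dx, plus an unknown g(y)
g_prime = N - sp.diff(F, y)         # steps 2-3: whatever ∂F/∂y is missing must be g'(y)
F = F + sp.integrate(g_prime, y)    # add g(y)
print(F)                            # x**2*y + x + y**3; the solution is F(x, y) = C
```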
The Mass-Spring Oscillator
The differential equation that describes the motion of a mass on a spring
is
where is the mass of the mass, is the damping coefficient of the
spring, and is the spring constant (stiffness) of the spring. When
there is a solution in the form , where is the
frequency defined by .
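For the undamped case you can check the frequency claim directly; the numbers m = 1, b = 0, k = 4 below are placeholders of my own:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

m, b, k = 1, 0, 4
ode = sp.Eq(m*y(t).diff(t, 2) + b*y(t).diff(t) + k*y(t), 0)
print(sp.dsolve(ode))   # y(t) = C1*sin(2*t) + C2*cos(2*t), i.e. angular frequency sqrt(k/m) = 2
```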
Homogeneous Linear Equations: The General Solution
The next few sections are all about second order differential equations,
meaning that they are of the form
By looking at this we should be able to tell that the second derivative needs to
be able to be expressed as a linear combination of the first and zeroth. This
suggests we try to find solutions of the form . By substituting this into
the general form we find
Since we can divide by it in . is
called the auxiliary or characteristic equation.
The trivial solution () is always a solution. If we have any
pair of solutions and we can construct an infinite number of linear
combinations that are also solutions (the proof of this is just typical
superposition stuff, so trust me and/or try it out for yourself):
The two degrees of freedom ( and ) imply that two conditions can be
imposed, such as and in an initial value problem. This will lead
to systems of linear equations, so get ready for that.
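For instance (my own example), imposing y(0) = 1 and y'(0) = 0 on the general solution of y'' - 3y' + 2y = 0 gives a 2x2 linear system for the constants:

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')
y = C1*sp.exp(x) + C2*sp.exp(2*x)       # general solution from the roots r = 1, 2

eqs = [sp.Eq(y.subs(x, 0), 1),          # y(0) = 1  ->  C1 + C2 = 1
       sp.Eq(y.diff(x).subs(x, 0), 0)]  # y'(0) = 0 ->  C1 + 2*C2 = 0
print(sp.solve(eqs, [C1, C2]))          # {C1: 2, C2: -1}, so y = 2*e^x - e^(2x)
```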
This can also be used to prove uniqueness. For any real numbers ,
, , , , and , there exists a unique solution to the initial
value problem
which is valid for all in . A particularly important case
is that of , when must be identically zero ().
Really important things happen if and are linearly dependent: then
and so
which is only one constant () and thus only one degree of freedom. As such we
need linearly independent functions for and , so definitely not just
like constant multiples of each other. Then we’re going to need a way to tell if
things are linearly independent or not.
For any real numbers , , and , if and are any
two solutions to the differential equation in
and the equality
holds for any point , then and are linearly dependent in
. Fun fact, the LHS in is called the
Wronskian of and at , which is written like this:
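SymPy can compute the Wronskian directly; the pair e^x and e^(2x) below is my own example (they solve y'' - 3y' + 2y = 0):

```python
import sympy as sp

x = sp.symbols('x')
f1, f2 = sp.exp(x), sp.exp(2*x)

W = sp.wronskian([f1, f2], x)
print(sp.simplify(W))   # exp(3*x), which is never zero, so the pair is linearly independent
```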
Auxiliary Equations with Complex Roots
When then the roots of the auxiliary equation are complex:
so we end up with . We
know what is because it matches the form but what about
the term? Well, we’re going to do some fun math:
This is known as Euler’s Formula because as mathematicians it is our job to
suck Euler’s dick 24/7 and name everything he touched after him. Plugging this
back into the auxiliary equation we get
which still has so it isn’t a “real” solution. So, imposing societal
standards on it through some mathematical wizardry I’m not 100% certain about,
we find the new form:
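A quick check on an equation of my own choosing, y'' + 2y' + 5y = 0, whose auxiliary equation has the complex roots -1 ± 2i:

```python
import sympy as sp

x, r = sp.symbols('x r')
y = sp.Function('y')

print(sp.roots(r**2 + 2*r + 5, r))   # {-1 - 2*I: 1, -1 + 2*I: 1}
print(sp.dsolve(sp.Eq(y(x).diff(x, 2) + 2*y(x).diff(x) + 5*y(x), 0)))
# Eq(y(x), (C1*sin(2*x) + C2*cos(2*x))*exp(-x))
```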
Nonhomogeneous Equations: The Method of Undetermined Coefficients
Both the book and my class went through this whole bit they called “judicious
guessing” which is just glorified trial and error to find some method by which
solutions could be found. I’ll be skipping that because lmao fuck off.
To find a particular solution to the differential equation
where is a non-negative integer, we use the form
where is if is not a root, if is a simple root, and if
is a double root of the auxiliary equation. To find a particular solution to
the differential equation
for , we use the form
where is if is not a root and if
is a root of the auxiliary equation.
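Worked example (mine, not the book's): for y'' - 3y' + 2y = 3e^x, r = 1 is a simple root of the auxiliary equation, so the guess picks up an extra factor of x:

```python
import sympy as sp

x, A = sp.symbols('x A')
yp = A * x * sp.exp(x)   # guess A*x**s*exp(x) with s = 1, since r = 1 is a simple root

residual = yp.diff(x, 2) - 3*yp.diff(x) + 2*yp - 3*sp.exp(x)
print(sp.solve(sp.simplify(residual / sp.exp(x)), A))   # [-3], so yp = -3*x*exp(x)
```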
The Superposition Principle
The superposition principle goes like: Let be the solution to
and be the solution to
Then, for any constants and , the function is a
solution to
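For example (my own numbers), the forcing term of y'' + y = x + 3e^(2x) can be handled one piece at a time and the particular solutions just added:

```python
import sympy as sp

x = sp.symbols('x')

yp1 = x                                 # particular solution of y'' + y = x
yp2 = sp.Rational(3, 5) * sp.exp(2*x)   # particular solution of y'' + y = 3*exp(2*x)
yp = yp1 + yp2

print(sp.simplify(yp.diff(x, 2) + yp - (x + 3*sp.exp(2*x))))   # 0, so the sum works
```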
Onto the topic of existence and uniqueness: for any real numbers ,
, , , , and , suppose is a particular solution in
an interval where and that and are linearly
independent solutions to the homogeneous equation in , then there exists a
unique solution in to the initial value problem
and said solution is given by the form
for the appropriate choice of constants and . Now, the
superposition principle and the method of undetermined coefficients make sweet,
sweet love and birth a hybrid, which looks something like this:
where is a polynomial of degree . The solution then is of the form
where is determined by the same method as previously. It could also look
something like this:
where and are polynomials of degree and respectively.
The solution is then of the form
where is the larger of and , and is defined by the conditions
given for .
Variable Coefficient Equations
To solve an equation of the form
we typically divide both sides by to achieve the standard form
where , , , and
and are some constants. Some theorem simply called “Theorem 5” gives a way
to test for existence and uniqueness here:
Suppose , , and are continuous on the interval
which contains the point , then for any choice of initial values
and there exists a unique solution on the same interval
to the initial value problem.
A linear second-order differential equation that can be expressed in the
form
where , , and are constants, is called a Cauchy-Euler equation. To
solve these we must acquire the characteristic equation; for this we simply
substitute for , which gives us
from which we should hopefully be able to derive the values and such
that
If is in the form then we’ve got complex roots and the
solutions will be of the form
and if is a double root the solutions will be of the form
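Here's the substitution played out on an equation I made up, x²y'' - 2xy' + 2y = 0:

```python
import sympy as sp

x, r = sp.symbols('x r')
y = sp.Function('y')

# Substituting y = x**r gives the characteristic equation r*(r - 1) - 2*r + 2 = 0.
print(sp.roots(r*(r - 1) - 2*r + 2, r))   # {1: 1, 2: 1}, i.e. r = 1 and r = 2

ode = sp.Eq(x**2*y(x).diff(x, 2) - 2*x*y(x).diff(x) + 2*y(x), 0)
print(sp.dsolve(ode))                     # y(x) = C1*x + C2*x**2 (possibly written factored)
```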
Assuming and are linearly independent, we can declare the
linear combinations and . Assuming some
interval where , , and are continuous, by substituting y_p we can
find (I’m sparing the actual derivation of this because I don’t care):
where we specifically chose the first equation so we could avoid being
involved in the equation. There’s a shortcut for finding these solutions that we
weren’t allowed to use in class but this isn’t class so here it is:
and in case we need to find another solution:
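The shortcut from above in action on y'' + y = sec(x) (my example), with y1 = cos(x) and y2 = sin(x) as the homogeneous solutions:

```python
import sympy as sp

x = sp.symbols('x')
y1, y2, g = sp.cos(x), sp.sin(x), sp.sec(x)

W = sp.wronskian([y1, y2], x)          # = 1 here
v1 = sp.integrate(-y2 * g / W, x)      # log(cos(x))
v2 = sp.integrate(y1 * g / W, x)       # x
yp = v1*y1 + v2*y2

print(sp.simplify(yp.diff(x, 2) + yp - g))   # 0, so yp is a particular solution
```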
Introduction: The Taylor Polynomial Series
Quick review, this is a Taylor Series:
and a Taylor Polynomial is just the first terms of this.
Power Series and Analytic Functions
A power series about the point is an expression of the form
I’m assuming you’ve taken at least Calculus II, so I’m just going to skip to
more relevant information. If the series has a positive radius of
convergence , then is differentiable in the interval
and termwise differentiation gives us
and termwise integration would give us
For reference, the index of the summation is a dummy variable with no
bearing outside of the context of the summation, so we can shift it by
constants (and maybe by variables, if you know what you’re doing) to clean up
the summation a bit.
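As a concrete illustration (my own, not from the notes), a typical shift by a constant looks like this, using k = n - 2:

$$\sum_{n=2}^{\infty} n(n-1)\,a_n x^{n-2} \;=\; \sum_{k=0}^{\infty} (k+2)(k+1)\,a_{k+2}\, x^{k}.$$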
Power Series Solutions to Linear Differential Equations
A point is called an ordinary point of the equation if both
and are analytic at . Otherwise we would call it a
singular point. We’re not too concerned about the exact definition of
analytic here, so we’ll just use defined and continuous.
There’s this thing called a recurrence relation. By plugging in the
generic power series to the differential equation equal to
(the series equivalent to just ) and shifting all the series to have the
generic term , you’ll get some equation of , , and/or
, which will allow you to find some relation between or
and . By solving for several terms using this relation you can
find some pattern by which to define , a solution that can be plugged into
to find the actual solution.
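Here's that process in miniature for y'' + y = 0 (my own example), where plugging in the series gives the recurrence a_{k+2} = -a_k / ((k+2)(k+1)):

```python
from fractions import Fraction

# Start from a_0 = y(0) = 1 and a_1 = y'(0) = 0, then apply the recurrence.
a = [Fraction(1), Fraction(0)]
for k in range(8):
    a.append(-a[k] / ((k + 2) * (k + 1)))

print(a[:7])   # [1, 0, -1/2, 0, 1/24, 0, -1/720], i.e. the Taylor coefficients of cos(x)
```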
When your coefficients are functions, you basically just multiply the
function out termwise and simplify from there or whatever.
The Laplace Transform
The Laplace Transform of is the function defined as
Sometimes we’ll have to solve these by hand, over odd intervals, for weird
functions, especially at first. Eventually we can just use a Laplace Transform
Table (there isn’t one on this page, so just Google for it, I suppose).
This transform is linear, so we can say that
and
The Laplace Transform of exists when is piecewise continuous on
. The transform is generally pretty useful in simplifying some
differential equations into easier algebraic equations.
A translation in looks like
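If you don't have a table handy, SymPy will happily compute entries for you; the two functions below are arbitrary examples of mine, the second one showing the e^(at) shift:

```python
from sympy import symbols, laplace_transform, exp, sin

t, s = symbols('t s', positive=True)

print(laplace_transform(t**2, t, s, noconds=True))                # 2/s**3
print(laplace_transform(exp(2*t)*sin(3*t), t, s, noconds=True))   # 3/((s - 2)**2 + 9)
```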
The Laplace Transform for the derivative of (assuming is of exponential
order ) for the first and nth derivatives:
For an equation of form we can use the formula
The Inverse Laplace Transform is the opposite of the Laplace Transform
such that when there exists the inverse . This is also linear. Keep in mind the method of partial fractions for this. For solving IVPs with the Laplace Transform, apply the transform to
both sides of the equation, solve for (),
then apply the inverse transform. Also remember to use (nice) for
a lot of problems.
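Putting the whole IVP workflow together on a tiny example of my own, y'' + y = 0 with y(0) = 0 and y'(0) = 1:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.symbols('Y')

# Transform both sides: L{y''} = s**2*Y - s*y(0) - y'(0), so the equation becomes algebraic in Y.
transformed = sp.Eq(s**2*Y - s*0 - 1 + Y, 0)
Y_of_s = sp.solve(transformed, Y)[0]               # 1/(s**2 + 1)
print(sp.inverse_laplace_transform(Y_of_s, s, t))  # sin(t)
```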