Mathematics Source Library
C & ASM



Differential Equations

A (first order) differential equation of the form y' = f(x,y) expresses the rate of change of the dependent variable y with respect to the independent variable x as a function f(x,y) of both the independent variable x and the dependent variable y. A solution of the differential equation y' = f(x,y) is a function, which by abuse of language we denote as y(x), such that y'(x) = f(x, y(x)). The existence and uniqueness theorem of first order ordinary differential equations states that if the function f is continuous and satisfies the Lipschitz condition on some closed region U×V, with x in U and y in V, containing the interior point (x_0, y_0), then there exist a unique solution y(x) and a subinterval U_0, containing x_0 and contained in U, such that y(x_0) = y_0 and (x, y(x)) is contained in U×V for x in U_0. The condition y(x_0) = y_0 is called the initial condition. If the function f is differentiable in the closed region, then the Lipschitz condition is automatically satisfied. Furthermore, if the function f is C^n, i.e. f has continuous derivatives up to order n, for n > 0, in the closed region, then the solution is C^(n+1) in a neighborhood of x_0. Note that if the Lipschitz condition is not satisfied, then either there does not exist a solution or there may exist more than one solution; for example, the initial value problem y' = 3y^(2/3), y(0) = 0 admits both y(x) = 0 and y(x) = x^3.


If the function f(x,y) is independent of the dependent variable y, then the solution is
y(x) = y_0 + ∫_(x_0)^(x) f(t) dt, where the integral can be approximated by one of the numerical techniques in the section Numerical Integration.
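
As an illustration, the sketch below recovers y(x) for such an equation with a composite trapezoidal rule; any routine from the Numerical Integration section could be substituted, and all names here (solve_by_quadrature, the sample f) are illustrative, not part of the library.

    #include <stdio.h>

    /* Example right-hand side that depends only on x:  y' = 2x. */
    static double f(double x) { return 2.0 * x; }

    /* Approximate y(x) = y_0 + integral of f from x_0 to x using a
       composite trapezoidal rule with n panels. */
    double solve_by_quadrature(double (*f)(double), double x0, double y0,
                               double x, int n)
    {
        double h = (x - x0) / n;
        double sum = 0.5 * (f(x0) + f(x));
        for (int i = 1; i < n; i++) sum += f(x0 + i * h);
        return y0 + h * sum;
    }

    int main(void)
    {
        /* y' = 2x, y(0) = 1  =>  y(x) = x^2 + 1, so y(2) = 5. */
        printf("y(2) ~ %f\n", solve_by_quadrature(f, 0.0, 1.0, 2.0, 100));
        return 0;
    }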


If the function f(x,y) is independent of the independent variable x, the differential equation y' = f(y) is said to be an autonomous differential equation. The methods given below apply to both autonomous and non-autonomous differential equations, but in the autonomous case the user may want to edit the files and remove the references to the independent variable from the calls to f.


First Order System of Differential Equations - Higher Order Differential Equation
The dependent variable y can be a vector quantity, in which case we have a system of first order differential equations. The existence and uniqueness theorem stated above generalizes immediately to the situation in which y is an n-dimensional vector.
An nth order differential equation y^(n) = f(x, y, y', y^(2), ..., y^(n-1)) can be transformed to a system of first order differential equations by defining the vector quantity z = (z_1, ..., z_n) by z_1 = y, z_2 = y', z_3 = y^(2), ..., z_n = y^(n-1); then z'_1 = z_2, z'_2 = z_3, ..., z'_(n-1) = z_n, and z'_n = f(x, z_1, ..., z_n).
A second order differential equation can be transformed to a first order system as described above, but because second order differential equations are ubiquitous in physics and engineering, special methods have been developed for solving them directly; see Methods for Second-Order Differential Equations. A sketch of the transformation is given below.
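
As a concrete example, the sketch below rewrites the second order equation y'' = -y as the first order system z'_1 = z_2, z'_2 = -z_1 and advances it with Euler steps; any of the vector-valued methods below could be substituted, and all names are illustrative.

    #include <stdio.h>

    #define DIM 2

    /* z[0] = y, z[1] = y'; dz receives the right-hand side of the system. */
    static void f(double x, const double z[DIM], double dz[DIM])
    {
        (void)x;               /* autonomous: x does not appear explicitly */
        dz[0] = z[1];          /* z'_1 = z_2       */
        dz[1] = -z[0];         /* z'_2 = -z_1      */
    }

    int main(void)
    {
        double z[DIM] = { 1.0, 0.0 };     /* y(0) = 1, y'(0) = 0 */
        double dz[DIM], x = 0.0, h = 0.001;
        for (int n = 0; n < 1000; n++) {  /* integrate to x = 1  */
            f(x, z, dz);
            for (int i = 0; i < DIM; i++) z[i] += h * dz[i];
            x += h;
        }
        printf("y(1) ~ %f   (exact cos(1) ~ 0.540302)\n", z[0]);
        return 0;
    }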

Numerical Approximation of the Solutions of a Differential Equation
For certain classes of differential equations, a solution can be found by finding an integrating factor and solving the differential equation exactly, by expanding the solution in terms of a Taylor series and summing, by (rarely) using Picard iteration, or by expanding the solution in terms of a class of orthogonal functions. Another approach, for which the following source code is developed, is called the difference method. For the difference method, the solution of the differential equation is approximated at discrete points, usually but not necessarily equally spaced. The most commonly used difference methods are Euler's Method, the Trapezoidal Method, the Midpoint Method, the Modified Midpoint Method (Gragg's Method), Runge-Kutta Methods, Predictor-Corrector Methods, and certain adaptive techniques such as the embedded Runge-Kutta methods and the Gragg-Bulirsch-Stoer method.


The question of which method to use to solve a differential equation depends on the application. If one wants to write a general differential equation solver, then one should consider the adaptive methods: the embedded Runge-Kutta methods and the Gragg-Bulirsch-Stoer method. It is also possible to adapt the predictor-corrector methods to adaptive techniques. If one wants to solve a particular parametric family of differential equations, and the application is a real-time application in which the time spent solving the equation must be known in advance, then the implicit methods have to be avoided and only the explicit methods considered: Euler's method, the midpoint method, the modified midpoint method, and the Runge-Kutta methods. The questions concerning the step size(s) and, in the case of Runge-Kutta methods, the order need to be determined a priori by solving the parametric differential equations for different values of the parameters and choosing the largest step size(s) and smallest order for which the numerical solution is sufficiently accurate. If the application is not real-time, then the implicit methods do not have to be avoided and any method can be considered.


Euler's Method

Given an initial value problem y' = f(x,y), y(x_0) = y_0, and a step size h, Euler's method approximates the derivative of the solution of the initial value problem at x by y'(x) ≈ ( y(x+h) - y(x) ) / h. The approximation y_n for y(x_0 + n h) is then given recursively by
y_(n+1) = y_n + h f(x_n, y_n)
for n = 0, 1, ... .
Locally Euler's method is a second order method, with local truncation error O(h^2), and therefore globally a first order method. As a rule, Euler's method is only useful for a few steps with small step sizes; however, Euler's method together with Richardson extrapolation may be used to increase the order.
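
A minimal sketch of the recursion above, applied to y' = y with y(0) = 1; the function name euler and the sample problem are illustrative, not the library's interface.

    #include <stdio.h>

    static double f(double x, double y) { (void)x; return y; }   /* y' = y */

    /* Advance y(x0) = y0 over nsteps steps of size h;
       returns the approximation to y(x0 + nsteps*h). */
    double euler(double (*f)(double, double), double x0, double y0,
                 double h, int nsteps)
    {
        double x = x0, y = y0;
        for (int n = 0; n < nsteps; n++) {
            y += h * f(x, y);          /* y_(n+1) = y_n + h f(x_n, y_n) */
            x += h;
        }
        return y;
    }

    int main(void)
    {
        /* y' = y, y(0) = 1  =>  y(1) = e ~ 2.718282. */
        printf("y(1) ~ %f\n", euler(f, 0.0, 1.0, 0.001, 1000));
        return 0;
    }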

Trapezoidal Method

The trapezoidal method is the implicit method
y_(n+1) = y_n + (h/2) ( f(x_n, y_n) + f(x_(n+1), y_(n+1)) ),
which is locally a third order method and therefore globally a second order method. Because y_(n+1) appears on both sides of the equation, each step requires solving an implicit equation, for example by fixed-point iteration as sketched below.
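
A sketch of a single trapezoidal step, assuming the implicit equation is solved by fixed-point iteration seeded with an Euler predictor; the names and the fixed iteration count are illustrative.

    #include <stdio.h>

    static double f(double x, double y) { (void)x; return -2.0 * y; }  /* y' = -2y */

    /* One trapezoidal step from (x, y) to x + h; the iteration is
       contractive, and hence convergent, for sufficiently small h. */
    double trapezoidal_step(double (*f)(double, double),
                            double x, double y, double h)
    {
        double fx = f(x, y);
        double ynew = y + h * fx;                 /* Euler predictor */
        for (int i = 0; i < 10; i++)
            ynew = y + 0.5 * h * (fx + f(x + h, ynew));
        return ynew;
    }

    int main(void)
    {
        /* y' = -2y, y(0) = 1  =>  y(1) = e^(-2) ~ 0.135335. */
        double x = 0.0, y = 1.0, h = 0.1;
        for (int n = 0; n < 10; n++) { y = trapezoidal_step(f, x, y, h); x += h; }
        printf("y(1) ~ %f\n", y);
        return 0;
    }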

Midpoint Method

The midpoint method, also called the leapfrog method, is the two-step method
y_(n+1) = y_(n-1) + 2 h f(x_n, y_n),
which is also locally a third order method, or globally a second order method. Being a two-step method, the midpoint method requires two starting values to start the recursion; usually this means that another method must be used to create the second starting value. Small perturbations of the initial conditions lead to growing oscillations of the errors; for this reason the use of the midpoint method should be avoided.

Modified Midpoint Method (Gragg's Method)

Gragg observed that a slight modification of the midpoint method, appending a final smoothing step, obviates the term giving rise to the growing error oscillations.
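
A sketch of the modified midpoint method, assuming the usual formulation: one Euler step to start, the leapfrog recursion across the interval, and Gragg's smoothing step at the end. The name gragg_step is illustrative.

    #include <stdio.h>

    static double f(double x, double y) { (void)x; return y; }   /* y' = y */

    /* Advance from x0 to x0 + H in n leapfrog substeps of size h = H/n,
       then apply Gragg's smoothing step, which suppresses the
       oscillating error term of the plain midpoint method. */
    double gragg_step(double (*f)(double, double),
                      double x0, double y0, double H, int n)
    {
        double h = H / n;
        double zprev = y0;                    /* z_0                  */
        double z = y0 + h * f(x0, y0);        /* z_1 by an Euler step */
        for (int m = 1; m < n; m++) {         /* z_(m+1) = z_(m-1) + 2h f(x_m, z_m) */
            double znext = zprev + 2.0 * h * f(x0 + m * h, z);
            zprev = z;
            z = znext;
        }
        /* smoothing:  y(x0 + H) ~ ( z_n + z_(n-1) + h f(x0 + H, z_n) ) / 2 */
        return 0.5 * (z + zprev + h * f(x0 + H, z));
    }

    int main(void)
    {
        /* y' = y, y(0) = 1  =>  y(1) = e ~ 2.718282. */
        printf("y(1) ~ %f\n", gragg_step(f, 0.0, 1.0, 1.0, 50));
        return 0;
    }

This routine is also the building block of the Gragg-Bulirsch-Stoer method below.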

Runge-Kutta Methods

Runge-Kutta methods are one-step methods which evaluate the function f several times at judiciously chosen points in a neighborhood of the initial point (x_0, y_0) and then combine the results to form an approximation to y at x_0 + h.
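
As an example, the sketch below implements the classical fourth order Runge-Kutta method, which evaluates f four times per step and combines the results with weights 1/6, 1/3, 1/3, 1/6; the names are illustrative.

    #include <stdio.h>

    static double f(double x, double y) { (void)x; return y; }   /* y' = y */

    /* One step of the classical fourth order Runge-Kutta method. */
    double rk4_step(double (*f)(double, double), double x, double y, double h)
    {
        double k1 = f(x, y);
        double k2 = f(x + 0.5 * h, y + 0.5 * h * k1);
        double k3 = f(x + 0.5 * h, y + 0.5 * h * k2);
        double k4 = f(x + h, y + h * k3);
        return y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0;
    }

    int main(void)
    {
        /* y' = y, y(0) = 1  =>  y(1) = e ~ 2.718282. */
        double x = 0.0, y = 1.0, h = 0.1;
        for (int n = 0; n < 10; n++) { y = rk4_step(f, x, y, h); x += h; }
        printf("y(1) ~ %.6f\n", y);
        return 0;
    }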

Adams-Bashforth Predictor - Adams-Moulton Corrector

The Adams-Bashforth method and the Adams-Moulton method are multistep methods. A k-step method is one in which the approximation to the next value uses not only the current value of the dependent variable but also the previous k - 1 values.


The k-step Adams-Bashforth method is the explicit method given by
y_(n+1) = y_n + h ( a_0 f(x_n, y_n) + a_1 f(x_(n-1), y_(n-1)) + ... + a_(k-1) f(x_(n-k+1), y_(n-k+1)) )
where a_0, a_1, ..., a_(k-1) are constants.


The k-step Adams-Moulton method is the implicit method given by
y_(n+1) = y_n + h ( b_0 f(x_(n+1), y_(n+1)) + b_1 f(x_n, y_n) + ... + b_k f(x_(n-k+1), y_(n-k+1)) )
where b_0, b_1, ..., b_k are constants. The Adams-Bashforth prediction can be used as an initial estimate for the implicit Adams-Moulton method. The result is then iterated by replacing the old estimate of y_(n+1) with the new estimate until the change in the estimates is sufficiently small. If the step size is sufficiently small, this iterative approach is contractive and convergence is guaranteed.
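
The sketch below combines a fourth order Adams-Bashforth predictor with a fourth order Adams-Moulton corrector, using the classical Runge-Kutta method to generate the starting values. The coefficients shown are the standard fourth order ones; for brevity a single corrector pass is used rather than iterating to convergence, and all names are illustrative.

    #include <stdio.h>

    static double f(double x, double y) { (void)x; return y; }   /* y' = y */

    /* Classical fourth order Runge-Kutta step (for starting values). */
    double rk4_step(double (*f)(double, double), double x, double y, double h)
    {
        double k1 = f(x, y);
        double k2 = f(x + 0.5 * h, y + 0.5 * h * k1);
        double k3 = f(x + 0.5 * h, y + 0.5 * h * k2);
        double k4 = f(x + h, y + h * k3);
        return y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0;
    }

    int main(void)
    {
        double h = 0.1, x[11], y[11], fv[11];
        x[0] = 0.0; y[0] = 1.0;
        for (int n = 0; n < 3; n++) {             /* starting values via RK4 */
            y[n+1] = rk4_step(f, x[n], y[n], h);
            x[n+1] = x[n] + h;
        }
        for (int n = 0; n <= 3; n++) fv[n] = f(x[n], y[n]);
        for (int n = 3; n < 10; n++) {
            /* Adams-Bashforth predictor:
               y_(n+1) = y_n + h/24 ( 55 f_n - 59 f_(n-1) + 37 f_(n-2) - 9 f_(n-3) ) */
            double p = y[n] + h / 24.0 * (55.0 * fv[n] - 59.0 * fv[n-1]
                                        + 37.0 * fv[n-2] - 9.0 * fv[n-3]);
            x[n+1] = x[n] + h;
            /* Adams-Moulton corrector, one pass:
               y_(n+1) = y_n + h/24 ( 9 f_(n+1) + 19 f_n - 5 f_(n-1) + f_(n-2) ) */
            y[n+1] = y[n] + h / 24.0 * (9.0 * f(x[n+1], p) + 19.0 * fv[n]
                                      - 5.0 * fv[n-1] + fv[n-2]);
            fv[n+1] = f(x[n+1], y[n+1]);
        }
        /* y' = y, y(0) = 1  =>  y(1) = e ~ 2.718282. */
        printf("y(1) ~ %.6f\n", y[10]);
        return 0;
    }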

Embedded Runge-Kutta Methods

Embedded Runge-Kutta methods are used to estimate not only the solution of an initial value problem but also the error. These methods are used in adaptive techniques which control the step size so that the final answer will be within a preassigned tolerance.
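
A sketch of the idea with the simplest embedded pair: a second order Heun step with an embedded first order Euler estimate. The difference between the two estimates serves as the local error, a step is accepted when the error is below the tolerance, and the step size is rescaled for the next attempt; the safety factor and tolerance policy are illustrative.

    #include <stdio.h>
    #include <math.h>

    static double f(double x, double y) { (void)x; return y; }   /* y' = y */

    int main(void)
    {
        double x = 0.0, y = 1.0, h = 0.1, tol = 1.0e-6, xend = 1.0;
        while (x < xend) {
            if (x + h > xend) h = xend - x;        /* do not overshoot */
            double k1 = f(x, y);
            double k2 = f(x + h, y + h * k1);
            double y2 = y + 0.5 * h * (k1 + k2);   /* order 2 (Heun)   */
            double err = fabs(0.5 * h * (k2 - k1));/* |Heun - Euler|   */
            if (err <= tol) { x += h; y = y2; }    /* accept the step  */
            /* rescale: safety factor 0.9; exponent 1/2 matches the
               O(h^2) local error of the lower order (Euler) estimate */
            h *= 0.9 * sqrt(tol / (err > 1.0e-15 ? err : 1.0e-15));
        }
        /* y' = y, y(0) = 1  =>  y(1) = e ~ 2.718282. */
        printf("y(1) ~ %.6f\n", y);
        return 0;
    }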

Gragg-Bulirsch-Stoer Method

The Gragg-Bulirsch-Stoer method is an adaptive method which uses Gragg's modified midpoint method to estimate the solution of an initial value problem for various step sizes. The estimates are fitted with a "diagonal" rational function of the step size, and the limit as the step size tends to zero is taken as the final estimate. This technique can be extremely useful, especially in the neighborhood of poles or other singularities.
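
A sketch of the extrapolation step, reusing gragg_step from the modified midpoint sketch above. For brevity it uses polynomial (Richardson) extrapolation in h^2, exploiting the even error expansion of Gragg's method, rather than the diagonal rational extrapolation of the original method; there is no step size control, and all names are illustrative.

    #include <stdio.h>

    static double f(double x, double y) { (void)x; return y; }   /* y' = y */

    /* Gragg's modified midpoint method, as in the sketch above. */
    double gragg_step(double (*f)(double, double),
                      double x0, double y0, double H, int n)
    {
        double h = H / n, zprev = y0, z = y0 + h * f(x0, y0);
        for (int m = 1; m < n; m++) {
            double znext = zprev + 2.0 * h * f(x0 + m * h, z);
            zprev = z;
            z = znext;
        }
        return 0.5 * (z + zprev + h * f(x0 + H, z));
    }

    /* Cross [x0, x0 + H] with n = 2, 4, 6, ... substeps and extrapolate
       the results to zero step size with a Neville-style tableau in h^2. */
    double gbs_step(double (*f)(double, double),
                    double x0, double y0, double H)
    {
        enum { KMAX = 8 };
        double T[KMAX][KMAX];                     /* extrapolation tableau */
        for (int k = 0; k < KMAX; k++) {
            int n = 2 * (k + 1);                  /* substep counts 2,4,6,... */
            T[k][0] = gragg_step(f, x0, y0, H, n);
            for (int j = 1; j <= k; j++) {
                double r = (double)(k + 1) / (k - j + 1);   /* h_(k-j) / h_k */
                T[k][j] = T[k][j-1] + (T[k][j-1] - T[k-1][j-1]) / (r * r - 1.0);
            }
        }
        return T[KMAX-1][KMAX-1];                 /* best extrapolated value */
    }

    int main(void)
    {
        /* y' = y, y(0) = 1  =>  y(1) = e ~ 2.718281828459045. */
        printf("y(1) ~ %.12f\n", gbs_step(f, 0.0, 1.0, 1.0));
        return 0;
    }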

Methods for Second-Order Differential Equations

Second order differential equations frequently arise in physics and engineering problems. The fundamental equations of electrodynamics, quantum mechanics, and even classical mechanics are cast as partial differential equations which, when solved, frequently lead to second order ordinary differential equations. Although it is possible to transform any nth order ordinary differential equation to a system of n first order equations, numerical methods have been developed to deal with second order ordinary differential equations directly.
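
One such direct method is the Störmer (leapfrog) recursion for equations of the form y'' = f(x,y), sketched below for the oscillator y'' = -y; the start-up value comes from a Taylor expansion, and all names are illustrative.

    #include <stdio.h>

    static double f(double x, double y) { (void)x; return -y; }   /* y'' = -y */

    int main(void)
    {
        double h = 0.001;
        double yprev = 1.0;                        /* y(0) = 1, y'(0) = 0 */
        /* start-up from the Taylor expansion
           y(h) ~ y(0) + h y'(0) + (h^2/2) y''(0), with y'(0) = 0 */
        double y = yprev + 0.5 * h * h * f(0.0, yprev);
        for (int n = 1; n < 1000; n++) {
            /* y_(n+1) = 2 y_n - y_(n-1) + h^2 f(x_n, y_n) */
            double ynext = 2.0 * y - yprev + h * h * f(n * h, y);
            yprev = y;
            y = ynext;
        }
        printf("y(1) ~ %f   (exact cos(1) ~ 0.540302)\n", y);
        return 0;
    }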