Differential Equations
A (first order) differential equation of the form y' = f(x,y) expresses the rate of change of the dependent variable y with respect to the independent variable x as a function f(x,y) of both the independent variable x and the dependent variable y. A solution of the differential equation y' = f(x,y) is a function, which by abuse of language we denote as y(x), such that y'(x) = f(x,y(x)).
The existence and uniqueness theorem for first order ordinary differential equations states that if the function f is continuous and satisfies a Lipschitz condition on some closed region U×V, with x in U and y in V, containing the interior point (x0, y0), then there exist a unique solution y(x) and a subinterval U0 containing x0 and contained in U such that y(x0) = y0 and (x, y(x)) is contained in U×V for x in U0. The condition y(x0) = y0 is called the initial condition. If the function f is continuously differentiable in the closed region, then the Lipschitz condition is automatically satisfied. Furthermore, if the function f is C^n, i.e. f has continuous derivatives up to order n, for n > 0, in the closed region, then the solution is C^(n+1) in a neighborhood of x0. Note that if the Lipschitz condition is not satisfied, then a solution may fail to exist or more than one solution may exist.
If the function f(x,y) is independent of the dependent variable y, then the solution is
y(x) = y0 + ∫_{x0}^{x} f(t) dt, where the integral can be approximated by one of the numerical techniques in the section Numerical Integration.
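As a minimal sketch of this quadrature approach, the following uses the composite trapezoidal rule (one of the techniques from the Numerical Integration section; the function name and the choice of f(x) = cos(x) are illustrative, not part of the library):

```python
import math

def solve_by_quadrature(f, x0, y0, x, n=1000):
    """Approximate y(x) = y0 + integral from x0 to x of f(t) dt
    using the composite trapezoidal rule with n subintervals.
    Illustrative helper; any quadrature rule could be substituted."""
    h = (x - x0) / n
    total = 0.5 * (f(x0) + f(x))          # endpoint terms get weight 1/2
    for i in range(1, n):
        total += f(x0 + i * h)            # interior terms get weight 1
    return y0 + h * total

# Example: y' = cos(x), y(0) = 0, whose exact solution is y(x) = sin(x).
approx = solve_by_quadrature(math.cos, 0.0, 0.0, 1.0)
```

With n = 1000 subintervals the trapezoidal error is O(h^2), so the result agrees with sin(1) to roughly six decimal places.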
If the function f(x,y) is independent of the independent variable x, the differential equation y' = f(y) is said to be an autonomous differential equation. The methods given below apply to both autonomous and non-autonomous differential equations, but in the autonomous case the user may want to edit the source files and remove the references to the independent variable in the calls to f, since f does not depend on it.
First Order System of Differential Equations - Higher Order Differential Equation
The dependent variable y can be a vector quantity y in which case we have a system of first order differential equations. The existence and uniqueness theorem stated above immediately generalizes to include the situation in which y is an n-dimensional vector.
An nth order differential equation y^(n) = f(x, y, y', y^(2), ..., y^(n-1)) can be transformed to a system of first order differential equations by defining the vector quantity z = (z1, ..., zn) by z1 = y, z2 = y', z3 = y^(2), ..., zn = y^(n-1); then z1' = z2, z2' = z3, ..., z(n-1)' = zn, and zn' = f(x, z1, ..., zn).
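The transformation can be sketched for the second order case as follows (the function name is illustrative; the example equation y'' = -y is chosen only because its solutions are well known):

```python
def second_order_to_system(f):
    """Given f(x, y, y') for the equation y'' = f(x, y, y'), return the
    right-hand side F(x, z) of the equivalent first-order system
    z' = F(x, z), where z = (z1, z2) = (y, y')."""
    def F(x, z):
        z1, z2 = z
        return (z2, f(x, z1, z2))   # z1' = z2, z2' = f(x, z1, z2)
    return F

# Example: y'' = -y becomes z1' = z2, z2' = -z1.
F = second_order_to_system(lambda x, y, yp: -y)
```

Any first order solver below can then be applied to the system F, treating z as a vector.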
A second order differential equation can be transformed to a first order system as described above, but because second order differential equations are ubiquitous in physics and engineering, special methods have been developed for solving them; see Methods for Second-Order Differential Equations.
Numerical Approximation of the Solutions of a Differential Equation
For certain classes of differential equations, a solution can be found by finding an integrating factor and solving the differential equation exactly, or expanding the solution in terms of a Taylor series and summing, or (rarely) using Picard's theorem, or expanding the solution in terms of a class of orthogonal functions. Another approach, the one for which the following source code is developed, is called the difference method. For the difference method, the solution of the differential equation is approximated at discrete points, usually but not necessarily equally-spaced. The most commonly used difference methods are Euler's Method, the Trapezoidal Method, the Midpoint Method, the Modified Midpoint Method (Gragg's Method), Runge-Kutta Methods, Predictor-Corrector Methods, and certain adaptive techniques such as the embedded Runge-Kutta methods and the Gragg-Bulirsch-Stoer method.
The question of which method to use to solve a differential equation depends on the application. If one wants to write a general differential equation solver, then one should consider the adaptive methods: the embedded Runge-Kutta methods and the Gragg-Bulirsch-Stoer method. It is also possible to adapt the predictor-corrector methods into adaptive techniques. If one wants to solve a particular parametric family of differential equations, and the application is a real-time application in which the time spent solving the equation must be known, then the implicit methods have to be avoided and only the explicit methods considered: Euler's method, the midpoint method, the modified midpoint method, and the Runge-Kutta methods. The questions concerning the step size(s) and, in the case of Runge-Kutta methods, the order need to be determined a priori by solving the parametric differential equations for different values of the parameters and choosing the largest step size(s) and smallest order for which the numerical solution is sufficiently accurate. If the application is not real-time, then the implicit methods do not have to be avoided and any method can be considered.
Euler's Method
Euler's method is the explicit method given by
y_{n+1} = y_n + h f(x_n, y_n) for n = 0, 1, ... .
Locally Euler's method is a second order method and therefore globally a first order method. As a rule, Euler's method is useful only for a few steps with small step sizes; however, Euler's method together with Richardson extrapolation may be used to increase the order.
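Euler's method can be sketched in a few lines (the function name and the test problem y' = y are illustrative; the source code accompanying this text may organize it differently):

```python
import math

def euler(f, x0, y0, h, steps):
    """Euler's method: advance y_{n+1} = y_n + h f(x_n, y_n)
    for the given number of steps and return the final y."""
    x, y = x0, y0
    for _ in range(steps):
        y = y + h * f(x, y)   # one explicit Euler step
        x = x + h
    return y

# Example: y' = y, y(0) = 1, integrated to x = 1; exact value is e.
y_end = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
```

Consistent with the global first order accuracy noted above, halving h roughly halves the error at x = 1.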
The k-step Adams-Bashforth method is the explicit method given by
y_{n+1} = y_n + h * ( a_0 f(x_n, y_n) + a_1 f(x_{n-1}, y_{n-1}) + ... + a_{k-1} f(x_{n-k+1}, y_{n-k+1}) ), where a_0, a_1, ..., a_{k-1} are constants.
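As a concrete instance, the two-step Adams-Bashforth method has coefficients a_0 = 3/2 and a_1 = -1/2. A minimal sketch follows; being a multistep method it needs a starter value y1, which here is taken from the exact solution of the test problem for simplicity (in practice a one-step method such as Runge-Kutta supplies it):

```python
import math

def adams_bashforth2(f, x0, y0, y1, h, steps):
    """Two-step Adams-Bashforth:
    y_{n+1} = y_n + h * (3/2 f(x_n, y_n) - 1/2 f(x_{n-1}, y_{n-1})).
    y1 approximates y(x0 + h) and must come from a starter method."""
    xs = [x0, x0 + h]
    ys = [y0, y1]
    for n in range(1, steps):
        fn  = f(xs[n], ys[n])
        fn1 = f(xs[n - 1], ys[n - 1])
        ys.append(ys[n] + h * (1.5 * fn - 0.5 * fn1))
        xs.append(xs[n] + h)
    return ys[-1]

# Example: y' = y, y(0) = 1, integrated to x = 1 with h = 0.01.
h = 0.01
y_end = adams_bashforth2(lambda x, y: y, 0.0, 1.0, math.exp(h), h, 100)
```

Because only previously computed values of f appear on the right-hand side, the method is explicit: each step costs one new evaluation of f.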
The k-step Adams-Moulton method is the implicit method given by
y_{n+1} = y_n + h * ( b_0 f(x_{n+1}, y_{n+1}) + b_1 f(x_n, y_n) + ... + b_k f(x_{n-k+1}, y_{n-k+1}) ), where b_0, b_1, ..., b_k are constants. The Adams-Bashforth prediction can be used as an initial estimate for the implicit Adams-Moulton method. The result is then iterated by replacing the old estimate of y_{n+1} with the new estimate until the change in the estimates is sufficiently small. If the step size is sufficiently small, this iterative approach is contractive and convergence is guaranteed.
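The predictor-corrector iteration can be sketched with the simplest members of each family: the one-step Adams-Bashforth method (Euler, a_0 = 1) as predictor and the one-step Adams-Moulton method (the trapezoidal rule, b_0 = b_1 = 1/2) as corrector. The function name and tolerances are illustrative choices, not part of the library:

```python
import math

def predictor_corrector(f, x0, y0, h, steps, tol=1e-12, max_iter=50):
    """One-step Adams-Bashforth predictor (Euler) followed by the
    one-step Adams-Moulton corrector (trapezoidal rule), iterated
    until successive corrector estimates agree to within tol."""
    x, y = x0, y0
    for _ in range(steps):
        fn = f(x, y)
        y_new = y + h * fn                 # predictor: explicit Euler
        for _ in range(max_iter):
            # corrector: y_{n+1} = y_n + h/2 (f_n + f(x_{n+1}, y_{n+1}))
            y_next = y + 0.5 * h * (fn + f(x + h, y_new))
            if abs(y_next - y_new) < tol:  # change sufficiently small
                y_new = y_next
                break
            y_new = y_next                 # replace old estimate
        x, y = x + h, y_new
    return y

# Example: y' = y, y(0) = 1, integrated to x = 1 with h = 0.01.
y_end = predictor_corrector(lambda x, y: y, 0.0, 1.0, 0.01, 100)
```

For this problem the inner iteration is contractive whenever h * L < 2, where L is the Lipschitz constant of f in y, so with h = 0.01 it converges in a handful of iterations per step.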