The Runge-Kutta and Hamming Methods for Numerical Solution of Ordinary Differential Equations with an Example. C Code and Octave Script
1. Abbreviations
2. Introduction
Most differential equations cannot be solved in analytical (exact) form, so various numerical (approximate) solution methods are used. Consider a first-order differential equation of the form:
At the same time the initial conditions are set:
This means that the Cauchy problem is defined, for which equation (5) has a unique solution.
We will look for the solution y(t) at discrete points:
It is known that the derivative determines the slope of the tangent to the function at a given point. Based on this, formulas (2), (3), (5), and Figure 2-1, Euler's formula can be written:
Thus it may seem that we can solve the differential equation (5) using formula (11):
Unfortunately, this method does not always provide a stable solution: computation errors can grow at each new step and lead to an incorrect result. For more information see [1]. The method also provides only low (first-order) accuracy. Below, stable methods for solving differential equations are considered: the Runge-Kutta, Hamming, and Adams methods, and a model-based method.
Figure 2-1: Euler Formula Explanation
Now let there be an n-th order differential equation:
This equation can be reduced to a system of first-order differential equations by introducing new auxiliary functions:
and these equations are solved in parallel.
In the article “Control of Non Linear Object Using the Vostrikov’s Localization Method. C Code and Octave Script”, a control object described by a nonlinear differential equation was considered:
where
u(t) – input signal
y(t) – output signal
Next, as a practical example, we will solve the Cauchy problem for differential equation (14) using the Runge-Kutta, Hamming, and model methods with:
and the initial conditions:
Let’s represent (14), (16) as a system of first-order differential equations by introducing the auxiliary function z(t):
with the initial conditions:
Let’s take the input signal u(t):
Let’s choose the sampling step for the time function:
3. Runge-Kutta Method
3.1 Description of the Runge-Kutta Method
The four-stage Runge-Kutta algorithm is used most often; it provides O(h^5) accuracy at each step.
Let it be given:
Then the next value of “y” is calculated using the formulas:
The easiest way to explain this method is geometrically. Four slopes of tangents to the desired function “y” are calculated:
and these values are averaged with the weights 1/6, 1/3, 1/3, 1/6 according to the Simpson formula (see Appendix 1), after which Δy is calculated:
As a result we get the next value of “y”:
3.2 Example of the Runge-Kutta Method
Now let’s apply this method to our example (17), (18):
and initial conditions:
Input signal u(t):
Let’s choose the sampling step for the time function:
And let the values of the functions already be calculated at step (n-1):
Then the values “y” and “z” for step “n” are calculated:
Figure 3-1 shows the solution “y” of (17), (18) by the Runge-Kutta method:
Figure 3-1: Solution of (17), (18) by the Runge-Kutta Method with h = Ts = 0.01 s
Note: See the program code in NumericalDifferentialEquation.m/c/h
4. Hamming Method
4.1 Description of the Hamming Method
The solution of differential equation (5), (6) by the Hamming method (see [1]) is built in two steps:

predictor
Using (25), the prediction of “y” is calculated. Next, the prediction of the derivative is calculated from (5):
This prediction is substituted into formula (28) to calculate the corrected value of “y”
corrector
Next we compensate for the error (29) and get the result:
Note: (25)–(31) provide O(h^5) accuracy.
Here, however, I will consider a simplified version of the Hamming method with O(h^3) accuracy, which is also given in [1]:

predictor

corrector
As a result, taking (33) and (36) into account:
Note: For the proof of (33) and (36), see Appendix 2.
Two points of prehistory are needed to start the algorithm:
To calculate (40), several terms of the Taylor series can be used:
Formulas (41), (42) contain unknowns:
From (5), (43) is calculated:
Note: (40) can also be calculated with the Runge-Kutta method.
4.2 Example of the Hamming Method
Now let’s apply this method to our example (17), (18):
with the initial conditions:
Input signal u(t):
Let’s choose the sampling step for the time function:
Let the values of the functions already be calculated at step (n-1):
Then the solution for the next step n is calculated:
Before starting the algorithm it is necessary to calculate:
The values in (47) can be calculated using the formulas:
From (14) and (16) we can write:
To obtain the following derivatives, (14) is differentiated several times:
Then:
Now substitute the values (51) into (48) to get the solution of (47):
Note: The solution of (47) can also be found using the Runge-Kutta method.
Figure 4-1 shows the solution “y” of (17), (18) by the Hamming method, and Figure 4-2 shows the error estimate at each step of the calculation:
Figure 4-1: Solution of (17), (18) by the Hamming Method with h = Ts = 0.01 s
Figure 4-2: Step Error of the Solution of (17), (18) by the Hamming Method with h = Ts = 0.01 s
With the Hamming method, the absolute calculation error can be estimated at each step; if the error is large, the calculation of the current step can be repeated with a smaller sampling step, thereby increasing the accuracy of the calculations.
Note: See the program code in NumericalDifferentialEquation.m/c/h
5. Conclusions

The Runge-Kutta method (see [1], [3]) is easy to implement and provides O(h^5) accuracy at each step. The method is based on calculating several slopes of tangents to the desired function “y” at the current step.

The Hamming method (see [1]) is based on the predictor-corrector principle and uses the prehistory obtained from previous steps. The method also provides O(h^5) accuracy at each step, but in this article I gave a simplified version with O(h^3) accuracy. This variant can also be applied in practice, only with a smaller step h. The advantage of the method is the estimate of the calculation error: if the error is large, the calculation of the current step can be repeated with a smaller h.
In Figure 5-1, the solutions of (17), (18) by both methods (Runge-Kutta and Hamming) are plotted together. The graphs match completely, which shows that the step h = 0.01 s is chosen correctly for both methods.
Figure 5-1: The Solution of (17), (18) by the Runge-Kutta and Hamming Methods together with h = Ts = 0.01 s
Nowadays, in high-precision applications, the Adams methods are most often used to solve differential equations (see [2]). They provide a stable solution with any required accuracy. For a description of the method, see Appendix 3. Nevertheless, the two methods above (Runge-Kutta and Hamming) can also be used effectively in simple applications, where the accuracy can be ensured by choosing the sampling step h.

In the article “Control of Non Linear Object Using the Vostrikov’s Localization Method. C Code and Octave Script”, differential equation (17), (18) was solved using a model based on integrators. See Figure 5-2.
Figure 5-2: The Model Solution of (17), (18)
Figure 5-3 shows the results of solving differential equation (17), (18) by the Runge-Kutta and model methods at h = Ts = 0.01 s. It is obvious that the solution by the model method has low accuracy. This means that this sampling step is not suitable for the model. An acceptable result for the model can be obtained at h = Ts < 0.001 s. For more information, see the article “Control of Non Linear Object Using the Vostrikov’s Localization Method. C Code and Octave Script”.
Figure 5-3: The Solution of (17), (18) by the Runge-Kutta and Model Methods together with h = Ts = 0.01 s
6. Octave GNU file NumericalDifferentialEquation.m
The .m script generates the solution of (17), (18) with the Runge-Kutta, Hamming, and model methods, the step signal, and the plots. The following functions are defined in this file:
% Numerical Solution Of the Differential Equation with the Runge-Kutta Method
% y''(t) = a_coef_0*y^2(t) + a_coef_1*y'(t) + b_coef*u(t)
% Define the new function z(t)
% y'(t) = z(t)
% z'(t) = a_coef_0*y^2(t) + a_coef_1*z(t) + b_coef*u(t)
% Input parameters:
% u_input – input signal
% a_coef_1 – object parameter
% a_coef_0 – object parameter
% b_coef – object parameter
% y_start – initial value of the y output
% y_deriv1_start – initial value of the y' output
% Ts – sample time
% Output parameter:
% y_output – output
function [y_output] = RungeKuttaOfDifferentialEquation(u_input, a_coef_1, a_coef_0, b_coef, y_start, y_deriv1_start, Ts)
% Numerical Solution Of the Differential Equation with the Hamming Method
% y''(t) = a_coef_0*y^2(t) + a_coef_1*y'(t) + b_coef*u(t)
% Define the new function z(t)
% y'(t) = z(t)
% z'(t) = a_coef_0*y^2(t) + a_coef_1*z(t) + b_coef*u(t)
% Input parameters:
% u_input – input signal
% a_coef_1 – object parameter
% a_coef_0 – object parameter
% b_coef – object parameter
% y_start – initial value of the y output
% y_deriv1_start – initial value of the y' output
% y2 – y_output(2) for step 2
% y2_deriv1 – y' for step 2
% Ts – sample time
% Output parameter:
% y_output – output
function [y_output, y_error] = HammingOfDifferentialEquation(u_input, a_coef_1, a_coef_0, b_coef, y_start, y_deriv1_start, y2, y2_deriv1, Ts)
% Numerical Model Solution Of the Differential Equation:
% y''(t) = a_coef_0*y^2(t) + a_coef_1*y'(t) + b_coef*u(t)
% Input parameters:
% u_input – input signal
% a_coef_1 – object parameter
% a_coef_0 – object parameter
% b_coef – object parameter
% y_start – initial value of the y output
% y_deriv1_start – initial value of the y' output
% Ts – sample time
% Output parameter:
% y_output – output
function [y_output] = ModelSolutionOfDifferentialEquation(u_input, a_coef_1, a_coef_0, b_coef, y_start, y_deriv1_start, Ts)
% Float Table to File
% Support CCode
% Input parameters:
% ParameterVector – data array
% FileNameString – output file name
function FloatParamVector2file(ParameterVector, FileNameString)
7. C Language NumericalDifferentialEquation.c/h
The demo SW implements the solution of (17), (18) with the Runge-Kutta, Hamming, and model methods. See the files NumericalDifferentialEquation.c/h.
Functions:
/*
Differential Equation Initialization
Initialization of the working arrays
Input
float y_start – initial value of y
float y_deriv_start – initial value of y'
float y1 – y value of step 1 (prehistory)
float y1_deriv – y' value of step 1 (prehistory)
Return: void
*/
void DifferentialEquationInit(float y_start, float y_deriv_start, float y1, float y1_deriv);
/*
Runge-Kutta Method
Input
float u_control – control value
RungeKutta_data_s* history_ptr – previous history
Return
float – y_RungeKutta output value
*/
float RungeKuttaOfDifferentialEquation(float u_control, RungeKutta_data_s* history_ptr);
/*
Hamming Method
Input
float u_control – control value
Hamming_data_s* history_ptr – previous history
float* y_error_ptr – current step error
Return
float – y_Hamming output value
*/
float HammingOfDifferentialEquation(float u_control, Hamming_data_s* history_ptr, float* y_error_ptr);
/*
Bilinear Integrator for Model Solution
Laplace: Integrator(s) = 1/s
From bilinear transform => Integrator(z) = (Ts/2)*(1 + z^-1)/(1 - z^-1)
y(n) = y(n-1) + (Ts/2)*(x(n) + x(n-1))
Input
float input – input
model_integrator_data_s* history_ptr – integrator history
Return
float – output
*/
float bilinear_integrator(float input, model_integrator_data_s* history_ptr);
/*
Model Solution
Input
float u_control – control value
Return
float – y_model output value
*/
float ModelSolutionOfDifferentialEquation(float u_control, model_integrator_data_s* history1_ptr, model_integrator_data_s* history2_ptr);
8. Download the NumericalDifferentialEquation.c/h/m
You can download the files:
NumericalDifferentialEquation.m
NumericalDifferentialEquation.c
NumericalDifferentialEquation.h
with the button:
9. Literature / References
[1] Richard W. Hamming, “Numerical Methods for Scientists and Engineers” (Dover Books on Mathematics), Dover Publications Inc., ISBN-10: 0-486-65241-6 / ISBN-13: 978-0-486-65241-2, 1987
[2] Adams Numerical Method: Ordinary Differential Equations Solutions with the Linear Multistep Methods
[3] Runge–Kutta methods
Appendix 1. Simpson’s rule
Let’s take a quadratic function:
Now let’s find such coefficients:
in which the equality is true:
Substitute (A1-1) into (A1-3):
Solving the integral on the right side:
Then:
Substituting (A1-6) into (A1-3) gives the Simpson formula with the weights 1/6, 2/3, 1/6:
Notes:

The formula (A1-7) gives an exact result for the functions:
As well as for linear combinations of these functions:
In other cases the Simpson formula gives an approximate value

The Runge-Kutta algorithm uses the weights 1/6, 1/3, 1/3, 1/6 in formula (22), since there are two different estimates of f((a+b)/2).
Appendix 2. Hamming Errors (33), (36)
Figure A2-1: An Example of the Function y(t): y_(n-2), y_(n-1), y_n
Let’s prove that the estimate:
gives an error:
Proof
Let’s write down three terms of the Taylor series with a remainder term:
at the same time we will assume that:
Add (A2-3) and (A2-4), taking (A2-5) into account:
Then:
Subtract (A2-4) from (A2-3), taking (A2-5) into account:
As a result:
Thus the error value (A2-2) is obtained.
Now let’s prove that the estimate:
gives an error:
Proof
Let’s write down three terms of the Taylor series with a remainder term:
at the same time we will also assume that:
Subtract (A2-10) from (A2-11), taking (A2-12) into account:
At the same time:
From (A2-13) and (A2-14):
Thus (A2-9) is proved.
Appendix 3. Adams Method
Consider the Cauchy problem for a first-order differential equation:
This equation can be solved using a combination of two Adams methods:

The explicit Adams-Bashforth methods:
Two-Step Method
Three-Step Method
… 
The implicit Adams-Moulton methods:
Two-Step Method
Three-Step Method
…where
For example, let’s use formula (A3-5) to solve (A3-1):

Before starting the algorithm, it is necessary to know/calculate:

To start the iterations using formula (A3-5), it is necessary to calculate the iteration initial value:
This start iteration value is calculated using formula (A3-3).
The iterations are performed using (A3-5) until the required accuracy is achieved:

Then the next value of “y” is calculated, again using points 2 and 3.