Matrix Functions for Digital Signal Processing. C Code and Octave Script.
1. Abbreviations




2. Introduction
A mathematical model of an object (plant) can be described by a system of differential equations. Let the model be described by a system of linear differential equations:

Equations (16) can be written in a simpler matrix form:


Now let’s consider a one-dimensional (non-matrix) version of equation (19):

where x(t) is the state function and u(t) is the control function.
The solution of equation (25) is:

In this case the first term of the solution (26):

The second term of the solution:

The solution of the matrix equation (19) has a form similar to that of the one-dimensional equation (25) and is written as:

Unfortunately, it is impossible to write a simple formula for Φ(t,τ) as in the one-dimensional case (27). Here are a few properties of this matrix:

If the matrix F in (19) does not depend on time then the matrix equation will take the form:

And the solution of equation (38) is:

Note:
- Thus, in order to find a solution to equation (38), we must be able to calculate the matrix exponential function (40)!
So far we have been considering a continuous model. In the case of DSP it becomes necessary to write down the solution (33) for discrete moments of time:

Then using (33a):

We assume that the control u(t) remains constant over a small sampling interval Ts:
And introducing notation:

Then:
Formula (47) is used to describe discrete linear dynamical systems in DSP applications!
For the case where F does not depend on time, formulas (46) and (47) become:

Conclusions
- It becomes necessary to calculate matrix functions in DSP tasks in order to describe dynamic systems
- The calculation of the matrix exponential function is an important task in DSP
- The calculation of the inverse matrix and of matrix functions applies to square n×n matrices
3. Example
Consider as an example an L (inductor), R (resistor), C (capacitor) circuit whose input voltage is u(t); it is required to calculate the output voltage u_out(t). See Figure 3-1.

Figure 3-1: LRC circuit
This circuit is described by a second order linear differential equation:

Further introducing the usual notation:

Rewrite (50) as:

Now let’s introduce the variables of the state vector:

And rewrite (52) in matrix form (38):

The solution (41) will take the form:

Take the following values for our example:

Then

Below I will show that:

Then:

Now we solve this task using the discrete form (48). Let’s choose the sampling time:

Equations (59) and (61) show the response of our dynamical system to the step function (56). See Figure 3-2:

Figure 3-2: Digital State Model: x1(t) = u_out(t). Step Response
4. Inverse Matrix
There is a need to calculate the inverse matrix in DSP tasks; see (58). The inverse matrix also appeared in the article “Kalman Filter with an example. C code and Octave script” for calculating the Kalman gain:
The definition of the inverse matrix is given by the formula (7):

The inverse matrix can be calculated by the formula:

Note: The inverse matrix exists for non-degenerate (nonsingular) matrices: det(A) ≠ 0
Using (62)-(66) we can calculate the inverse matrix (58) from our example:
Then

Now let’s check the correctness of the calculations:

4.1. Inverse Matrix. Cayley–Hamilton Method. Faddeev Algorithm.
The characteristic polynomial of an n×n square matrix A is the polynomial:
Using the Cayley–Hamilton theorem, which states:
It can be proved that the inverse matrix is calculated by the formula:

The formula (71) takes the simple form for a 2×2 matrix:

Using (72) we can calculate (58) again:

Formula (71) becomes cumbersome for large matrices. In this case it is convenient to use D. K. Faddeev’s recurrent algorithm:

Notes:
- At the last step n of Faddeev’s algorithm the matrix Bn is always the zero matrix; this result can be used to check the correctness of the calculations
- Faddeev’s algorithm also yields the determinant of the matrix (75) and its characteristic polynomial (76)
Now we calculate the inverse matrix, determinant and characteristic polynomial by the Faddeev method for the matrix A:


Then:

Note: see [1] for more information.
4.2. Inverse Matrix. Gauss–Jordan Elimination Method
Now let’s introduce the elementary operations on the rows of a matrix:
Now let there be some square matrix A for which it is necessary to find the inverse matrix. For example:

For A we compose the auxiliary rectangular matrix [A | I]:

Then, performing elementary row operations on this auxiliary matrix, we bring it to the form [I | B]:

The matrix B will be the desired inverse matrix for A:
Let’s continue the example (78):





As a result we get the inverse matrix:

Let’s check the correctness of the calculations:

5. Matrix Functions
5.1 Matrix Function: exp(At) using Laplace Transformation
In the example above (57) we had to calculate a matrix exponential containing the time variable:

To solve this task we will use the Laplace transform:

Here are the Laplace images (transforms) of some functions:

Table 5-1: Table of Laplace Originals and Images
The Laplace image of the function (83) corresponds to the second line of Table 5-1. Writing it in matrix form:
Then:

Using the formula (72) we can calculate the inverse matrix (86):

Obviously, the inverse Laplace transform of (87) gives us the desired function:

Now we can calculate the inverse Laplace transform for each element of the matrix (88) using lines 5 and 6 from Table 5-1:


Finally we get:

Now we can prove formula (60) using (89):

5.2 Eigenvalues and Eigenvectors of the Matrix. Transformation of Matrices to Diagonal or Jordan Form
Any matrix A:

can be interpreted as an operator that maps vector ‘x’ to vector ‘y’:

If the matrix (operator) A maps the vector ‘x’ to a vector proportional to itself, then such a vector is called an eigenvector of the matrix (operator) A, and the proportionality coefficient is called an eigenvalue of the matrix A:

where x – the eigenvector of the matrix A
λ – the eigenvalue of the matrix A
To find the eigenvalues of the matrix it is necessary to solve the equation:
This means finding the roots of the characteristic polynomial.
Further solving the matrix equation:

for ‘x’ we find the eigenvectors of the matrix.
Example
Let there be a matrix

If the eigenvalues of the matrix are distinct, the matrix can be converted to diagonal form, with the eigenvalues located on the diagonal. For this transformation an auxiliary matrix V is constructed whose columns are the eigenvectors of the matrix:

Using the matrix V we obtain a diagonal matrix:

Or in the reverse direction:
Let’s make a matrix V for our example from eigenvectors:

By the formula (72) we calculate the inverse matrix:

In the reverse direction:

Now let’s calculate the eigenvalues and eigenvectors of the matrix (60) from the DSP example.





Then

In the reverse direction:


This means that the characteristic polynomial has multiple (repeated) roots. Such a matrix can then be transformed to Jordan (quasi-diagonal) form. For example, let the matrix have the elementary divisors:
Then the Jordan form will take the form:

Elementary blocks of the Jordan form can be distinguished:

To convert the matrix to Jordan form an auxiliary matrix T is constructed:

The transformation is done according to the same formulas as for the diagonal matrix case:

The algorithm for constructing the matrix T is rather involved, and I will not give it here. See [1] for more information.
5.3 Matrix Functions. Taylor and Maclaurin Series
Matrix functions can be determined using the Taylor and Maclaurin series.
Taylor’s series:

Let’s write the Maclaurin series in matrix form for a few functions:

Take for example a diagonal matrix:

and substitute it into the formula (110):

Similarly substituting (114) into (112) and (113):

In general for an arbitrary function f and a diagonal matrix D:

Example
We can write for the zero matrix:

Example
For an identity matrix:

Now consider a Jordan block (matrix) of the form:

Using (109) it can be proved that the matrix function of (121) is:

Example

Substituting (98) or (108) into (110) we can get a general result:

Example
Using the example for the matrix (95) and the result (100), as well as the formula (124):


5.4 Matrix Function: exp(A). Pade Approximation
Notes:
- Using the eigenvalues and eigenvectors of the matrix to calculate matrix functions with (124), (125) is a time-consuming task.
- Using a Maclaurin (or Taylor) series limited to a finite number of terms also requires a lot of computation.
The use of Pade Approximation is more effective. For the exponential function this approximation has the form:

Now let’s rewrite (126) in matrix form:
This approximation works well when the matrix norm (see (13)-(15)) is small. Below we will require that the matrix norm be less than or equal to 0.5:
Note: in what follows I use the matrix 1-norm, see (13).
If (130) is fulfilled, then the relative approximation error (see [3], [4]) can be estimated by the formula:

The matrix formula (129) for q=4:

If the matrix norm is greater than 0.5, then the norm is reduced as follows:

Finally, the algorithm for calculating the matrix exponential function takes the form:

Example
Let’s take as an example the matrix from formula (60):


Notes:
- The result is obtained with good accuracy
- For more information see [3], [4]
6. Conclusions
- The task of calculating a matrix function arises when solving a system of differential equations (16)
- Matrix functions are calculated only for square matrices
- The classical method of calculating a matrix function uses formulas (124), (125) and is based on transforming the matrix to diagonal or Jordan form. This method is laborious and difficult to apply in practice
- If the matrix function depends on time, then the Laplace transform can be applied to calculate it. See the example using formulas (85)-(89)
- The simplest way to calculate a matrix function is the Pade approximation. See the example of calculating the matrix exponential function with formulas (135)-(137)
- In the article I did not describe two more ways to define and calculate a matrix function:
  - using the minimal annihilating polynomial of the matrix, see Appendix 1
  - using the residues of a meromorphic function (complex analysis), see Appendix 2
- A frequent task in matrix arithmetic is calculating the inverse matrix. In this case the Faddeev algorithm is effective, see (73)-(76). For the simple 2×2 case you can use formula (72)
7. GNU Octave Script MatrixFunctionsSupport.m
The Octave script produces all plots in the article, generates tests for the C code, and calculates the matrix functions.
Script Tests:
% Test 1
% expm(0), exp(I)
% Test 2
% Pade Approximation Example: exp(C)=Rq(C) with q=5
% Without Matrix Norm Check
% Test 3
% The matrices exp(D) and D are commuting
% Test 4
% Faddeev Algorithm Example
% Test 5
% Calculate the inverse matrix
% Support the Gauss Algorithm from the article
% Test 6
% Model of the L,C,R circuit. See the article.
% d^2(u_out(t))/dt^2 + 10 d(u_out(t))/dt + 100 u_out(t) = 100 u(t)
% Variant 1 (test 6)
% Continuous Model
% for the equation: d^2(u_out(t))/dt^2 + 10 d(u_out(t))/dt + 100 u_out(t) = 100 u(t)
% In Laplace form: 100/(s^2+10s+100)
% Variant 2 (test 6)
% Continuous State Model
% for the equation: d^2(u_out(t))/dt^2 + 10 d(u_out(t))/dt + 100 u_out(t) = 100 u(t)
% Variant 3 (test 6)
% Discrete State Model
% for the equation: d^2(u_out(t))/dt^2 + 10 d(u_out(t))/dt + 100 u_out(t) = 100 u(t)
% Test 7
% [0 0.01;-1 -0.1]
% Calculate the matrix eigenvalues
% Transform and Diagonal matrices
% Test 8
% expm([1 2;3 2])
% article example
% Test 9
% Pade approximation exp(x)=Rq(x) for q_val=4
% Scalar Case
% Test 10
% Pade approximation for q_val=4
% Matrix Case
% Article example
8. C Code file MatrixFunctions.c/h
The file supports matrices of the 2×2 size.
Functions:
/*
Matrix Inverse
Input
float* matrix_in – pointer to the input float matrix_in[4] with the 2×2 size
float* matrix_out – pointer to the output float matrix_out[4] with the 2×2 size
Note: A11 = matrix[0], A12 = matrix[1], A21 = matrix[2], A22 = matrix[3]
Return void
matrix_out – inverse of the matrix_in: matrix_out = (matrix_in)^-1
Note: if the matrix_in is not invertible (determinant=0)
then the zero matrix_out: [0.0 0.0; 0.0 0.0] will be returned
*/
void MatrixInverseFunc(float* matrix_in, float* matrix_out);
/*
Matrix Add
Input
float* matrix_in1 – pointer to the input float matrix_in1[4] with the 2×2 size
float* matrix_in2 – pointer to the input float matrix_in2[4] with the 2×2 size
float* matrix_out – pointer to the output float matrix_out[4] with the 2×2 size
Note: A11 = matrix[0], A12 = matrix[1], A21 = matrix[2], A22 = matrix[3]
Return void
matrix_out = matrix_in1 + matrix_in2
*/
void MatrixAddFunc(float* matrix_in1, float* matrix_in2, float* matrix_out);
/*
Matrix Subtraction
Input
float* matrix_in1 – pointer to the input float matrix_in1[4] with the 2×2 size
float* matrix_in2 – pointer to the input float matrix_in2[4] with the 2×2 size
float* matrix_out – pointer to the output float matrix_out[4] with the 2×2 size
Note: A11 = matrix[0], A12 = matrix[1], A21 = matrix[2], A22 = matrix[3]
Return void
matrix_out = matrix_in1 - matrix_in2
*/
void MatrixSubFunc(float* matrix_in1, float* matrix_in2, float* matrix_out);
/*
Matrix Multiply
Input
float* matrix_in1 – pointer to the input float matrix_in1[4] with the 2×2 size
float* matrix_in2 – pointer to the input float matrix_in2[4] with the 2×2 size
float* matrix_out – pointer to the output float matrix_out[4] with the 2×2 size
Note: A11 = matrix[0], A12 = matrix[1], A21 = matrix[2], A22 = matrix[3]
Return void
matrix_out = matrix_in1 x matrix_in2
*/
void MatrixMultiplyFunc(float* matrix_in1, float* matrix_in2, float* matrix_out);
/*
Number and Matrix Multiply
Input
float number – number
float* matrix_in – pointer to the input float matrix_in[4] with the 2×2 size
float* matrix_out – pointer to the output float matrix_out[4] with the 2×2 size
Note: A11 = matrix[0], A12 = matrix[1], A21 = matrix[2], A22 = matrix[3]
Return void
matrix_out = number x matrix_in
*/
void NumberAndMatrixMultiplyFunc(float number, float* matrix_in, float* matrix_out);
/*
Matrix Norm I
Input
float* matrix_in – pointer to the input float matrix_in[4] with the 2×2 size
Note: A11 = matrix[0], A12 = matrix[1], A21 = matrix[2], A22 = matrix[3]
Return float – Matrix Norm I = max(ABS(A11)+ABS(A21), ABS(A12)+ABS(A22))
*/
float MatrixNormI_Func(float* matrix_in);
/*
Matrix Exponential Function
Input
float* matrix_in – pointer to the input float matrix_in[4] with the 2×2 size
float* matrix_out – pointer to the output float matrix_out[4] with the 2×2 size
Note: A11 = matrix[0], A12 = matrix[1], A21 = matrix[2], A22 = matrix[3]
Return void
matrix_out = exp(matrix_in)
*/
void MatrixExpFunc(float* matrix_in, float* matrix_out);
9. Download the MatrixFunctionsSupport.m and MatrixFunctions.c/h
You can download the files:
MatrixFunctionsSupport.m
MatrixFunctions.c/h
with the button:
10. Literature / References
[1] F. R. Gantmacher, “Theory of Matrices”, Fizmatlit, ISBN 978-5-9221-0524-8, 2010
[2] Richard Bellman, “Introduction to Matrix Analysis”, Second Edition, SIAM: Society for Industrial and Applied Mathematics, Philadelphia, 1997
[3] Cleve Moler, Charles Van Loan, “Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later”, SIAM Review, 2003, Vol. 45, No. 1
[4] Nicholas J. Higham, “The Scaling and Squaring Method for the Matrix Exponential Revisited”, Department of Mathematics, University of Manchester
[5] Mohinder S. Grewal, Angus P. Andrews, “Kalman Filtering: Theory and Practice Using MATLAB”, Wiley-Interscience, John Wiley & Sons Inc., ISBN 0-471-26638-8, 2001
Appendix 1. Calculating a matrix function using a minimal polynomial of the matrix
1.1 The minimal polynomial of the matrix
The polynomial ψ(λ) of minimal degree that annihilates the matrix A is called the minimal polynomial of the matrix A:
By the Cayley–Hamilton theorem, every square matrix satisfies its own characteristic polynomial:
The minimal polynomial does not always coincide with the characteristic polynomial of the matrix.
Algorithm for finding the minimal polynomial of the matrix A:

Example from [1]

We can get the following result (for more details, see [1]):


1.2 Definition of a matrix function based on the minimal polynomial of the matrix
Let the matrix A have a minimal polynomial:

It is said that a function f(λ) is defined on the spectrum of the matrix A if the following values of the function are defined:
Now let’s select a polynomial g(λ) whose values on the spectrum of the matrix A coincide with the values of the function f(λ):

Then these matrix functions of A are equal:
Using this definition, the function of a matrix can be calculated in two ways, which are discussed below for 2×2 matrices.
1.3 Calculation of matrix function: Variant 1
Let there be a 2×2 matrix:

Then, given that for a 2×2 matrix (other than a scalar multiple of the identity matrix) the minimal polynomial coincides with the characteristic polynomial, we can write:


Solving (A1-20) and (A1-21) with respect to the coefficients of the polynomial ‘c’ and ‘d’, we obtain:

Substituting (A1-22) into (A1-20):

Finally:


Solving (A1-26) and (A1-27) with respect to the coefficients of the polynomial ‘c’ and ‘d’, we obtain:

Substituting (A1-28) into (A1-26):
Finally:

Example
Let there be a matrix:

Next using the formula (A1-24) for the case (A1-19), we get:

Let
Then

1.4 Calculation of matrix function: Variant 2
In this case the matrix function is calculated by the formula:

Let there be a 2×2 matrix:




Substituting (A1-42) and (A1-45) in (A1-39), we get:




Substituting (A1-51) and (A1-54) in (A1-48):

Example
Let there be a matrix

The eigenvalues of this matrix:

Next using the formula (A1-46) for the case (A1-38):

Let
Then

Let
Then

Note: the theory of matrix functions also turns out to be effective for calculating powers of a matrix:

Appendix 2. Calculation of a matrix function using the theory of functions of a complex variable
Let the function of a complex variable F(z) be holomorphic (differentiable) in a domain of the complex plane bounded by a closed curve G that encloses all the eigenvalues of the matrix A. Then F(A) is defined by the integral:

This is Cauchy’s integral in matrix form.
Example
Let there be a matrix:

Then

Then

Note
For any function of a complex variable φ(z), the integral over a closed curve is calculated by the formula:
The residue at a pole of order n can be calculated with the formula:

For a pole of the first order the formula (A2-6) takes the simple form:

In our example (A2-4) there are two poles of the first order:

Now calculate the integral for the first element of the matrix (A2-4):

Then

As a result, we get for (A2-9):

Similarly calculating integrals for all other elements of the matrix (A2-4) we obtain:

Based on the result (A2-12), we calculate the values of several matrix functions:







