Branch and Bound Method for Travelling Salesman Problem Using Octave GNU Tool
.
1. Travelling Salesman Problem
The traveling salesman problem is formulated as follows. There are n cities. The traveling salesman must visit each city exactly once and return to the first city at the end. At the same time, the total cost (distance) traveled should be minimal. Solving the problem by a direct search of all possible ways of bypassing the cities gives a complexity of n!, which corresponds to the “horrible” class of algorithms.
A similar problem was solved by the Irish mathematician William Rowan Hamilton. He considered a dodecahedron (a regular polyhedron consisting of twelve pentagonal faces, thirty edges and twenty vertices). The vertices were treated as cities. Moving along the edges, it was necessary to visit all the cities once and return to the original city. Such routes are called Hamiltonian cycles. Bypassing not all cities, or visiting one city more than once, leads to non-Hamiltonian cycles. I will use the terminology of Hamiltonian cycles below.
Mathematically, the traveling salesman problem can be described as follows. Let’s take, for example, 4 cities with the distances between them shown in Figure 1-1:

Figure 1-1: Four Cities and the Distances between them
.
The distance between cities can be set as a matrix:

For example, for Figure 1-1 the matrix will look like this:

The matrix is symmetric: c(i, j) = c(j, i). And the diagonal elements are set to Infinity: c(i, i) = Infinity.
Notes:
-
The traveling salesman problem has already been discussed in the articles: “Greedy Algorithm and Travelling Salesman Problem Using Octave GNU Tool” and “Dynamic Programming and Travelling Salesman Problem. C Code and Octave Script”
-
To write this article, I used [1]
.
2. Branch and Bound for Travelling Salesman Problem
The branch and bound method was developed in 1960 by Ailsa Land and Alison Doig. The method is based on lower bounds of the cost (distance) of the route for various sets of possible solutions. At the beginning, the lower bound of the route cost is estimated for the set of all Hamiltonian contours (cycles). Then all possible solutions are divided into two subsets. The first subset consists of Hamiltonian cycles that include the edge (i, j), i.e. the transition from city i to city j. The second subset consists of Hamiltonian cycles that do not include this edge; such a subset is denoted by (i, j) with an overbar.
For each of these subsets, the lower bound of the route cost (distance) is determined again. Comparing these bounds, we select the promising subset with the minimum lower bound. The selected subset is again divided into two subsets, similarly to the previous step, and the lower bounds of these subsets are evaluated again. The process of partitioning subsets is accompanied by the construction of a tree; see Figure 2-2 and Figure 3-1 below.
This partitioning continues until one or more sets of optimal edges remain, from which one or more optimal Hamiltonian cycles can be constructed.
Now let’s go back to the example with the distance matrix (1) and find the optimal routes. Let’s represent matrix (1) in a more convenient form: we will write the column numbers above the table, and the row numbers to the left of the matrix:

Step 0. The lower bound of the cost (distance) of the route is estimated for the set of all Hamiltonian contours (cycles). This estimate is obtained by reducing the matrix. The minimum values in each row are calculated. These minimum values are recorded in an additional column h_i_min, temporarily added to the matrix:

Let’s sum up these minimum values:

Now we subtract these minimum values from the corresponding rows of the matrix. After that the minimum values of the columns of the newly obtained matrix are determined. To do this, the row h_j_min is temporarily added to the matrix:

Let’s sum up these minimum values:

Let’s subtract these minimum values from the corresponding columns of the matrix. Now there is at least one zero element in each row and column of the matrix. Penalties are introduced for zero elements of the matrix. They are calculated according to the following rule. Let the element of the matrix (i, j) be zero, then the penalty is calculated as the sum of the minimum element in row i and the minimum element in column j. The values of these penalties are indicated in the matrix next to the zero elements in parentheses. The penalty shows the lengthening of the route if we do not use this zero element/edge in our solution. These penalties will be used below to select promising edges.

The lower bound is calculated as the sum of the minimum values in rows and columns:

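The reduction described above can be sketched in Octave (a minimal sketch with hypothetical names, not necessarily the ReduceMatrix implementation from section 4): subtract the row minima, then the column minima, and accumulate everything subtracted.

```octave
% Sketch of the reduction step (hypothetical names; the article's
% ReduceMatrix may differ in details). Subtract the row minima, then
% the column minima; the sum of everything subtracted is the lower
% bound contribution of the current subset of solutions.
function [matrix, bound] = reduce_matrix_sketch (matrix)
  bound = 0;
  for i = 1:rows (matrix)
    h = min (matrix(i, :));           % h_i_min
    if (isfinite (h) && h > 0)
      matrix(i, :) -= h;
      bound += h;
    endif
  endfor
  for j = 1:columns (matrix)
    h = min (matrix(:, j));           % h_j_min
    if (isfinite (h) && h > 0)
      matrix(:, j) -= h;
      bound += h;
    endif
  endfor
endfunction
```

One call on the original distance matrix performs Step 0: it returns the reduced matrix and the lower bound for the set of all Hamiltonian cycles.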
Now let’s move on to the next step. We need to choose a promising edge that can be part of the optimal solution. Such an edge is selected from the zero elements of matrix (6) with the maximum penalty. Here there are several zero elements in the matrix with the same penalty; let’s choose the first of them, edge (1,2).
Now let’s consider the subset of solutions without edge (1,2). The lower bound of this subset is calculated as the sum of the lower bound of the previous step (Step 0) and the penalty of edge (1,2):
Note
-
In the software implementation, result (8) is obtained by assigning the value Infinity to position (1,2) of the matrix and then reducing the matrix, which yields the lower bound (8)
.
Now let’s consider the subset of all solutions with edge (1,2). Delete the first row from the matrix to exclude choosing outgoing city 1 again in the following steps, and delete the second column to exclude arriving in city 2 again. It is also important to exclude non-Hamiltonian cycles from the solution; here it is the loop 1 => 2 => 1. For this simple case we set the reverse edge (2,1) to infinity (we assign the value Infinity to element (2,1) of the matrix). Next we reduce the newly obtained matrix:

Calculating the lower bound for edge (1,2):

Note
-
A non-Hamiltonian cycle is not always eliminated by setting the edge (j,i) to infinity; in other words, the reverse edge for edge (i,j) is not always (j,i). To determine the reverse edge correctly, it is necessary to consider all the edges already found in the corresponding branch of the tree and try to build a connected path from them. For example, see Figure 2-1:

Figure 2-1: Tied Path with the edge (m,k)
.
In this case the reverse edge is (m,k) and it must be set to Infinity. If such a connected path cannot be constructed, then (j,i) is taken as the reverse edge and is set to Infinity.
.
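The search for the reverse edge described in this note can be sketched as follows (a hypothetical helper, simpler than but in the spirit of SearchBackPath from section 4). Each already-selected edge is stored as a [from to] row; starting from the new edge (i, j), the path is extended forward from j and backward from i, and the reverse edge runs from the end of that path back to its beginning.

```octave
% Sketch: determine the reverse edge to forbid after selecting edge
% (i, j), given the edges already fixed on this branch of the tree.
function [row_back, col_back] = back_edge_sketch (edges, i, j)
  row_back = j;                        % end of the path, initially j
  col_back = i;                        % start of the path, initially i
  changed = true;
  while (changed)
    changed = false;
    for k = 1:rows (edges)
      if (edges(k, 1) == row_back)     % an edge leaves the current end
        row_back = edges(k, 2);
        changed = true;
      elseif (edges(k, 2) == col_back) % an edge enters the current start
        col_back = edges(k, 1);
        changed = true;
      endif
    endfor
  endwhile
endfunction
```

With edges = [1 2] already fixed and the new edge (2, 4), the sketch returns the edge (4, 1), exactly the reverse edge forbidden for the loop 1 => 2 => 4 => 1 in the example below. With no fixed edges it falls back to (j, i).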
Now we calculate the new penalties for the newly obtained reduced matrix:

We select the edge with the maximum penalty (2,4).
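The penalty rule used in these steps can be sketched as follows (a hypothetical helper, in the spirit of MaxPenaltyMatrix from section 4): for every zero element (i, j) the penalty is the smallest remaining element of row i plus the smallest remaining element of column j, and the zero with the maximum penalty gives the promising edge.

```octave
% Sketch: find the zero element of a reduced matrix with the maximum
% penalty (hypothetical helper, not the article's exact code).
function [penalty, row, col] = max_penalty_sketch (matrix)
  penalty = -Inf;
  row = 0;
  col = 0;
  [zi, zj] = find (matrix == 0);      % all zero elements
  for k = 1:numel (zi)
    i = zi(k);
    j = zj(k);
    r = matrix(i, :);  r(j) = Inf;    % exclude the zero element itself
    c = matrix(:, j);  c(i) = Inf;
    p = min (r) + min (c);            % penalty of the zero (i, j)
    if (p > penalty)
      penalty = p;  row = i;  col = j;
    endif
  endfor
endfunction
```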
Now let’s consider a subset of solutions without edge (2,4). The lower bound of this subset is calculated as the sum of the lower bound of the previous step (the step associated with edge (1,2)) and the edge penalty (2,4):
Now let’s consider a subset of all solutions with edge (2,4). Delete the second row from the matrix, and also delete the fourth column. We also exclude a non-Hamiltonian contour (cycle) from the solution. In this step, it’s a loop: 1 => 2 => 4 => 1. Let’s set the reverse edge (4,1) to infinity (we assign the value Infinity to the element of the matrix (4,1)). Next we reduce the newly obtained matrix:

Calculating the lower bound for edge (2,4):

Now we select the edge with the maximum penalty (3,1).
Now let’s consider a subset of solutions without edge (3,1). The lower bound of this subset is calculated as the sum of the lower bound of the previous step (the step associated with edge (2,4)) and the edge penalty (3,1):
Note
-
In the penultimate step and the last step of determining the optimal edges of a Hamiltonian cycle, it is not necessary to search for the reverse edge
.
Now let’s consider a subset of all solutions with edge (3,1). Delete the third row from the matrix and the first column:
Calculating the lower bound for edge (3,1):

From (17) we select the last possible edge (4,3):

As a result, the edges for the Hamiltonian cycle were obtained: (1,2) (2,4) (3,1) (4,3) with a cost (distance) of 128.
Now let’s return to the branch of the tree (8) without the edge (1,2) and analyze it further. Let’s set the element of the matrix (1,2) to Infinity, and then reduce the matrix:

We calculate penalties for zero elements of the matrix:

Now let’s choose the edge (4,2) with the maximum penalty.
Now let’s consider a subset of solutions without edge (4,2). The lower bound of this subset is calculated as the sum of the lower bound without edge (1,2) and edge penalty (4,2):
Now let’s consider a subset of all solutions with edge (4,2). Delete the fourth row from the matrix, and also delete the second column. We also exclude a non-Hamiltonian contour (cycle) from the solution. At this step, it’s a loop: 4 => 2 => 4. Let’s set the reverse edge (2,4) to infinity (we assign the value Infinity to the element of the matrix (2,4)). Next we reduce the newly obtained matrix and calculate the penalties for zero elements:

Calculating the lower bound for edge (4,2):

Now let’s choose the edge (1,3) with the maximum penalty.
Now let’s consider a subset of solutions without edge (1,3). The lower bound of this subset is calculated as the sum of the lower bound of the edge (4,2) and the edge penalty (1,3):
Now let’s consider a subset of all solutions with edge (1,3). Delete the first row from the matrix, and also delete the third column. We also exclude a non-Hamiltonian cycle from the solution. At this step, it’s a loop: 1 => 3 => 1. Let’s set the reverse edge (3,1) to infinity (we assign the value Infinity to the element of the matrix (3,1)). Next we reduce the newly obtained matrix:

Calculating the lower bound for edge (1,3):

We calculate penalties for zero elements of the matrix:

Now let’s choose the edge (2,1) with the maximum penalty.
Now let’s consider a subset of solutions without edge (2,1). The lower bound of this subset is calculated as the sum of the lower bound of the edge (1,3) and the edge penalty (2,1):
Now let’s consider a subset of all solutions with edge (2,1). Delete the second row from the matrix, as well as delete the first column:

Note
-
In the penultimate step and the last step of determining the optimal edges of a Hamiltonian cycle, it is not necessary to search for the reverse edge
.
Calculating the lower bound for edge (2,1):

From (30) we select the last possible edge (3,4):

As a result we obtained two optimal sets of edges. See (19) and (32).
The first solution (19) with a set of optimal edges (1,2) (2,4) (3,1) (4,3) and with a minimum cost (distance) of 128. Let’s make optimal paths out of these edges:
(1,2) => (2,4) => (4,3) => (3,1) or 1 => 2 => 4 => 3 => 1
(2,4) => (4,3) => (3,1) => (1,2) or 2 => 4 => 3 => 1 => 2
(3,1) => (1,2) => (2,4) => (4,3) or 3 => 1 => 2 => 4 => 3
(4,3) => (3,1) => (1,2) => (2,4) or 4 => 3 => 1 => 2 => 4 (33)
The second solution (32) with a set of optimal edges (4,2) (1,3) (2,1) (3,4) and with a minimum cost (distance) of 128. Optimal paths:
(1,3) => (3,4) => (4,2) => (2,1) or 1 => 3 => 4 => 2 => 1
(2,1) => (1,3) => (3,4) => (4,2) or 2 => 1 => 3 => 4 => 2
(3,4) => (4,2) => (2,1) => (1,3) or 3 => 4 => 2 => 1 => 3
(4,2) => (2,1) => (1,3) => (3,4) or 4 => 2 => 1 => 3 => 4 (34)
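Chaining a complete edge set into routes like (33) and (34) can be sketched as follows (a hypothetical helper, in the spirit of EdgesToTracks from section 4): starting from a chosen city, repeatedly follow the edge leaving the current city.

```octave
% Sketch: turn a complete set of n optimal edges into a route of
% n + 1 cities, starting from a given city.
function track = edges_to_track_sketch (edges, start_city)
  n = rows (edges);
  track = zeros (1, n + 1);
  track(1) = start_city;
  city = start_city;
  for s = 2:n + 1
    k = find (edges(:, 1) == city);   % the edge leaving the current city
    city = edges(k, 2);
    track(s) = city;
  endfor
endfunction
```

For edges = [1 2; 2 4; 3 1; 4 3] and start_city = 1 this produces the route 1 => 2 => 4 => 3 => 1, the first route of (33).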
Table 2-1 shows the optimal routes for problem (1), and Figure 2-2 shows the complete solution tree.
.

Table 2-1: Optimal Tracks for the Matrix (1)
.

Figure 2-2: Four Cities Tree
.
3. Notes and Conclusions
-
In the traveling salesman problem (1) of the previous section a symmetric matrix was used. Its elements are the distances between cities i and j, and the distance from city i to city j is clearly equal to the distance from city j to city i; the result is a symmetric matrix. Other interpretations of this matrix are possible. For example, let its elements be the fuel consumption required to get from city i to city j. In this case the fuel consumption for the trip from city j to city i may be different: the road from i to j may, for example, go uphill, while the way back from j to i goes downhill, so the fuel consumption is significantly lower. This results in an asymmetric matrix. Below I will use the example of an asymmetric matrix for the traveling salesman problem from [1]:

Let’s rewrite/represent this matrix (35) in a more convenient form. We will write the column numbers above the table, and the row numbers to the left of the matrix:

Step 0. The lower bound of the cost (distance) of the route is estimated for the set of all Hamiltonian contours (cycles). This estimate is obtained by reducing the matrix. The minimum values in each row are calculated. These minimum values are recorded in an additional column h_i_min, temporarily added to the matrix:

Now we subtract these minimum values from the corresponding rows of the matrix.

Let’s sum up these minimum values over the rows:

Now let’s determine the minimum values of the columns of the newly obtained matrix. The row h_j_min is temporarily added to the matrix:

Now we will subtract these minimum values from the corresponding columns of the matrix, and then calculate the penalties for the zero elements of the matrix, which will be needed in the next step:

Let’s sum up these minimum column values:

The lower bound is calculated as the sum of the minimum values in rows and columns:

Now let’s move on to the next step. We need to choose a promising edge that can be part of the optimal solution. Such an edge is selected from the zero elements of matrix (40) with the maximum penalty. We select edge (4,1) with a penalty of 22:

Now let’s consider a subset of solutions without edge (4,1). The lower bound of this subset is calculated as the sum of the lower bound of the previous step (Step 0) and the edge penalty (4,1):

Now let’s consider the subset of all solutions with edge (4,1). Delete the fourth row from the matrix to exclude choosing outgoing city 4 again in the next steps, and delete the first column to exclude arriving in city 1 again. It is also important to exclude a non-Hamiltonian contour (cycle) from the solution. At this step it’s the loop 4 => 1 => 4. For this simple case we set the reverse edge (1,4) to infinity (we assign the value Infinity to element (1,4) of the matrix). Next we reduce the newly obtained matrix:

Calculating the lower bound for edge (4,1):

Now we calculate the new penalties for the newly obtained reduced matrix:
We select the edge with the maximum penalty (1,6).
Now let’s consider a subset of solutions without edge (1,6). The lower bound of this subset is calculated as the sum of the lower bound of the previous step (the step associated with edge (4,1)) and the edge penalty (1,6):

Now let’s consider a subset of all solutions with edge (1,6). Delete the first row from the matrix, and also delete the sixth column. We also exclude a non-Hamiltonian contour (cycle) from the solution. In this step, it’s a loop: 4 => 1 => 6 => 4. Let’s set the reverse edge (6,4) to infinity (we assign the value Infinity to the element of the matrix (6,4)). Next we reduce the newly obtained matrix (the matrix remains unchanged):

Calculating the lower bound for edge (1,6):

We select the edge with the maximum penalty (6,3). Now let’s consider a subset of solutions without edge (6,3). The lower bound of this subset is calculated as the sum of the lower bound of the previous step (the step associated with edge (1,6)) and the edge penalty (6,3):

Now let’s consider a subset of all solutions with edge (6,3). Delete the sixth row from the matrix, and also delete the third column. We also exclude a non-Hamiltonian contour (cycle) from the solution. At this step, it’s a loop: 4 => 1 => 6 => 3 => 4. Let’s set the reverse edge (3,4) to infinity (we assign the value Infinity to the element of the matrix (3,4)). Next we reduce the newly obtained matrix:

Calculating the lower bound for edge (6,3):

Calculate the new penalties for the newly obtained reduced matrix:

We select the edge with the maximum penalty (3,2).
Now let’s consider a subset of solutions without edge (3,2). The lower bound of this subset is calculated as the sum of the lower bound of the previous step (the step associated with edge (6,3)) and the edge penalty (3,2):

Now let’s consider a subset of all solutions with edge (3,2). Delete the third row from the matrix, and also delete the second column. We also exclude a non-Hamiltonian contour (cycle) from the solution. At this step, it’s a loop: 4 => 1 => 6 => 3 => 2 => 4. Let’s set the reverse edge (2,4) to infinity (we assign the value Infinity to the element of the matrix (2,4)). Next we reduce the newly obtained matrix:

Calculating the lower bound for edge (3,2):

We select the edge with the maximum penalty (2,5).
Now let’s consider a subset of solutions without edge (2,5). The lower bound of this subset is calculated as the sum of the lower bound of the previous step (the step associated with edge (3,2)) and the edge penalty (2,5):
Now let’s consider a subset of all solutions with edge (2,5). Delete the second row from the matrix, and also delete the fifth column. Then only one element of the matrix remains.
Note
– In the penultimate step and the last step of determining the optimal edges of a Hamiltonian contour (cycle), it is not necessary to search for the reverse edge
.

Calculating the lower bound for the edge (2,5):

Now we select the last edge (5,4) from the matrix (59):

Now let’s return to the branch of the tree (44) without the edge (4,1) and analyze it further. Let’s set the element of the matrix (4,1) to Infinity, and then reduce the matrix:

We calculate penalties for zero elements of the matrix:

Now let’s choose the edge (6,1) with the maximum penalty.
Now let’s consider a subset of solutions without edge (6,1). The lower bound of this subset is calculated as the sum of the lower bound without edge (4,1) and edge penalty (6,1):

Now let’s consider a subset of all solutions with edge (6,1). Delete the sixth row from the matrix, as well as delete the first column. We also exclude a non-Hamiltonian contour (cycle) from the solution. At this step, it’s a loop: 6 => 1 => 6. Let’s set the reverse edge (1,6) to infinity (we assign the value Infinity to the element of the matrix (1,6)). Next we reduce the newly obtained matrix:

We calculate penalties for zero elements of the matrix:

Calculating the lower bound for edge (6,1):

The lower bound (67) is larger than the bound (61) obtained earlier, so the branch of the tree (67) need not be considered further.
Thus we obtain the following edges for the optimal solution: (4,1) (1,6) (6,3) (3,2) (2,5) (5,4) with minimal cost (minimal fuel) of 102; see (61).
Let’s build optimal paths using the resulting edges:
(1,6) (6,3) (3,2) (2,5) (5,4) (4,1): 1 => 6 => 3 => 2 => 5 => 4 => 1
(2,5) (5,4) (4,1) (1,6) (6,3) (3,2): 2 => 5 => 4 => 1 => 6 => 3 => 2
(3,2) (2,5) (5,4) (4,1) (1,6) (6,3): 3 => 2 => 5 => 4 => 1 => 6 => 3
(4,1) (1,6) (6,3) (3,2) (2,5) (5,4): 4 => 1 => 6 => 3 => 2 => 5 => 4
(5,4) (4,1) (1,6) (6,3) (3,2) (2,5): 5 => 4 => 1 => 6 => 3 => 2 => 5
(6,3) (3,2) (2,5) (5,4) (4,1) (1,6): 6 => 3 => 2 => 5 => 4 => 1 => 6 (68)
Table 3-1 shows the optimal routes for the problem (35), and Figure 3-1 shows the complete solution tree.
.
Table 3-1: Optimal Tracks for the Matrix (35)
.

Figure 3-1: Six Cities Tree
.
-
A direct search of all the options in the traveling salesman problem gives the complexity of the algorithm:

It can be expected that the branch and bound method will give a much better result for most instances of the problem. However, there is still a chance that in some cases the complexity will remain n!
.
Notes
– The branch and bound method is currently, apparently, the most effective way to obtain an exact solution to the traveling salesman problem
– At each step, the nodes of the tree must be sorted by the lower bound of the cost in order to determine the promising branch of the tree
– When creating working arrays for the branch and bound method, it is impossible to say in advance how many nodes, and therefore how much memory, the algorithm will need. This makes it difficult to reserve computer memory when implementing the method
.
-
For large n, it is recommended to use approximate methods for solving the traveling salesman problem
.
4. Octave GNU file TravellingSalesmanProblemBranchAndBound.m
The script solves the traveling salesman problem using the branch and bound method:
% Sort the Structure Array
% Input:
% inputStruct – input structure array
% Output parameters:
% sortedStruct – sorted structure array
function [sortedStruct] = SortStructureArray(inputStruct)
% Reduce Matrix
% Input:
% matrix – input matrix
% Output parameters:
% matrix – reduced matrix
% bound – bound value
function [matrix, bound] = ReduceMatrix(matrix)
% Max Penalty of the Zero Element of the Matrix
% Input:
% matrix – input matrix
% Output parameters:
% penalty – max penalty value
% row – row number
% column – column number
function [penalty, row, column] = MaxPenaltyMatrix(matrix)
% Search the Logical Position in Array
% Input:
% array – input array
% value – search value
% Output parameters:
% flag – true => value is found
% false => value is NOT found
% offset – offset in array
function [flag, offset] = SearchLogicalPosition(array, value)
% Generate the tracks from optimal edges
% Input:
% edges – input cells
% Output parameters:
% tracks – output cells
function [tracks] = EdgesToTracks(edges)
% Search the back path/edge
% Input:
% edges – input edges array
% row_real – row
% column_real – column
% cities_nmb – cities number
% Output parameters:
% row_real_back – back row
% column_real_back – back column
function [row_real_back, column_real_back] = SearchBackPath(edges, row_real, column_real, cities_nmb)
.
5. Download the TravellingSalesmanProblemBranchAndBound.m
You can download the files:
TravellingSalesmanProblemBranchAndBound.m
with the button:
.
6. Literature / References
[1] V. I. Mudrov, “The Travelling Salesman Problem”, Publishing House “Znanie”, Moscow, 1969
.