Overview of Sorts with C Source Code
1. Abstract
A median filter is a nonlinear filter used to discard measurements that deviate significantly from previous results. This filter is effective for suppressing interference, and its algorithm relies on data sorting. See the article “Median Filter and C Code Realization”.
2. Introduction
A common task in programming is to arrange some entries in memory in ascending or descending order. This task is called data sorting. In the classical formulation of the problem, there are records in computer memory, for example “Apple”, “Pear”, “Plum”, etc., each associated with a key, i.e. a number. The task is to arrange these records in ascending or descending order of their keys. I will simplify this task and assume that there is an array of numbers that needs to be ordered, and I will always order them in ascending order. To help myself, I will take, in my opinion, the best monograph on programming [1], Donald E. Knuth's “The Art of Computer Programming“. It contains about 25 descriptions of various sorts. I will look at a few of them here. For each of the algorithms I consider, the source code of a program in the C programming language is attached, so you can try them yourself on your computer and choose the “best” one for your needs.
For example, I will need efficient sorting in the implementation of the median filter. This filter plays an important role in digital signal processing to suppress interference.
What is good or bad sorting? Unfortunately, there is no clear answer here. This depends on the size of the data, the degree of ordering of the array, the features of the machine system commands (assembler), and the speed of the computer.
So, let’s move on to the most interesting sorts from my point of view.
3. Bubble Method
This method has no practical interest. Moreover, if someone uses this method, it indicates a lack of programming experience. On the other hand, most books start with a description of this inefficient algorithm. I will not be original either and will dwell on this method, but only so that you remember never to use this sorting anywhere. To my shame, I also once used this algorithm in a program of mine. It ended badly: my code ran unacceptably slowly, my task crashed the operating system, and I had to urgently change the algorithm.
Let there be an array of N numbers: A[0], A[1], A[2],…, A[j], A[j+1],…, A[N-1]. The program runs through the array and compares two neighboring elements, and if they are in the wrong order, swaps them. By repeating these passes through the array several times, we can sort the data.
Each pass can be described as: for j = 0…N-2:
A[j] > A[j+1] => swap A[j] and A[j+1]: A[0],…, A[j+1], A[j],…, A[N-1]
You can notice several properties of one such pass:

The maximum element pops up to the top of the array in one pass. Therefore, the next pass can be shortened: 1st pass: j=0…N-2; 2nd pass: j=0…N-3; 3rd pass: j=0…N-4; …

If the array is already sorted, no pair of numbers is swapped when passing through the data. This fact can serve as an indicator of array sorting.
As an example, consider an array that is completely disordered: 4, 3, 2, 1, that is, the data is placed in reverse order. In fact, in real conditions, we work with data whose disorder is unpredictable. There is a probability (small probability) that the array is already ordered, and vice versa, that the data is in the opposite order.
Note in Figure 3-1 that full sorting requires a maximum of N-1 passes, and in one pass the largest unsorted element pops up like a bubble to the top of the array. This effect is why the method is called bubble sort.
Figure 3-1: Bubble Sort for Array: 4,3,2,1 => 1,2,3,4
A few comments should be made:

If the array requires fewer passes than N-1, then the indicator of the end of the process is the absence of data exchanges within the array during the last pass. You can use this to stop the sorting process in time.

After each pass, the size of the array for further sorting is reduced by one, but there may be a situation when the maximum element is already at the top and the size of the array for the next pass may be reduced by more than one. This can be determined by storing the index of the last data exchange inside the array.
Using the last comments, you can speed up the program execution in cases where the input data is not placed in the opposite order.
Knuth in [1] gives an estimate of the execution time of this algorithm for his computer “MIX“:

The minimum sorting time, for the case when the array is already sorted, is proportional to (8N + 1)

The average sorting time, for a randomly shuffled array, is proportional to N²

The maximum time, for the case when the data is arranged in reverse order, is proportional to N²
If we evaluate this algorithm very roughly, the average program execution time is proportional to N².
This is an extremely bad result.
C Code:
/* Invalid Index */
#define INVALID_INDEX 0xFFFFu
/*
Bubble Sort Method
Input
sint16* ptrArray – pointer to the working array
uint16 arraySize – array length
Return
void – nothing
*/
void BubbleSortMethod(sint16* ptrArray, uint16 arraySize)
{
uint16 lastChangeIndex = arraySize - 1u;
uint16 currentChangeIndex;
uint16 j_index;
sint16 help;
do
{
currentChangeIndex = INVALID_INDEX;
for(j_index=0; j_index < lastChangeIndex; j_index++)
{
if( ptrArray[j_index] > ptrArray[j_index+1])
{/* Change the position */
help = ptrArray[j_index];
ptrArray[j_index] = ptrArray[j_index+1];
ptrArray[j_index+1] = help;
currentChangeIndex = j_index;
}
}
lastChangeIndex = currentChangeIndex;
} while(INVALID_INDEX != lastChangeIndex);
}
Conclusions:

Never use this algorithm in your applications!

There are some modifications of this algorithm, such as Shaker sort, but these modifications also do not lead to good results
4. Insertion Sort Method
The next method is insertion sort. In the first step, we assume that the first element of the array A[0] is already sorted. Next, we take A[1] and sort it together with A[0]: if A[0] > A[1], we swap them, and the elements A[0] and A[1] are sorted again. Now we take A[2] and find a place for it among the already sorted elements, and so on. Let the first i+1 elements of the array, B[0],…, B[i], already be sorted, then take the next number A[i+1]:

if (A[i+1] > B[i]) then we get again an intermediate sorted array B[0],…, B[i], A[i+1] and can go to the next element A[i+2]

if (B[i] > A[i+1]) then we rewrite the number B[i] to position i+1, and then compare B[i-1] and A[i+1], and so on until we find a suitable place for A[i+1]. Then go to the element A[i+2]
It is estimated [1] that this method is at least twice as good as the previous bubble method. The main drawback of the method is to rewrite the data in memory to free up space for inserting the next sorted element. However, this method is often used as part of other sorting methods.
C Code:
/*
Insertion Sort Method
Input
sint16* ptrArray – pointer to the working array
uint16 arraySize – array length
Return
void — nothing
*/
void InsertionSortMethod(sint16* ptrArray, uint16 arraySize)
{
uint8 exit_flag;
sint16 help_value;
uint16 i_index;
uint16 j_index = 1u;
do
{/* Main Loop: j = 1,…N-1 */
i_index = j_index - 1u;
help_value = ptrArray[j_index];
exit_flag = FALSE;
do
{/* Loop: i = j-1, j-2,…, 0 */
if(help_value >= ptrArray[i_index])
{/* Insert the A[j] in the suitable position */
ptrArray[i_index+1] = help_value;
exit_flag = TRUE;
}
else
{/* Shift the A[i] to A[i+1] */
ptrArray[i_index+1] = ptrArray[i_index];
if(i_index == 0)
{/* First array element */
ptrArray[i_index] = help_value;
exit_flag = TRUE;
}
else
{/* i = j-1,…, 0 */
i_index--;
}
}
} while(FALSE == exit_flag);
j_index++;
} while(j_index < arraySize);
}
4.1 Insertion Sort Method using list array
One of the disadvantages of the insertion method is the need to rewrite data in the array. If you define an additional array holding a list of the elements in ascending order, then there is no need to move the elements of the original array, i.e. the original array is not changed at all.
For example, there is an array of data:
sint16 var_array[4] = {4,2,3,1};
Let’s define another list array that is one larger than the original data array:
uint16 list_array[4+1];
We reserve the value 0xFFFF = 65535 as the end-of-list marker. The last item of the list array is the head of the list. Then the sorting program should build the following list:
list_array[4+1]={0xFFFF, 2, 0, 1, 3};
In fact, starting from the last element and continuing through the list:
list_array[4]=3 => var_array[3] = 1
list_array[3]=1 => var_array[1] = 2
list_array[1]=2 => var_array[2] = 3
list_array[2]=0 => var_array[0] = 4
list_array[0]=0xFFFF => stop list
Algorithm description:

list_array[N] = N-1; and list_array[N-1] = 0xFFFF;
next, the loop for j = N-2, N-3,…, 0 
p_position = list_array[N];
q_position = N;
VALUE = var_array[ j ]; 
IF VALUE < var_array[p_position] THEN insert: list_array[q_position] = j; list_array[j] = p_position; and take the next j

IF VALUE >= var_array[p_position] THEN move p_position and q_position forward:
q_position = p_position; p_position = list_array[q_position];
Then go back to the comparison (if p_position reaches 0xFFFF, insert j at the end of the list)
C Code:
/*
Insertion Sort in the List
Input
sint16* ptrArray – pointer to the working array of size arraySize
uint16* ptrLink – pointer to the list array of size arraySize+1
uint16 arraySize – array length
Return
void – nothing
*/
#define LIST_END_VALUE 0xFFFFu /* Invalid Index */
void InsertionLinkSortMethod(sint16* ptrArray, uint16* ptrLink, uint16 arraySize)
{
sint16 j_index;
uint16 p_position;
uint16 q_position;
sint16 help_value;
uint8 exit_flag;
ptrLink[arraySize - 1u] = LIST_END_VALUE;
ptrLink[arraySize] = arraySize - 1u;
j_index = arraySize - 2;
do
{/* Main Loop: j = N-2, N-3,…, 0 */
help_value = ptrArray[j_index];
p_position = ptrLink[arraySize];
q_position = arraySize;
exit_flag = FALSE;
do
{/* Search the list position */
if(help_value < ptrArray[p_position])
{/* Suitable position in the list */
exit_flag = TRUE;
}
else
{/* Take the next element */
q_position = p_position;
p_position = ptrLink[q_position];
if(LIST_END_VALUE == p_position)
{/* Last element of the list */
exit_flag = TRUE;
}
}
if(FALSE != exit_flag)
{/* Insert the j element in the list */
ptrLink[q_position] = j_index;
ptrLink[j_index] = p_position;
}
} while (FALSE == exit_flag);
j_index--;
} while(j_index >= 0);
}
5. Shell Sort
Developed by Donald L. Shell. This method moves records in large leaps. In the previous methods, data moved by only one position at a time, which made the sorting process heavy and slow. Let there be an array of data: A[0], A[1], A[2],…, A[j], A[j+1],…, A[N-1]. Let's choose a sequence of numbers: h[0]=1, h[1], h[2],…, h[t-1].
Now we will sort the following subarrays:
A[0], A[0+h[t-1]], A[0+2·h[t-1]],…, A[0+k·h[t-1]], A[0+(k+1)·h[t-1]],…
A[1], A[1+h[t-1]], A[1+2·h[t-1]],…, A[1+k·h[t-1]], A[1+(k+1)·h[t-1]],…
…
A[m], A[m+h[t-1]], A[m+2·h[t-1]],…, A[m+k·h[t-1]], A[m+(k+1)·h[t-1]],…
…
Then take the next offset h[t-2] and sort the resulting subarrays:
A[0], A[0+h[t-2]], A[0+2·h[t-2]],…, A[0+k·h[t-2]], A[0+(k+1)·h[t-2]],…
A[1], A[1+h[t-2]], A[1+2·h[t-2]],…, A[1+k·h[t-2]], A[1+(k+1)·h[t-2]],…
…
A[m], A[m+h[t-2]], A[m+2·h[t-2]],…, A[m+k·h[t-2]], A[m+(k+1)·h[t-2]],…
So the last step is to use h[0] = 1 and one subarray for the final sorting:
A[0], A[1], A[2],…, A[k], A[k+1],…, A[N-1]
At the same time, we will sort all these subarrays using the insertion method. See the previous paragraph. Overwrites with large jumps are achieved using h[s] offsets. Selecting these offsets is an important task of the Shellsort method. Here are some good sequences for this:

Pratt Sequence 2^p · 3^q, A003586: 1, 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 27, 32, 36, 48, 54, 64, 72, 81, 96, 108, 128, 144, 162, 192, 216, 243, 256, 288, 324, 384, 432, 486, 512, 576, 648, 729, 768, 864, 972, 1024, 1152, 1296, 1458, 1536, 1728, 1944, 2048, 2187, 2304, 2592, 2916, 3072, 3456, 3888, …
The execution time of the algorithm with this sequence will be proportional to N·(log N)²

Sedgewick Sequence A033622: 1, 5, 19, 41, 109, 209, 505, 929, 2161, 3905, 8929, 16001, 36289, 64769, 146305, 260609, 587521, …
The execution time of the algorithm with this sequence will be proportional to N^(4/3)

Ciura Sequence A102549: 1, 4, 10, 23, 57, 132, 301, 701, 1750

Knuth Sequence h[s] = 3·h[s-1] + 1: 1, 4, 13, 40, 121, 364,…

and so on
Notes:

Obviously, the maximum offset value must be h_max < N. This value is often taken h_max ≤ N/3

Demo SW uses Pratt Sequence. I tested the program with an array of 37 elements. Therefore h_max=12 ≤ N/3=37/3≈12
C Code:
/*
Shellsort Method
Input
sint16* ptrArray – pointer to the working array
uint16 arraySize – array length
uint16 gapSize – number of gaps used from gap_A003586
Return
void – nothing
*/
const uint16 gap_A003586[8] = {1, 2, 3, 4, 6, 8, 9, 12}; /* Pratt sequence */
void ShellSortMethod(sint16* ptrArray, uint16 arraySize, uint16 gapSize)
{
uint8 exit_flag;
sint16 help_value;
uint16 i_index;
uint16 j_index;
uint16 gap_index = gapSize;
uint16 current_gap;
do
{
current_gap = gap_A003586[gap_index - 1u];
j_index = current_gap;
do
{/* Main Loop: j = gap,…N-1 */
i_index = j_index - current_gap;
help_value = ptrArray[j_index];
exit_flag = FALSE;
do
{/* Loop: i = j-gap, j-2·gap,…, 0 */
if(help_value >= ptrArray[i_index])
{/* Insert the A[j] in the suitable position */
ptrArray[i_index+current_gap] = help_value;
exit_flag = TRUE;
}
else
{/* Shift the A[i] to A[i+gap] */
ptrArray[i_index+current_gap] = ptrArray[i_index];
if(i_index < current_gap)
{/* First array element */
ptrArray[i_index] = help_value;
exit_flag = TRUE;
}
else
{/* i = j-gap,…, 0 */
i_index -= current_gap;
}
}
} while(FALSE == exit_flag);
j_index++;
} while(j_index < arraySize);
gap_index -= 1u;
} while(0 != gap_index);
}
6. Quick Sort
This method was developed by C. A. R. Hoare. A reference element is selected from the array
A[0], A[1], A[2],…, A[j], A[j+1],…, A[N-1], for example A[0]. This reference element is then moved to its proper place in the array, so that the elements to its left are less than the reference and the elements to its right are greater. The reference element now splits the array into two subarrays, to which the same algorithm can be applied. This continues until the data is completely sorted.
To illustrate the algorithm, take the following array of ten elements A[0],…, A[9]:

Take RefValue=A[0] = 5 as the reference element, and enter two indexes i=1 (first element after reference) and j=9 (last position)

Now we compare RefValue and A[i], increasing i++ while
RefValue > A[i]
As soon as this condition fails, we stop increasing i and proceed to the next step.
At the same time, we compare RefValue and A[j], decreasing j-- while
RefValue < A[j]
As soon as this condition fails, we stop decreasing j and proceed to the next step.
If i < j then A[i] <=> A[j]. In this case, the two elements are swapped, and then we go back to step 2.
If i >= j then RefValue <=> A[j]. In this case, the reference element and the j-th element are swapped, and we proceed to the next step.
Now we get two subarrays. The smaller subarray is stored in special memory, which we will call the stack, and the larger subarray is processed again, starting from step 2. This algorithm is repeated until the entire array is sorted.
Recursive procedures are usually used to implement this algorithm. On the other hand, embedded (microprocessor) systems are reluctant to use recursion, because it is generally considered detrimental to functional safety. I believe that recursive programs can be implemented accurately with strict stack control in embedded SW. However, to implement this algorithm in the C language, I used the implementation from [1] without recursion. Knuth added several improvements to this algorithm:

C. A. R. Hoare showed that in some cases the algorithm does not work well if we take the first element of the array/subarray as the reference element. There are several methods to solve this problem; here is one of them: the first, middle, and last elements of the (sub)array are taken, and the median of these three numbers is used as the reference value.

For small subarrays of size M or smaller, the insertion sort method is used, which reduces the program execution time. Knuth in [1] showed that for his computer “MIX“ the optimal value is M=9. I use the same value in my program.

The execution time of the algorithm is proportional to N·log N.
But in some cases this time may increase to N².
An important parameter for quick sorting is the choice of the auxiliary stack memory size. Knuth in [1] derived a bound for this, roughly:
stackSize ≈ ⌈log2((N+1)/(M+2))⌉
For example: an array of size N=1000 elements and M=9 needs a stack of size ⌈log2(1001/11)⌉ = 7.
In this case, we must simultaneously remember up to 7 subarrays for later processing.
Note: the M value in the C code is named MIN_QUICK_SORT_SIZE and the stackSize value is named INDEX_QUICK_STACK_SIZE
C Code:
/*
Quick Sort Method and using Insertion Method too
Input
sint16* ptrArray – pointer to the working array
uint16 arraySize – array length
Return
void – nothing
*/
void QuickSortMethod(sint16* ptrArray, uint16 arraySize)
{
uint16 i_index;
uint16 j_index;
uint16 begin_current;
uint16 end_current;
sint16 help_value;
sint16 help_temp;
uint16 beginArraySize;
uint16 endArraySize;
uint8 exit_flag;
/* Stack Pointer Init. */
helpQuickIndex = INDEX_QUICK_STACK_SIZE - 1u;
/* Index Initialization */
i_index = 1;
j_index = arraySize - 1;
begin_current = 0;
end_current = arraySize - 1;
/* Reference Key */
helpQuickMedian(ptrArray, begin_current, end_current);
help_value = ptrArray[begin_current];
/* Exit Flag inactive */
exit_flag = FALSE;
do
{/* Subarray Processing */
do
{/* i_index */
if(help_value > ptrArray[i_index])
{
i_index += 1;
if(arraySize <= i_index)
{/* i_index is overflow */
i_index = arraySize – 1;
exit_flag = TRUE;
}
}
else
{
exit_flag = TRUE;
}
} while(FALSE == exit_flag);
/* Exit Flag inactive */
exit_flag = FALSE;
do
{/* j_index */
if(help_value < ptrArray[j_index])
{
j_index -= 1;
}
else
{
exit_flag = TRUE;
}
} while(FALSE == exit_flag);
exit_flag = FALSE;
if(j_index > i_index)
{/* Change the ptrArray[i_index] <=> ptrArray[j_index] */
help_temp = ptrArray[i_index];
ptrArray[i_index] = ptrArray[j_index];
ptrArray[j_index] = help_temp;
i_index += 1;
if(arraySize <= i_index){i_index = arraySize - 1;}
j_index -= 1;
}
else
{/* Subarray is ready */
/* Change the ptrArray[begin_current] <=> ptrArray[j_index] */
help_temp = ptrArray[begin_current];
ptrArray[begin_current] = ptrArray[j_index];
ptrArray[j_index] = help_temp;
/* Subarrays Analysis */
beginArraySize = j_index – begin_current;
endArraySize = end_current – j_index;
if((beginArraySize > MIN_QUICK_SORT_SIZE) &&
(endArraySize > MIN_QUICK_SORT_SIZE))
{
if(endArraySize >= beginArraySize)
{
/* Save the subarray on the stack for the future processing */
helpQuickStack[helpQuickIndex].low_index = j_index + 1;
helpQuickStack[helpQuickIndex].high_index = end_current;
helpQuickIndex -= 1;
/* Initialize the next subarray */
end_current = j_index – 1;
}
else
{
/* Save the subarray on the stack for the future processing */
helpQuickStack[helpQuickIndex].low_index = begin_current;
helpQuickStack[helpQuickIndex].high_index = j_index – 1;
helpQuickIndex -= 1;
/* Initialize the next subarray */
begin_current = j_index + 1;
}
/* Initialize the index */
i_index = begin_current + 1;
j_index = end_current;
/* Reference Key */
helpQuickMedian(ptrArray, begin_current, end_current);
help_value = ptrArray[begin_current];
}
if((beginArraySize > MIN_QUICK_SORT_SIZE) &&
(endArraySize <= MIN_QUICK_SORT_SIZE))
{
/* Initialize the next subarray */
end_current = j_index – 1;
/* Initialize the index */
i_index = begin_current + 1;
j_index = end_current;
/* Reference Key */
helpQuickMedian(ptrArray, begin_current, end_current);
help_value = ptrArray[begin_current];
}
if((beginArraySize <= MIN_QUICK_SORT_SIZE) &&
(endArraySize > MIN_QUICK_SORT_SIZE))
{
/* Initialize the next subarray */
begin_current = j_index + 1;
/* Initialize the index */
i_index = begin_current + 1;
j_index = end_current;
/* Reference Key */
helpQuickMedian(ptrArray, begin_current, end_current);
help_value = ptrArray[begin_current];
}
if((beginArraySize <= MIN_QUICK_SORT_SIZE) &&
(endArraySize <= MIN_QUICK_SORT_SIZE))
{
/* Check the stack */
if((INDEX_QUICK_STACK_SIZE - 1u) != helpQuickIndex)
{/* Read from stack */
helpQuickIndex += 1;
begin_current = helpQuickStack[helpQuickIndex].low_index;
end_current = helpQuickStack[helpQuickIndex].high_index;
/* Initialize the index */
i_index = begin_current + 1;
j_index = end_current;
/* Reference Key */
helpQuickMedian(ptrArray, begin_current, end_current);
help_value = ptrArray[begin_current];
}
else
{/* Stack is empty */
/* Sort is ready */
exit_flag = TRUE;
}
}
}
} while (FALSE == exit_flag);
helpQuickInsertionPart(ptrArray, arraySize);
}
Notes:

helpQuickMedian() – this function calculates the new reference element as the median value

helpQuickInsertionPart() – this function sorts all subarrays of size ≤ M with the Insertion Method
7. Heap Sort
This method builds a binary tree inside the array in such a way that the maximum value pops up at the top of the tree. See Figure 7-1 for an array with 15 elements: A[1], A[2],…, A[15]. The circles contain the indexes of the array, i.e.
A[1] has two children A[2], A[3]
A[2] has two children A[4], A[5]
A[3] has two children A[6], A[7]
…
A[k] has two children A[2k], A[2k+1]
The algorithm works to arrange the data: A[k] ≥ A[2k] and A[k] ≥ A[2k+1]. Obviously, the maximum element will occupy the top of the tree, and then the maximum element will be moved to the end of the array. Next, we rebuild this binary tree again so that the next maximum value of the remaining elements is again at the top. This value will be moved to the end of the array to the penultimate place, and so on until the array is completely sorted.
Figure 7-1: Heap Sort: Binary Heap / Tree
Algorithm description. There is an array A[0], A[1],…, A[arraySize-1] to sort:

Step 1. Initialization:
begin_current = arraySize/2;
end_current = arraySize;
help_value = A[begin_current-1];

Step 2. Set the indexes:
i_index = begin_current;
j_index = 2*i_index;

Step 3. Choose the maximal child:
if j_index < end_current then compare the two children 2i=j and 2i+1=j+1: if A[j_index] > A[j_index-1] then j_index = j_index+1; go to the next step
if j_index > end_current then go to step 6
if j_index == end_current then go to the next step

Step 4. Compare with the dragged value:
if help_value < A[j_index-1] then go to the next step
if help_value >= A[j_index-1] then go to step 6

Step 5. Drag the child up:
A[i_index-1] = A[j_index-1];
i_index = j_index;
j_index = 2*i_index;
go to step 3

Step 6. Store the dragged value:
A[i_index-1] = help_value;

Step 7. Outer control:
if begin_current > 1 then begin_current = begin_current - 1; help_value = A[begin_current-1]; go to step 2
if begin_current == 1 then help_value = A[end_current-1]; A[end_current-1] = A[0]; end_current = end_current - 1;
then if end_current > 1 go to step 2
if end_current == 1 then A[0] = help_value;
Stop: the sort algorithm is ready!
Notes:

The algorithm uses the property that any index k > N/2 has no children, since 2k > N. This is why the algorithm starts with the values begin_current = N/2 and end_current = N

The C language uses array indexing 0…N-1, while the algorithm uses indexing 1…N. For this reason, index-1 appears when accessing data, for example A[i_index-1]. Here you can optimize the proposed algorithm

The algorithm execution time is proportional to N·log N.
On the other hand, in [1] it is shown that this method is inferior to the Insertion and Shell methods for small arrays and to Quick Sort for large arrays. However, Quick Sort can degrade from N·log N to N², while Heap Sort always remains N·log N.
C Code:
/*
Heap Sort Method
Input
sint16* ptrArray – pointer to the working array
uint16 arraySize – array length
Return
void – nothing
*/
void HeapSortMethod(sint16* ptrArray, uint16 arraySize)
{
uint16 i_index;
uint16 j_index;
uint16 begin_current = arraySize/2;
uint16 end_current = arraySize;
sint16 help_value = ptrArray[begin_current - 1];
uint8 exit_flag = FALSE; /* main loop */
uint8 exit_flag1 = FALSE; /* internal loop */
do
{/* Main Loop */
exit_flag1 = FALSE;
i_index = begin_current;
j_index = 2*i_index;
do
{/* Internal Loop */
if(j_index < end_current)
{/* Search the maximal child */
if(ptrArray[j_index] > ptrArray[j_index - 1])
{/* The 2i+1 child is maximal */
j_index += 1;
}
}
else
{
if(j_index > end_current)
{/* Exit from internal loop */
exit_flag1 = TRUE;
}
}
if(FALSE == exit_flag1)
{
if(help_value < ptrArray[j_index - 1])
{/* Dragging for the index: k, 2k, 4k, … */
ptrArray[i_index - 1] = ptrArray[j_index - 1];
i_index = j_index;
j_index = 2*i_index;
}
else
{/* Exit from internal loop */
exit_flag1 = TRUE;
}
}
} while(FALSE == exit_flag1);
ptrArray[i_index - 1] = help_value;
if(1 < begin_current)
{/* Prepare for the next dragging */
begin_current -= 1;
help_value = ptrArray[begin_current - 1];
}
else
{/* Heap value in the end of the array */
help_value = ptrArray[end_current - 1];
ptrArray[end_current - 1] = ptrArray[0];
end_current -= 1;
if(1 == end_current)
{/* End of the algorithm and exit from main loop */
ptrArray[0] = help_value;
exit_flag = TRUE;
}
}
} while(FALSE == exit_flag);
}
8. Merge Sort
I want to mention merge sort. This sorting is based on the principle of merging two (or several) already sorted arrays. For example, there are two sorted arrays:
A[4] = 1 3 5 7
B[4] = 2 4 6 8
To combine the arrays, the smaller of the two current front elements is selected first and sent to the output buffer, then the next one, and so on, until the output buffer contains: 1 2 3 4 5 6 7 8
Notes:

This method requires additional memory for the output buffer, i.e. double memory, which is somewhat wasteful

Knuth [1] estimates that the method is inferior in speed to quick sort
Therefore, I do not provide the source code of the sort method here.
9. Resume
As I noted above, there is no single best sorting algorithm. My suggestion is to take the various sorting programs from the site, try them out, and determine what is best for your application. Measure the sorting time for arrays with different degrees of disorder, including an already ordered array.
File:
sorts.c/h
Functions:
void BubbleSortMethod(sint16*, uint16); – Bubble Sort
void InsertionSortMethod(sint16*, uint16); – Insertion Sort
void InsertionLinkSortMethod(sint16*, uint16*, uint16); – Insertion Sort using Link
void ShellSortMethod(sint16*, uint16, uint16); – Shell Sort
void QuickSortMethod(sint16*, uint16); – Quick Sort
void HeapSortMethod(sint16*, uint16); – Heap Sort
Notes:

SW does not use recursive calls, because embedded software generally tries to avoid them.

Try different sequences for Shellsort and choose a suitable sequence length.
10. Literature / References
[1] Donald E. Knuth, “The Art of Computer Programming“, Volume 3: Sorting and Searching, ISBN 0201038013, Addison-Wesley, 1973