Guide to Competitive Programming

The basic problem in sorting is as follows: given an array that contains n elements, sort the elements in increasing order. For example, the accompanying figure shows an array before and after sorting.

In this section we will go through some fundamental sorting algorithms and examine their properties. It is easy to design an O(n²) time sorting algorithm, but there are also more efficient algorithms. After discussing the theory of sorting, we will focus on using sorting in practice in C++.

Sorting Algorithms

Bubble Sort:

Bubble sort is a simple sorting algorithm that works in O(n²) time. The algorithm consists of n rounds, and on each round, it iterates through the elements of the array.

Whenever two consecutive elements are found that are in the wrong order, the algorithm swaps them. The algorithm can be implemented as follows:

for (int i = 0; i < n; i++) {
    for (int j = 0; j < n-1; j++) {
        if (array[j] > array[j+1]) {
            swap(array[j],array[j+1]);
        }
    }
}

After the first round of bubble sort, the largest element will be in the correct position, and more generally, after k rounds, the k largest elements will be in the correct positions. Thus, after n rounds, the whole array will be sorted.

For example, the accompanying figure shows the first round of swaps when bubble sort is used to sort an array.

Bubble sort is an example of a sorting algorithm that always swaps consecutive elements in the array. It turns out that the time complexity of such an algorithm is always at least O(n²), because in the worst case, O(n²) swaps are required for sorting the array.

[Figure: Bubble sort]

Inversions A useful concept when analyzing sorting algorithms is an inversion: a pair of array indices (a, b) such that a < b and array[a] > array[b], i.e., the elements are in the wrong order. For example, the array in the figure has three inversions: (3, 4), (3, 5), and (6, 7).

The number of inversions indicates how much work is needed to sort the array.
An array is completely sorted when there are no inversions. On the other hand, if the array elements are in the reverse order, the number of inversions is 1 + 2 + · · · + (n − 1) = n(n − 1)/2 = O(n²), which is the largest possible.

Swapping a pair of consecutive elements that are in the wrong order removes exactly one inversion from the array.

Hence, if a sorting algorithm can only swap consecutive elements, each swap removes at most one inversion, and the time complexity of the algorithm is at least O(n²).
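
For illustration, the number of inversions can be counted directly with two nested loops (a minimal sketch; the example array is made up and chosen to contain exactly three inversions):

#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> array = {1,2,2,6,3,5,9,8}; // made-up example
    int n = array.size();
    // count pairs (a,b) with a < b and array[a] > array[b]
    long long inversions = 0;
    for (int a = 0; a < n; a++) {
        for (int b = a+1; b < n; b++) {
            if (array[a] > array[b]) inversions++;
        }
    }
    cout << inversions << "\n"; // prints 3 for this array
}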

Merge Sort:

If we want to create an efficient sorting algorithm, we have to be able to reorder elements that are in different parts of the array. There are several such sorting algorithms that work in O(n log n) time. One of them is merge sort, which is based on recursion. Merge sort sorts a subarray array[a . . . b] as follows:

1. If a = b, do not do anything, because a subarray that only contains one element is already sorted.
2. Calculate the position of the middle element: k = (a + b)/2.
3. Recursively sort the subarray array[a . . . k].
4. Recursively sort the subarray array[k + 1 . . . b].
5. Merge the sorted subarrays array[a . . . k] and array[k + 1 . . . b] into a sorted subarray array[a . . . b].

For example, the accompanying figure shows how merge sort sorts an array of eight elements. First, the algorithm divides the array into two subarrays of four elements. Then, it sorts these subarrays recursively by calling itself. Finally, it merges the sorted subarrays into a sorted array of eight elements.

Merge sort is an efficient algorithm, because it halves the size of the subarray at each step. Then, merging the sorted subarrays is possible in linear time, because they are already sorted. Since there are O(log n) recursive levels, and processing each level takes a total of O(n) time, the algorithm works in O(n log n) time.
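
For illustration, the recursion described above could be implemented roughly as follows (a minimal sketch; the function name merge_sort and the use of a temporary buffer are implementation choices, not prescribed by the text):

#include <bits/stdc++.h>
using namespace std;

// sorts the subarray array[a..b] (inclusive)
void merge_sort(vector<int>& array, int a, int b) {
    if (a >= b) return;               // one element: already sorted
    int k = (a+b)/2;                  // middle position
    merge_sort(array, a, k);          // sort the left half
    merge_sort(array, k+1, b);        // sort the right half
    // merge the two sorted halves into a temporary buffer
    vector<int> temp;
    int i = a, j = k+1;
    while (i <= k && j <= b) {
        if (array[i] <= array[j]) temp.push_back(array[i++]);
        else temp.push_back(array[j++]);
    }
    while (i <= k) temp.push_back(array[i++]);
    while (j <= b) temp.push_back(array[j++]);
    // copy the merged result back to array[a..b]
    for (int t = 0; t < (int)temp.size(); t++) array[a+t] = temp[t];
}

int main() {
    vector<int> v = {5,1,3,8,2,9,4,7}; // made-up example input
    merge_sort(v, 0, (int)v.size()-1);
    for (int x : v) cout << x << " ";  // prints 1 2 3 4 5 7 8 9
}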

[Figure: Merge sort]

Sorting Lower Bound:

Is it possible to sort an array faster than in O(n log n) time? It turns out that this is not possible when we restrict ourselves to sorting algorithms that are based on comparing array elements.

The lower bound for the time complexity can be proved by considering sorting as a process where each comparison of two elements gives more information about the contents of the array. Figure 4.5 illustrates the tree created in this process.

Here “x < y?” means that some elements x and y are compared. If x < y, the process continues to the left, and otherwise to the right. The results of the process are the possible ways to sort the array, a total of n! ways. For this reason, the height of the tree must be at least

log2(n!) = log2(1) + log2(2) + · · · + log2(n).

We get a lower bound for this sum by choosing the last n/2 elements and changing the value of each element to log2(n/2). This yields an estimate

log2(n!) ≥ (n/2) · log2(n/2),

so the height of the tree and the worst-case number of steps in a sorting algorithm is Ω(n log n).
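
For example, for n = 8 we have log2(8!) = log2(40320) ≈ 15.3, while the estimate gives (8/2) · log2(8/2) = 4 · 2 = 8; the estimate is smaller, as a lower bound should be, and both quantities grow proportionally to n log n.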

[Figure: Sorting lower bound]

Counting Sort:

The lower bound Ω(n log n) does not apply to algorithms that do not compare array elements but use some other information. An example of such an algorithm is counting sort, which sorts an array in O(n) time assuming that every element in the array is an integer between 0 and c, where c = O(n).

The algorithm creates a bookkeeping array, whose indices are elements of the original array. The algorithm iterates through the original array and calculates how many times each element appears in the array. As an example, Fig. 4.6 shows an array and the corresponding bookkeeping array. For example, the value at position 3 is 2, because the value 3 appears 2 times in the original array.

The construction of the bookkeeping array takes O(n) time. After this, the sorted array can be created in O(n) time, because the number of occurrences of each element can be retrieved from the bookkeeping array. Thus, the total time complexity of counting sort is O(n).

Counting sort is a very efficient algorithm but it can only be used when the constant c is small enough, so that the array elements can be used as indices in the bookkeeping array.
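
The idea could be implemented roughly as follows (a minimal sketch; the names array, bookkeeping, and c follow the description above, and the sample values are made up):

#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> array = {3,6,9,9,3,5,9,2}; // made-up input, values in 0..c
    int c = 9;                             // assumed upper bound on the values
    // bookkeeping[x] = number of times the value x appears in the array
    vector<int> bookkeeping(c+1, 0);
    for (int x : array) bookkeeping[x]++;
    // rebuild the array in sorted order from the counts
    int pos = 0;
    for (int x = 0; x <= c; x++) {
        for (int k = 0; k < bookkeeping[x]; k++) array[pos++] = x;
    }
    for (int x : array) cout << x << " "; // prints 2 3 3 5 6 9 9 9
}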

Sorting in Practice:

In practice, it is almost never a good idea to implement a home-made sorting algorithm, because all modern programming languages have good sorting algorithms in their standard libraries. There are many reasons to use a library function: it is certainly correct and efficient, and also easy to use.

In C++, the function sort efficiently sorts the contents of a data structure. (The C++11 standard requires that the sort function works in O(n log n) time; the exact implementation depends on the compiler.) For example, the following code sorts the elements of a vector in increasing order:

vector<int> v = {4,2,5,3,5,8,3};
sort(v.begin(),v.end());

After the sorting, the contents of the vector will be [2, 3, 3, 4, 5, 5, 8]. The default sorting order is increasing, but a reverse order is possible as follows:

sort(v.rbegin(),v.rend());

An ordinary array can be sorted as follows:

int n = 7; // array size
int a[] = {4,2,5,3,5,8,3};
sort(a,a+n);

Then, the following code sorts the string s:

string s = "monkey";
sort(s.begin(), s.end());

Sorting a string means that the characters of the string are sorted. For example, the string “monkey” becomes “ekmnoy”.

Comparison Operators The sort function requires that a comparison operator is defined for the data type of the elements to be sorted. When sorting, this operator will be used whenever it is necessary to find out the order of two elements.

Most C++ data types have a built-in comparison operator, and elements of those types can be sorted automatically. Numbers are sorted according to their values, and strings are sorted in alphabetical order. Pairs are sorted primarily according to their first elements and secondarily according to their second elements:

vector<pair<int,int>> v;
v.push_back({1,5});
v.push_back({2,3});
v.push_back({1,2});
sort(v.begin(), v.end());
// result: [(1,2),(1,5),(2,3)]

In a similar way, tuples are sorted primarily by the first element, secondarily by the second element, etc.:

vector<tuple<int,int,int>> v;
v.push_back({2,1,4});
v.push_back({1,5,3});
v.push_back({2,1,3});
sort(v.begin(), v.end());
// result: [(1,5,3),(2,1,3),(2,1,4)]

Note that in some older compilers, the function make_tuple has to be used to create a tuple instead of braces (for example, make_tuple(2,1,4) instead of {2,1,4}).

User-defined structs do not have a comparison operator automatically. The operator should be defined inside the struct as a function operator<, whose parameter is another element of the same type. The operator should return true if the element is smaller than the parameter, and false otherwise.

For example, the following struct point contains the x and y coordinates of a point. The comparison operator is defined so that the points are sorted primarily by the x coordinate and secondarily by the y coordinate.

struct point {
    int x, y;
    bool operator<(const point &p) {
        if (x == p.x) return y < p.y;
        else return x < p.x;
    }
};
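
For example, a vector of such points could then be sorted directly with the sort function (a short usage sketch with made-up coordinates):

vector<point> points = {{2,5},{1,3},{2,1}};
sort(points.begin(), points.end());
// result: (1,3), (2,1), (2,5)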

Comparison Functions It is also possible to give an external comparison function to the sort function as a callback function. For example, the following comparison function comp sorts strings primarily by length and secondarily by alphabetical order:

bool comp(string a, string b) {
    if (a.size() == b.size()) return a < b;
    else return a.size() < b.size();
}

Now a vector of strings can be sorted as follows:

sort(v.begin(), v.end(), comp);
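
For example, with some made-up strings, the result is ordered first by length and then alphabetically:

vector<string> v = {"banana","apple","fig","kiwi"};
sort(v.begin(), v.end(), comp);
// result: [fig, kiwi, apple, banana]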