## Review

| Name | QuickSort |
| --- | --- |
| Best case | O(n log n) |
| Average case | O(n log n) |
| Worst case | O(n²) |
| Stable | No |
| In place | Yes |
| Adaptive | No |
| Difficulty | Medium |
| Method | Divide-and-conquer |

## Quicksort, simple but effective

Quicksort, just like merge sort, applies the divide-and-conquer paradigm. The idea is to select a pivot element: each element before the pivot must have a value that is less than or equal to the pivot's value; each element after the pivot must have a value that is greater than the pivot's value. So we need a specific function that rearranges the elements of the array into this order.

The partition function is here to help us. This is the pseudocode:

```
partition(A, p, r) {
    x = A[r]
    i = p - 1
    for j = p to r - 1
        if A[j] <= x
            i = i + 1
            swap A[i] with A[j]
    swap A[i + 1] with A[r]
    return i + 1
}
```

This is the **Lomuto partition scheme** algorithm. It takes the last element as the default pivot and cycles through the whole list (Θ(n)) using two indices, `i` and `j`. The value of `j` is increased by 1 on each iteration, while `i` advances each time the current value `A[j]` is less than or equal to the pivot's value, and the two values are swapped. At the end of the function we move the pivot into its final position and end up with the array arranged as we wanted. The partition function returns the index of the pivot.

The other partition algorithm you may use is the **Hoare partition scheme**. Hoare operates with two indices, one at the start of the array and one at the end, which move toward each other. When they detect a pair of elements that are in the wrong order relative to each other, one greater than or equal to the pivot and one less than or equal to it, the algorithm swaps the two values. When the indices meet, the function returns the final index, which separates the two partitions; note that, unlike in Lomuto's scheme, the pivot does not necessarily end up at that index.

## How to implement it, the pseudocode

Now we need to implement the code that will allow our PC to compute each step for us. This is what the pseudocode of the actual quicksort function looks like:

```
quicksort(A, p, r) {
    if p < r
        q = partition(A, p, r)
        quicksort(A, p, q - 1)
        quicksort(A, q + 1, r)
}
```

The idea is to divide the array recursively: each time we call the function, we select a new pivot and rearrange the array with the `partition` function. When the condition `p < r` no longer holds, the recursion stops: the partition contains at most one element, so it cannot be divided any further. The quicksort function leaves us with the array already sorted.

## Complexity

The running time of quicksort depends on whether the partitioning is balanced or unbalanced. If the partitioning is balanced, the algorithm runs asymptotically as fast as merge sort. If the partitioning is unbalanced, however, it can run asymptotically as slowly as insertion sort.

#### Best case

In the best case, the `partition` function produces two subproblems, each of size no more than n/2, since one is of size `⌊n/2⌋` and one of size `⌈n/2⌉ - 1`. In such a case, the algorithm has an asymptotic running time of Θ(n *log* n). This makes quicksort one of the fastest sorting algorithms in the best case.
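Written as a recurrence (the standard divide-and-conquer analysis, where T(n) denotes the running time on n elements), the balanced split gives:

```latex
T(n) = 2\,T(n/2) + \Theta(n) = \Theta(n \log n)
```

The Θ(n) term is the cost of `partition`, and the two halves each cost T(n/2), exactly as in merge sort.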

#### Worst case

The worst case occurs when the partition function returns one subarray with `n - 1` elements and another with `0` elements.

Thus, if the partitioning is maximally unbalanced at every recursive level of the algorithm, we are in the worst possible case, whose running time is Θ(n^{2}). **In such a situation, quicksort runs as slowly as insertion sort**.
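In recurrence form, the maximally unbalanced split gives:

```latex
T(n) = T(n-1) + T(0) + \Theta(n) = T(n-1) + \Theta(n) = \Theta(n^2)
```

Each level only peels off one element, so the Θ(n) partition cost is paid n times. This happens, for example, when the input is already sorted and the last element is always chosen as the pivot.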

#### Average case

In the average case, `partition` produces a mix of “good” and “bad” splits. It can be shown that the average-case running time of quicksort is still Θ(n *log* n), just like the best case. This makes quicksort **one of the fastest algorithms** in the average case, and that is why it is usually the algorithm of choice for the sort functions included in language libraries.

## Where do we use Quicksort?

Due to its fast average-case running time, quicksort is often used to sort generic arrays in most languages. The basic version of the algorithm, which we explained in this article, operates with a 2-way partitioning scheme. You can find several ways to optimize the algorithm even further (what about a 3-way partitioning?).

## Programming languages implementation

### C++

```
#define SIZE 8

// Exchange values at positions p1 and p2
void swap(int A[], int p1, int p2) {
    int temp = A[p1];
    A[p1] = A[p2];
    A[p2] = temp;
}

// Divide the array and return the pivot index
int partition(int A[], int p, int r) {
    // Get current pivot value (last position)
    int x = A[r];
    int i = p - 1;
    // Reorder the array based on the pivot value
    for (int j = p; j < r; ++j) {
        // If selected value is less than or equal to the pivot's
        if (A[j] <= x) {
            ++i;
            swap(A, i, j);
        }
    }
    // Put the pivot in its position and return its index
    swap(A, i + 1, r);
    return i + 1;
}

// Call quicksort recursively
void quicksort(int A[], int p, int r) {
    if (p < r) {
        // Get the pivot
        int q = partition(A, p, r);
        // Order the two subarrays recursively
        quicksort(A, p, q - 1);
        quicksort(A, q + 1, r);
    }
}

int main() {
    // Array to sort
    int array[SIZE] = {2, 6, 3, 8, 7, 1, 5, 4};
    // r is the index of the last element, hence SIZE - 1
    quicksort(array, 0, SIZE - 1);
    return 0;
}
```

## Reverse order sorting

To sort the list from the greatest to the smallest number (reverse sorting), you just need to change the partition function so that all the values before the pivot are greater than the pivot's value and all the values after the pivot are less than it. To do that, we just need to replace `A[j] <= x` with `A[j] >= x`, that simple!

```
partition(A, p, r) {
    x = A[r]
    i = p - 1
    for j = p to r - 1
        if A[j] >= x
            i = i + 1
            swap A[i] with A[j]
    swap A[i + 1] with A[r]
    return i + 1
}
```

That was quicksort. You can also check out the previous two articles about insertion sort and merge sort on our website. Stay tuned and subscribe to our newsletter to be notified about our next sorting explanation.