When we are solving a problem, different algorithms may all provide the correct answer. How can we choose the best of them? We have to learn how to compare their performance, so that we can choose the best one for a particular problem. We have to consider both space complexity and time complexity, which we already talked about in the introduction to algorithms article.

Time complexity

The time complexity of an algorithm quantifies the amount of time taken by an algorithm to run.

Space complexity 

The space complexity of an algorithm quantifies the amount of memory taken by an algorithm to run.

As said previously, space and time complexity depend on a lot of things, like hardware, CPU, operating system, memory, etc. We will not consider any of these factors: in order to compare algorithms, we will only consider their execution time.

Find an element in an array of integers

We have an array of integers A of length N and we want to know whether a number x exists in our array. We choose the most intuitive way: we look at each element of the array in order to find whether it is equal to x.

for i = 1 to A.length()

    if A[i] is equal to x return true

return false

Each operation takes approximately a constant time c.

C++ translation

#include <iostream>

using namespace std;

#define N 10

int main() {

    int A[N] = {
      1,
      2,
      3,
      4,
      5,
      6,
      7,
      8,
      9,
      0
    };
    int x = 0;

    cout << "Insert the number to search" << endl;
    cin >> x;

    bool found = false;
    for (size_t i = 0; i < N; ++i) {
      if (A[i] == x) {
        found = true;
        break;
      }
    }

    if (found) {
      cout << "Found" << endl;
    } else {
      cout << "Not found" << endl;
    }

    return 0;
}

We can consider three cases:

  • Best case: the element x is in the first position of the array, so we execute the body of the for cycle only once.
  • Average case: we consider the hypothesis that the element x is in the middle of our array. In this case, we have to execute the for cycle about halfway through.
  • Worst case: we consider that the element x is in the last position of our array, or that it is not present at all. In this case, we have to execute the whole for cycle, so the execution cost is the greatest.

In order to compare algorithm complexities, we always consider the worst case.

In our example, the worst case is when we execute the for cycle N times, where N is the length of the array A. So the execution time will be

N*c + c

where N*c accounts for the if condition and c for the return statement.

As we can see, the total time depends on the length of A: if the length increases, the time increases.

Because of that, we introduce the order of growth, a function that binds the execution time to the length of the input.

In our example, the execution time depends linearly on the length of the array. We will always ignore lower-order terms because they do not matter for large inputs.

That said, there are different notations that we use to describe the limiting behavior of a function.

Functions limiting behaviour

Θ notation

We use the Θ notation when we have an asymptotically tight bound.

Θ(g(n)) = {f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0}

O notation

We use the O notation when we have an asymptotic upper bound.

O(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0}

 

Ω notation

We use the Ω notation when we have an asymptotic lower bound.

Ω(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0}

Theorem  

For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n)=Ω(g(n))
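As a worked example of these definitions (my own, with concrete constants), we can check that f(n) = 2n + 3 is Θ(n):

```latex
% Lower and upper bound in one chain, valid for all n >= 1:
\[
  0 \;\le\; 2n \;\le\; 2n + 3 \;\le\; 2n + 3n \;=\; 5n
  \quad \text{for all } n \ge 1
\]
% Take c_1 = 2, c_2 = 5, n_0 = 1: the definition of \Theta(n) is satisfied.
% By the theorem, f(n) = O(n) and f(n) = \Omega(n) as well.
```

So with c1 = 2, c2 = 5 and n0 = 1 the definition of Θ(n) is satisfied, and by the theorem f(n) = O(n) and f(n) = Ω(n) as well.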

While analyzing an algorithm, we mostly consider the big O notation, because it gives an upper bound, i.e. an upper limit on the execution time.

As said before, we will ignore the lower-order terms. For example, let

f(n) = 3n² + 5n + 6

So

O(f(n)) = O(3n² + 5n + 6) = O(n²)

Analyze an algorithm

Let's consider this example. We have an array of integers A of length N and we want to find the minimum value.

The idea is to assign the first element to a variable that we call min, and then compare it with all the others. If an item is lower than min, it becomes the new min.

min = A[0]

for i = 1 to A.length() = N

    if A[i] is lower than min

        then min = A[i]

return min

 

Each operation takes approximately a constant time c1, c2, ..., c5.

We now have to consider the number of times that each operation is executed.

Operation                     Cost   # of times
min = A[0]                    c1     1
for i = 1 to i = N            c2     N
if A[i] is lower than min     c3     N-1
then min = A[i]               c4     N-1
return min                    c5     1

 

Let's sum up all the costs.

T(n) = c1 + Nc2 + (N-1)c3+ (N-1)c4 + c5

So

T(n) = (c2+c3+c4)N + (c1+c5-c3-c4)

In a different formula

T(n) = aN+b

The execution time is linear and the complexity is Θ(N).

 

Comparison between algorithms

So, which algorithm are we going to choose?

Now we can compare algorithms by their complexity, as follows.

[Image: complexity comparison chart]

But we are not going to calculate the complexity of an algorithm every time. Here is a table with the best-known ones.

[Image: table of well-known algorithms and their complexities]

That's all!