Before diving into these methods, we suggest you learn about and become familiar with the complexity and running time of an algorithm. We recommend reading our previous article about Space and Time Complexity.

## Substitution method

The first method we are going to present is the substitution method. This method comprises two steps:

- Guess the form of the solution
- Use mathematical induction to find the constraints and show that the solution works

We can use the substitution method to establish either upper or lower bounds on a recurrence. Once we have found the recurrence of our algorithm, we make a guess about what its bound could be, and then we try to prove our hypothesis with this method.

As an example, let us determine an upper bound on the recurrence `T(n) = 2T(⌊n/2⌋) + n`.

We guess the solution is `T(n) = Ο(n lg n)`.

We want to prove that `T(n) ≤ cn lg n` (remember, it is an upper bound: each value should be less than or equal to our guessed solution for an appropriate choice of the constant `c > 0`). We start by assuming that this bound holds for all positive `m < n`, in particular for `m = ⌊n/2⌋`, which yields `T(⌊n/2⌋) ≤ c⌊n/2⌋ lg(⌊n/2⌋)`. Substituting into the recurrence gives

`T(n) ≤ 2(c⌊n/2⌋ lg(⌊n/2⌋)) + n ≤ cn lg(n/2) + n = cn lg n − cn lg 2 + n = cn lg n − cn + n ≤ cn lg n`

where the last step holds as long as `c ≥ 1`.

Mathematical induction now requires us to show that our solution holds for the boundary conditions. With `T(1) = 1`, we derive from the recurrence that `T(2) = 4` and `T(3) = 5`. Now, in order to complete the inductive proof, we just need to choose the value of `c` large enough so that `T(2) ≤ c · 2 lg 2` and `T(3) ≤ c · 3 lg 3`. As it turns out, any choice of `c ≥ 2` suffices for the base cases of `n = 2` and `n = 3` to hold. For most of the recurrences we shall examine, it is straightforward to extend the boundary conditions to make the inductive assumption work for small `n`, and we shall not always explicitly work out the details.
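As a quick sanity check of the inductive claim, the sketch below assumes the recurrence used in the example above, `T(n) = 2T(⌊n/2⌋) + n` with `T(1) = 1`, computes `T(n)` directly, and verifies `T(n) ≤ 2n lg n` over a range of values:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Recurrence assumed from the example: T(n) = 2*T(floor(n/2)) + n, T(1) = 1
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# Verify the guessed bound T(n) <= c * n * lg n with c = 2, for n >= 2
c = 2
for n in range(2, 1000):
    assert T(n) <= c * n * math.log2(n), f"bound fails at n = {n}"
print("T(n) <= 2 n lg n holds for 2 <= n < 1000")
```

This does not replace the inductive proof, of course; it only confirms that the chosen constant is not obviously wrong for small inputs.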

#### Make a good guess

The main problem with the substitution method is that we need to make a guess that works in order to find the right solution of our recurrence. The main idea is to try to recognize and associate a new recurrence solution with a similar one we solved previously. Usually, we can assume that the same solution holds for a slightly different recurrence with nearly the same values.

#### Possible mistakes

- We always have to make sure that our solution works for any value we assign to the constant `c`. If we find our solution off only by a constant value, we can overcome the difficulty by subtracting a lower-order term from our previous guess (`T(n) ≤ cn` becomes `T(n) ≤ cn − d` for some constant `d ≥ 0`).
- We always have to prove the exact form of the inductive hypothesis; if we cannot prove this exact form, we can't say we have proven our hypothesis.
- Sometimes we might be tempted to try a larger guess. Although we can make a larger guess work, we usually find that a smaller, tighter bound also holds.

## Recursion tree

In a recursion tree, each node represents the cost of a single subproblem somewhere in the set of recursive function invocations. We sum the costs within each level of the tree to obtain a set of per-level costs, and then we sum all the per-level costs to determine the total cost of the recursion. In this way, we can produce a good guess, which we then confirm using the substitution method explained above.
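For instance, taking `T(n) = 2T(n/2) + n` as an illustrative recurrence (an assumption for this sketch, with `n` a power of 2), the tree has `lg n + 1` levels and each level costs `n`, which suggests the guess `Θ(n lg n)`:

```python
def level_costs(n):
    """Per-level costs of the recursion tree for T(n) = 2*T(n/2) + n.
    Illustrative sketch; n is assumed to be a power of 2."""
    costs = []
    level = 0
    while n >= 1:
        nodes = 2 ** level          # number of subproblems at this depth
        costs.append(nodes * n)     # each subproblem of size n contributes cost n
        n //= 2                     # subproblem size halves at each level
        level += 1
    return costs

print(level_costs(16))   # [16, 16, 16, 16, 16]
print(sum(level_costs(16)))   # 80
```

Here `level_costs(16)` returns five levels of cost 16 each, i.e. `n(lg n + 1) = 16 · 5 = 80` in total, consistent with the `Θ(n lg n)` guess.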

## Master method

Through the master method we are able to directly solve recurrences of the form

`T(n) = a T(n / b) + ƒ(n)`

where `a ≥ 1` and `b > 1` are constants and `ƒ(n)` is an asymptotically positive function.

We need to memorize three simple cases that will provide us with a direct solution for such recurrences.

- If `ƒ(n) = Ο(n^(log_b a − ε))` for some constant `ε > 0`, then `T(n) = Θ(n^(log_b a))`
- If `ƒ(n) = Θ(n^(log_b a))`, then `T(n) = Θ(n^(log_b a) lg n)`
- If `ƒ(n) = Ω(n^(log_b a + ε))` for some constant `ε > 0`, and if `a ƒ(n / b) ≤ c ƒ(n)` for some constant `c < 1` and all sufficiently large `n`, then `T(n) = Θ(ƒ(n))`
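As a sketch, the hypothetical helper below (not a standard library routine) applies the three cases to recurrences whose driving function is a plain polynomial, `ƒ(n) = n^k`. For such `ƒ`, the regularity condition `a ƒ(n/b) ≤ c ƒ(n)` of the third case holds automatically whenever `k > log_b a`, since `a (n/b)^k = (a / b^k) n^k` with `a / b^k < 1`:

```python
import math

def master_theorem(a, b, k):
    """Classify T(n) = a*T(n/b) + n**k via the master method.
    Illustrative sketch: only handles f(n) = n^k (no log factors in f)."""
    crit = math.log(a, b)            # critical exponent log_b(a)
    if math.isclose(k, crit):
        return f"Theta(n^{crit:g} * lg n)"    # case 2
    if k < crit:
        return f"Theta(n^{crit:g})"           # case 1
    return f"Theta(n^{k:g})"                  # case 3

print(master_theorem(2, 2, 1))   # merge sort, T(n) = 2T(n/2) + n
print(master_theorem(8, 2, 2))   # T(n) = 8T(n/2) + n^2
print(master_theorem(1, 2, 0))   # binary search, T(n) = T(n/2) + 1
```

Merge sort falls into the second case (`log_2 2 = 1 = k`), giving `Θ(n lg n)`; the second example falls into the first case (`log_2 8 = 3 > 2`), giving `Θ(n^3)`.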