Big O is a topic that is often stressed in Computer Science. Many students walk away from a Big O lecture with a fuzzy (at best) understanding.

So let's spend some time explaining what Big O is and what it is not. Big O is not a way to compare the absolute speed or run time of multiple programs. It is not a magic bullet that will ensure your code is fast enough.

Big O does provide a way to compare the "relative" efficiency of multiple algorithms. It does this by ignoring the discrete computations (and their run time) and instead looking at how many times something will be done. At first glance, Big O may seem very arbitrary.
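To make "ignoring the computations and counting the repetitions" concrete, here is a small sketch (the function names are illustrative, not from the original text). Both functions loop over the input exactly once, so both are O(n), even though the second does noticeably more arithmetic per element:

```python
def cheap_pass(items):
    total = 0
    for x in items:          # the loop body runs len(items) times
        total += x           # one cheap operation per element
    return total

def pricey_pass(items):
    total = 0
    for x in items:          # still runs len(items) times
        total += x * x + 2 * x + 1   # more work per element, but same O(n)
    return total
```

Big O treats these as equivalent: it counts how often the loop body runs, not how expensive each pass through it is.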

Big O attempts to classify how many times your code will repeat a computation, then match that count against a list of known measurements (called Complexity Classes). Most of the time, the goal is to determine the Big O (Complexity Class) of several algorithms and then choose to implement the algorithm with the simplest (hence fastest) complexity. Here is the list of Complexity Classes (from simplest/fastest to most complicated/slowest):

- **O(1):** The running time of a constant-time algorithm does not depend on the input size. A typical constant-time algorithm is a direct formula that calculates the answer.
- **O(log n):** A logarithmic algorithm often halves the input size at each step. The running time of such an algorithm is logarithmic, because log₂ n equals the number of times n must be divided by 2 to get 1.
- **O(√n):** A square root algorithm is slower than O(log n) but faster than O(n). A special property of square roots is that `√n = n/√n`, so the square root `√n` lies, in some sense, in the middle of the input.
- **O(n):** A linear algorithm goes through the input a constant number of times. This is often the best possible time complexity, because it is usually necessary to access each input element at least once before reporting the answer.
- **O(n log n):** This time complexity often indicates that the algorithm sorts the input, because the time complexity of efficient sorting algorithms is O(n log n). Another possibility is that the algorithm uses a data structure where each operation takes O(log n) time.
- **O(n²):** A quadratic algorithm often contains two nested loops. It is possible to go through all pairs of the input elements in O(n²) time.
- **O(n³):** A cubic algorithm often contains three nested loops. It is possible to go through all triplets of the input elements in O(n³) time.
- **O(2ⁿ):** This time complexity often indicates that the algorithm iterates through all subsets of the input elements. For example, the subsets of `{1,2,3}` are `∅`, `{1}`, `{2}`, `{3}`, `{1,2}`, `{1,3}`, `{2,3}` and `{1,2,3}`.
- **O(n!):** This time complexity often indicates that the algorithm iterates through all permutations of the input elements. For example, the permutations of `{1,2,3}` are `{1,2,3}`, `{1,3,2}`, `{2,1,3}`, `{2,3,1}`, `{3,1,2}` and `{3,2,1}`.
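The O(log n) entry above can be sketched directly: the function below (a hypothetical helper, not from the original text) counts how many times n must be divided by 2 to get 1, which is exactly the number of steps a halving algorithm such as binary search takes:

```python
def halving_steps(n):
    """Count how many times n can be halved before reaching 1.

    This count is floor(log2(n)), which is why algorithms that halve
    their input at each step are classified as O(log n).
    """
    steps = 0
    while n > 1:
        n //= 2       # halve the remaining input
        steps += 1
    return steps
```

For example, `halving_steps(1024)` returns 10, since 1024 can be halved ten times before reaching 1. Even for an input of a million elements, a logarithmic algorithm needs only about 20 steps.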

*Determining Big O is the process of examining an algorithm, finding its
most complex section, and then classifying that complexity.*

The above statement is important. If an algorithm has multiple phases/sections, you examine each section. The Big O of the algorithm is the Big O of the most complex section. At first this does not seem to make sense, but think about it. If an algorithm has several sections such as this:

- loop O(n)
- loop O(n²)
- loop O(n)

then its total run time is O(n) + O(n) + O(n²), or O(n²) + 2 * O(n). For large values of n, 2 * O(n) is much smaller than O(n²). Since the goal of Big O is to determine a rough approximation, the smaller part is ignored and the Big O of the algorithm is O(n²).
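A quick sketch makes the "dominant section wins" rule tangible. The function below (an illustrative example, not from the original text) has the three phases described above and counts how often each phase's loop body runs:

```python
def phased_ops(n):
    """Run three phases -- O(n), O(n^2), O(n) -- and count the operations."""
    ops = 0
    # Phase 1: a single loop, O(n)
    for _ in range(n):
        ops += 1
    # Phase 2: two nested loops, O(n^2) -- the dominant section
    for _ in range(n):
        for _ in range(n):
            ops += 1
    # Phase 3: another single loop, O(n)
    for _ in range(n):
        ops += 1
    return ops
```

For n = 10 this performs 10 + 100 + 10 = 120 operations; for n = 1000 it performs 1,000 + 1,000,000 + 1,000. As n grows, the nested-loop phase accounts for nearly all of the work, which is why the algorithm as a whole is classified as O(n²).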