All software developers know about Big O. For instance, contemporary sorting algorithms usually have \mathcal{O}(n\log{}n) complexity. Something like \mathcal{O}(n\log{}n) looks like a function of a function, but it’s really a classification system.

We start by defining the complexity of an algorithm as an estimate of the number of steps required to solve a problem. This count varies with the size of the problem, so the complexity must be a function of the problem size.
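
As a quick illustration, here is a minimal Python sketch (the function name and setup are my own, purely illustrative) that counts the steps performed by a linear search:

    def linear_search_steps(items, target):
        """Return (found, steps), where steps counts the comparisons made."""
        steps = 0
        for item in items:
            steps += 1
            if item == target:
                return True, steps
        return False, steps

    # In the worst case (target absent), the scan takes exactly len(items) steps.
    print(linear_search_steps(list(range(10)), -1))   # (False, 10)
    print(linear_search_steps(list(range(100)), -1))  # (False, 100)

The worst-case step count here is exactly the input size n, so the complexity really is a function of the problem size.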

Sometimes this can result in nasty-looking functions like f(x)=3x^2+2x+7 or f(x)=5x^2-999. We say that both of these functions are in the category \mathcal{O}(x^2) because they have similar growth properties. This is called Bachmann-Landau notation.
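
To see why, we can exhibit an explicit bound. For every x \ge 1, each lower-order term is at most the corresponding multiple of x^2:

\displaystyle  3x^2+2x+7 \le 3x^2+2x^2+7x^2 = 12x^2

so f(x)=3x^2+2x+7 never exceeds 12x^2 once x \ge 1, and a similar constant works for 5x^2-999.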

In mathematics, this notation describes the asymptotic behavior of a function. If f(n) = \mathcal{O}(g(n)), then we have a guarantee that after a certain point, the graph of f(n) will fall below the graph of g(n), provided you’re willing to scale g(n) by a constant factor. In computer science, we use this to understand how the performance of an algorithm will change as the size of the problem or input grows. There are actually several Bachmann-Landau notations, each of which gives us a different kind of bound on the growth of a function.

Name         | Notation                  | Analogy
-------------|---------------------------|----------------
Big O        | f(n) is \mathcal{O}(g(n)) | f(n) \le g(n)
Little o     | f(n) is o(g(n))           | f(n) < g(n)
Big Omega    | f(n) is \Omega(g(n))      | f(n) \ge g(n)
Little omega | f(n) is \omega(g(n))      | f(n) > g(n)
Theta        | f(n) is \Theta(g(n))      | f(n) = g(n)

Credit goes to MIT for the excellent table and analogies. The definitions of each can be found in the referenced material.

Additionally, some of these definitions can be expressed using calculus. We start with the definition of o; the material says that f(n) is o(g(n)) if:

\displaystyle  \forall C > 0, \exists k \,|\, ( n > k \rightarrow |f(n)| \le C|g(n)| )

That’s a symbolic mouthful. It means, “For all positive values of C, there exists some k such that, if n>k, the absolute value of f(n) is less than or equal to the absolute value of g(n) multiplied by C.”
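
A concrete example helps untangle the quantifiers. Take f(n)=n and g(n)=n^2. Given any C>0, choose k=1/C; then n>k implies

\displaystyle  |f(n)| = n = \tfrac{1}{n}\,n^2 < C\,n^2 = C|g(n)|

so n is o(n^2).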

Separately, any calculus text will give this definition for a limit at infinity. I like Paul’s Online Calculus Notes. It says that \lim_{x\to\infty} f(x) = L if

\displaystyle  \forall \epsilon > 0, \exists N \forall x>N : |f(x)-L| < \epsilon

Again in plain English, “For any positive value of \epsilon, there must exist some value N such that, for any x greater than N, the inequality |f(x)-L| < \epsilon holds.”
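
For example, \lim_{x\to\infty} 1/x = 0: given any \epsilon > 0, choose N = 1/\epsilon; then x>N implies

\displaystyle  |1/x - 0| = \tfrac{1}{x} < \tfrac{1}{N} = \epsilon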

Let’s make some substitutions of variable names in the limit definition above. If we replace f(x) with f(x)/g(x), L with 0, \epsilon with C, x with n, and N with k:

\displaystyle  \forall C > 0, \exists k  \forall n>k : |f(n)/g(n)| < C

or, multiplying both sides of the inequality by |g(n)|,

\displaystyle  \forall C > 0, \exists k  \forall n>k : |f(n)| < C |g(n)|

This boils down to an alternative definition of o. (The strict inequality here, rather than the \le from before, makes no difference, because the condition must hold for every positive C.) f(n) is o(g(n)) if:

\displaystyle  \lim_{n\to\infty} f(n)/g(n) = 0

This allows us to use other mathematical tools, like L’Hôpital’s Rule, to determine whether two given functions satisfy this relationship.
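
For example, to show that \ln n is o(n), note that the ratio is an \infty/\infty form, so L’Hôpital’s Rule applies:

\displaystyle  \lim_{n\to\infty} \frac{\ln n}{n} = \lim_{n\to\infty} \frac{1/n}{1} = 0

If you’d rather let software evaluate the limit, here is a small SymPy sketch (the particular functions are my own examples, not from the material above):

    from sympy import symbols, limit, log, oo

    n = symbols('n', positive=True)

    # log(n) is o(n): the ratio of the two functions tends to 0.
    print(limit(log(n) / n, n, oo))                   # 0

    # 3n^2 + 2n + 7 is o(n^3), but not o(n^2).
    print(limit((3*n**2 + 2*n + 7) / n**3, n, oo))    # 0
    print(limit((3*n**2 + 2*n + 7) / n**2, n, oo))    # 3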
