Examples
Simplifying Runtime Examples
Let’s put the previous rules into practice. We will find the Big-O bound of various runtime functions.
Example 1
T(N) = 45n + 36
= O(n + 36) by rule (2) we can ignore the coefficient of n
= O(n) by rule (3) we only consider the most expensive part of a sum
Example 2
T(N) = 5n + 10 log n + n log n
= O(n + log n + n log n) by rule (2) we can ignore the coefficients
Now we have to figure out which part of this sum is the most expensive. We’re using rule (1): n is O(n), and log n is O(n).
What about n log n? Let’s use rule (1) again. Since log n is O(n), we can bound n log n by O(n*n) = O(n²). Note that this is only an upper bound. n log n is NOT O(n): log n grows without bound, so n log n eventually exceeds c*n for any constant c. This means n log n is the most expensive part of this function.
Remember that Big-O is an upper bound. We’ve shown that n and log n can each be bounded by n (within some constant factor), but n log n cannot be bounded by n (when n gets large enough, of course).
This means
= O(n + log n + n log n)
= O(n log n) by rule (3)
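To see concretely why n log n outgrows any constant multiple of n, here is a minimal sketch in Java (the class name GrowthCheck and the sample sizes are our own illustration, not part of the original examples):

// Hypothetical demo: the ratio (n log n) / n = log n keeps growing,
// so no single constant c satisfies n log n <= c*n for all large n.
public class GrowthCheck {
    public static void main(String[] args) {
        for (long n = 2; n <= 2_000_000; n *= 10) {
            double log2n = Math.log(n) / Math.log(2); // the ratio (n log n)/n
            System.out.printf("n = %,d   (n log n)/n = %.1f%n", n, log2n);
        }
    }
}

Since the ratio itself grows without bound, no constant can absorb it, which is exactly why rule (3) keeps the n log n term.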
Example 3
T(N) = 2,450n³ + n² + 10,000n + 345,678,983
= O(n³ + n² + n + 1) by rule (2) the coefficients don’t matter
= O(n³) by rule (3)
Example 4
T(N) = 45n + n log n + 3n² + n³ + 2ⁿ
= O(n + n log n + n² + n³ + 2ⁿ) by rule (2)
= O(n log n + n² + n³ + 2ⁿ) by rule (3)
= O(n³ + 2ⁿ) by rule (3)
= O(2ⁿ) by rule (3)
The steps above are outlined specifically to show the order of growth (n grows the slowest; n log n and n² are dropped together because both grow more slowly than n³; and n³ in turn grows more slowly than 2ⁿ).
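The final step relies on 2ⁿ eventually dominating n³, which is easy to check numerically. A minimal sketch (the class name and sample values are our own illustration):

// Hypothetical demo: n^3 is larger for small n, but 2^n overtakes
// it around n = 10 and then leaves it far behind.
public class ExponentialVsCubic {
    public static void main(String[] args) {
        for (int n = 1; n <= 26; n += 5) {
            System.out.printf("n = %2d   n^3 = %,d   2^n = %,d%n",
                    n, (long) n * n * n, 1L << n);
        }
    }
}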
We’ve dealt with simple functions (polynomials, logarithms, and an exponential) in this introduction to Big-O. More complicated functions require more complicated proofs and techniques, but for our introduction this is sufficient. You will learn how to analyze more complicated functions in your Algorithms or Discrete Math class.
Determining Algorithms’ Big-O Runtime
In the examples below, we will represent the approximate runtime function as T(N).
Example 1
We begin with an analysis of a simple assignment to an integer variable.
int a = b;
Because the assignment statement always takes the same number of operations, it is O(1).
Example 2
Consider the following loop.
for (int i = 1; i <= 10; i++) {
    System.out.println(i);
}
This loop will always iterate 10 times and perform the same number of operations. It is not dependent on the size of any input, so its runtime is also O(1).
Example 3
Consider a different simple for loop.
int sum = 0;
for (int i = 1; i <= n; i++) {
    sum += n;
}
The first line is O(1). The for loop repeats n times, and the statement inside it takes constant time, so this gives us roughly T(N) = 2n + 2. By simplifying rule (4) (the cost of a loop is the cost of its body multiplied by the number of iterations) and rule (2), the total cost of the two lines making up the for loop is O(n). By rule (3) (the more expensive part of a sum outweighs the less expensive part), the cost of the entire code fragment is also O(n).
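One way to convince yourself of this count is to instrument the fragment with a step counter. A rough sketch (the class CountSteps and the counting convention are our own; exactly what counts as one “operation” is a modeling choice):

// Hypothetical instrumentation: count roughly one step per executed
// statement. The total grows linearly in n, matching T(N) = 2n + 2.
public class CountSteps {
    public static void main(String[] args) {
        int n = 1000;
        long steps = 0;
        int sum = 0;
        steps++;                     // the first assignment: 1 step
        for (int i = 1; i <= n; i++) {
            steps += 2;              // loop bookkeeping + loop body
            sum += n;
        }
        System.out.println("steps = " + steps + ", 2n + 2 = " + (2 * n + 2));
    }
}

Doubling n roughly doubles the count, which is the behavioral signature of O(n).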
Example 4
We will now analyze a code fragment with several for loops, some of which are nested.
int sum = 0;                         // assignment statement
int[] arr = new int[n];              // assignment statement
for (int j = 1; j <= n; j++) {       // First for loop
    for (int i = 1; i <= n; i++) {   // is a double loop
        sum++;
    }
}
for (int k = 0; k < n; k++) {        // Second for loop
    arr[k] = k;
}
This code fragment has several separate statements: the two assignment statements and the two for loops. Again, the assignment statements take constant time; call it c. The second for loop is just like the one in Example 3, so the runtime so far of the assignment statements and the second for loop is T(N) = c + kn (where k represents the constant work done on each iteration of the loop), which gives us O(n) time.
The first for loop is a double loop and requires a special technique. We work from the inside of the loop outward. The expression sum++ requires constant time; call it s. Because the inner for loop is executed n times, by simplifying rule (4) it has cost sn. The outer for loop is executed n times. For each of those n iterations, the inner loop executes n times, so the runtime for the nested loop is T(N) = n * (n * s) = sn². The runtime for this portion is O(n²). By simplifying rule (3), O(n) + O(n²) is simply O(n²).
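The same instrumentation trick exposes the quadratic growth of the double loop: doubling n should quadruple the count. A minimal sketch (again, the class name and sizes are our own):

// Hypothetical check: the body of the double loop runs exactly n * n
// times, so each doubling of n multiplies the count by four.
public class CountNested {
    public static void main(String[] args) {
        for (int n = 100; n <= 400; n *= 2) {
            long count = 0;
            for (int j = 1; j <= n; j++) {
                for (int i = 1; i <= n; i++) {
                    count++;         // stands in for sum++
                }
            }
            System.out.println("n = " + n + "   body executed " + count + " times");
        }
    }
}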
Example 5
Consider this loop with different loop parameters.
int total = 0;
for (int i = 1; i <= n; i *= 2) {
    total += n;
}
In this loop, i is doubled on every iteration rather than incremented by one. The question becomes, “How many times can you double i until it reaches n?” The answer is log(n).
The assignment statement is constant. As per the paragraph above, the loop iterates log(n) times. The statement inside the loop is constant, but it is executed log(n) times. This gives us the rough runtime T(N) = log(n) + c, where c represents all of the constant-time statements. Using rule (3) (the more expensive part outweighs the less expensive part), this is O(log(n)).
Note that we don’t specify the base of the log. It’s assumed to be base 2, but it doesn’t actually matter: even if i were tripled every iteration, the loop would still be O(log(n)).
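Both claims, the log(n) iteration count and the irrelevance of the base, can be verified by counting iterations directly. A small sketch (the class LogLoops and the helper countIterations are our own invention):

// Hypothetical demo: count the iterations of the Example 5 loop when
// i is doubled vs. tripled. Both counts grow like log(n) and differ
// only by a constant factor (about log(3)/log(2) = 1.58).
public class LogLoops {
    static long countIterations(long n, int factor) {
        long iterations = 0;
        for (long i = 1; i <= n; i *= factor) {
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        for (long n = 1_000; n <= 1_000_000_000L; n *= 1_000) {
            System.out.println("n = " + n
                    + "   doubling: " + countIterations(n, 2)
                    + "   tripling: " + countIterations(n, 3));
        }
    }
}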
Other Control Structures
What about other control statements? While loops are analyzed in a manner similar to for loops. The cost of an if statement in the worst case is the greater of the costs for the then and else clauses. This is also true for the average case, assuming that the size of n does not affect the probability of executing one of the clauses (which is usually, but not necessarily, true). For switch statements, the worst-case cost is that of the most expensive branch. For subroutine calls, simply add the cost of executing the subroutine.
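As a concrete illustration of the if statement rule, consider this sketch (the method branchExample and its parameters are our own invention): the then clause costs O(n) and the else clause O(1), so the worst-case cost of the whole statement is O(n).

// Hypothetical example: the worst-case cost of an if statement is the
// cost of its more expensive branch.
static int branchExample(int[] data, boolean flag) {
    int result = 0;
    if (flag) {
        for (int x : data) {       // then clause: O(n)
            result += x;
        }
    } else {
        result = data.length;      // else clause: O(1)
    }
    return result;                 // worst case: O(n)
}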