Complexity Notation

In the previous section, we saw how we can use big O notation to measure the runtime performance of our algorithms in proportion to the input size. However, we have neither examined in detail what O(n) really means, nor have we considered how our algorithm performs in relation to the type of input it is given.

Consider the following code snippet. The method accepts a string and an array of strings, and searches the array for a match. If one is found, the index of the matching element is returned. We will use this example to try to measure the runtime complexity. The code is as follows:

public int search(String strToMatch, String[] strArray) {
    for (int i = 0; i < strArray.length; i++) {
        if (strArray[i].equals(strToMatch)) {
            return i;
        }
    }
    return -1;
}
Snippet 1.3: Array search. Source class name: ArraySearch.

Go to https://goo.gl/egw1Sn to access the code.

There are a number of operations happening inside the loop. The obvious ones are the array access at index i and the string equality check. However, we also have the increment of i, the assignment of the new incremented value back to i, and the comparison of i against the length of the array. This is still not the whole story: the equals() method also compares each character of the search string against the corresponding character of the element at index i.

The following table lists all these operations:

Operation name                                  Code                  Count
Array access                                    strArray[i]           1
String equality                                 .equals(strToMatch)   String length
Array pointer increment and assignment          i = i + 1             2
Reading array length and comparing to pointer   i < strArray.length   2
Table 1.5: Operations performed in the ArraySearch method for every item

We have seen that for each item processed in the search array, we perform 5 + m operations, where m is the length of the search string. The next step is to work out how many times we perform these operations. The number of times we execute the operations listed in Table 1.5 doesn't depend only on the length of our input; it also depends on how quickly we find our match in the input array, that is, on the actual contents of the array.
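
To make this counting concrete, the following sketch is a hypothetical instrumented variant of the search (the OperationCountSketch class and countOperations method are illustrative names, not part of the source). It charges every processed item the full 5 + m operations from Table 1.5, just as the text does, even though equals() may stop early on a mismatch:

public class OperationCountSketch {

    // Illustrative tally of the operations from Table 1.5. Every processed
    // item is charged the full 5 + m operations, matching the simplified
    // model in the text, even though equals() can stop early on a mismatch.
    public static long countOperations(String strToMatch, String[] strArray) {
        long ops = 0;
        for (int i = 0; i < strArray.length; i++) {
            // 2 (read length and compare) + 1 (array access)
            // + 2 (increment and assignment) + m (character comparisons)
            ops += 5 + strToMatch.length();
            if (strArray[i].equals(strToMatch)) {
                return ops;
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        String[] data = {"one", "two", "three"};
        // "three" has 5 characters, so each processed item costs 5 + 5 = 10.
        System.out.println(countOperations("three", data)); // prints 30
    }
}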

The best case of an algorithm is when the input causes the algorithm to perform in the most efficient manner possible. The worst case is the opposite, which is when a particular input makes it behave in the least efficient manner possible.

If we are lucky and the item we are searching for is located at the first position of the search array, we perform only 5 + m operations. This is the best case, and it is the fastest our search can complete.

The worst case of this algorithm is either when our item is at the end of the array or when the item is not found at all. Both of these scenarios require us to check the entire contents of the array. In the worst case, we end up performing n(5 + m) operations, where n is the array size.
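
As a quick illustration, the calls below (the array contents are made up for this example) exercise both extremes of the search method from Snippet 1.3, assuming it is exposed as a public method of the ArraySearch class named in the snippet caption:

public class BestWorstCaseDemo {
    public static void main(String[] args) {
        ArraySearch searcher = new ArraySearch();
        String[] fruits = {"apple", "banana", "cherry", "date"};

        searcher.search("apple", fruits); // best case: match at index 0, about 5 + m operations
        searcher.search("date", fruits);  // worst case: match at the last index, about n(5 + m)
        searcher.search("mango", fruits); // worst case: no match at all, also n(5 + m); returns -1
    }
}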

In this example, we can say that the worst-case runtime complexity for our algorithm is O(mn) and our best case, when our algorithm finds a match immediately, is O(m). We will see in the following sub-section how we arrive at this result from 5 + m and n(5 + m).

Another algorithmic analysis that is commonly used is the average case performance. The average case complexity can be found by averaging the performance over all possible inputs. This is useful because, in certain scenarios, the worst case has a low chance of occurring.
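
For this particular search, a rough average can be worked out by assuming the match is equally likely to sit at any of the n positions: the loop then runs (n + 1) / 2 times on average, so the average cost is about ((n + 1) / 2)(5 + m) operations, which is still O(mn). The sketch below (the class and method names are illustrative) averages the iteration count over every possible match position:

public class AverageCaseSketch {

    // Average the number of loop iterations over all possible match
    // positions, assuming each position is equally likely. The sum
    // 1 + 2 + ... + n equals n(n + 1) / 2, so the average is (n + 1) / 2.
    public static double averageIterations(int n) {
        long total = 0;
        for (int matchIndex = 0; matchIndex < n; matchIndex++) {
            total += matchIndex + 1; // iterations needed when the match is at matchIndex
        }
        return (double) total / n;
    }

    public static void main(String[] args) {
        System.out.println(averageIterations(10));   // prints 5.5
        System.out.println(averageIterations(1000)); // prints 500.5
    }
}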

Although we have the best, average, and worst-case complexities, the worst case is the one most commonly used when measuring and comparing algorithms with one another. Apart from runtime performance, the other most common use of big O notation is to measure memory requirements. However, it can be used for any resource, such as disk space or network usage.
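
As a simple illustration of using big O for memory, the two hypothetical methods below (not from the source) solve the same problem with different space requirements: the first allocates a new array and therefore uses O(n) extra memory, while the second works in place and uses only O(1) extra memory:

public class SpaceComplexityDemo {

    // Builds a reversed copy of the input: O(n) extra memory.
    public static int[] reversedCopy(int[] input) {
        int[] result = new int[input.length];
        for (int i = 0; i < input.length; i++) {
            result[i] = input[input.length - 1 - i];
        }
        return result;
    }

    // Reverses the array in place with two indices: O(1) extra memory.
    public static void reverseInPlace(int[] input) {
        int left = 0;
        int right = input.length - 1;
        while (left < right) {
            int temp = input[left];
            input[left] = input[right];
            input[right] = temp;
            left++;
            right--;
        }
    }
}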