Caching Results

When computations are expensive, it may be a good idea to remember past results to make future requests faster. Using a cache is quite simple as it typically translates to the pseudo-code shown in Listing 1–10.

Listing 1–10. Using a Cache

    result = cache.get(n); // input parameter n used as key
    if (result == null) {
        // result was not in the cache so we compute it and add it
        result = computeResult(n);
        cache.put(n, result); // n is the key, result is the value
    }
    return result;

The faster recursive algorithm to compute Fibonacci terms yields many duplicate calculations and could greatly benefit from memoization. For example, computing the 50,000th term requires computing the 25,000th and 24,999th terms. Computing the 25,000th term requires computing the 12,500th and 12,499th terms, while computing the 24,999th term requires computing… the same 12,500th and 12,499th terms again! Listing 1–11 shows a better implementation using a cache.

If you are familiar with Java, you may be tempted to use a HashMap as your cache, and it would work just fine. However, Android defines SparseArray, a class that is intended to be more efficient than HashMap when the key is an integer value: HashMap would require the key to be of type java.lang.Integer, while SparseArray uses the primitive type int for keys. Using HashMap would therefore trigger the creation of many Integer objects for the keys, which SparseArray simply avoids.

Listing 1–11. Faster Recursive Implementation Using BigInteger, long Primitive Type, and Cache

import android.util.SparseArray;
import java.math.BigInteger;

public class Fibonacci {
    public static BigInteger computeRecursivelyWithCache(int n)
    {
        SparseArray<BigInteger> cache = new SparseArray<BigInteger>();
        return computeRecursivelyWithCache(n, cache);
    }

    private static BigInteger computeRecursivelyWithCache(int n,
            SparseArray<BigInteger> cache)
    {
        if (n > 92) {
            // F(93) and beyond do not fit in a long, so use BigInteger and the cache
            BigInteger fN = cache.get(n);
            if (fN == null) {
                int m = (n / 2) + (n & 1); // m = ceil(n/2)
                BigInteger fM = computeRecursivelyWithCache(m, cache);
                BigInteger fM_1 = computeRecursivelyWithCache(m - 1, cache);
                if ((n & 1) == 1) {
                    fN = fM.pow(2).add(fM_1.pow(2)); // F(2m-1) = F(m)^2 + F(m-1)^2
                } else {
                    fN = fM_1.shiftLeft(1).add(fM).multiply(fM); // F(2m) = (2*F(m-1) + F(m)) * F(m)
                }
                cache.put(n, fN);
            }
            return fN;
        }
        return BigInteger.valueOf(iterativeFaster(n));
    }

    private static long iterativeFaster(int n) { /* see Listing 1–5 for implementation */ }
}

Measurements showed that computeRecursivelyWithCache(50000) takes about 20 milliseconds to complete, about 50 milliseconds less than a call to computeRecursivelyFasterUsingBigIntegerAndPrimitive(50000). Obviously, the difference is exacerbated as n grows: when n equals 200,000, the two methods complete in 50 and 330 milliseconds respectively.

Because many fewer BigInteger objects are allocated, the fact that BigInteger is immutable is not as big of a problem when using the cache. However, remember that three BigInteger objects are still created (two of them being very short-lived) when fN is computed, so using mutable big integers would still improve performance.

Even though using HashMap instead of SparseArray may be a little slower, it would have the benefit of making the code Android-independent, that is, you could use the exact same code in a non-Android environment (without SparseArray).
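A minimal sketch of such an Android-independent version is shown below; it uses the same algorithm as Listing 1–11, with the cache declared as a HashMap instead of a SparseArray (the int keys are then autoboxed into Integer objects):

import java.math.BigInteger;
import java.util.HashMap;

public class Fibonacci {
    public static BigInteger computeRecursivelyWithCache(int n)
    {
        // HashMap works on any Java platform but autoboxes every int key into an Integer
        HashMap<Integer, BigInteger> cache = new HashMap<Integer, BigInteger>();
        return computeRecursivelyWithCache(n, cache);
    }

    private static BigInteger computeRecursivelyWithCache(int n,
            HashMap<Integer, BigInteger> cache)
    {
        if (n > 92) {
            BigInteger fN = cache.get(n);
            if (fN == null) {
                int m = (n / 2) + (n & 1); // m = ceil(n/2)
                BigInteger fM = computeRecursivelyWithCache(m, cache);
                BigInteger fM_1 = computeRecursivelyWithCache(m - 1, cache);
                if ((n & 1) == 1) {
                    fN = fM.pow(2).add(fM_1.pow(2));
                } else {
                    fN = fM_1.shiftLeft(1).add(fM).multiply(fM);
                }
                cache.put(n, fN);
            }
            return fN;
        }
        return BigInteger.valueOf(iterativeFaster(n));
    }

    private static long iterativeFaster(int n) { /* see Listing 1–5 for implementation */ }
}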

NOTE: Android defines multiple types of sparse arrays: SparseArray (to map integers to objects), SparseBooleanArray (to map integers to booleans), and SparseIntArray (to map integers to integers).

android.util.LruCache<K, V>

Another class worth mentioning is android.util.LruCache<K, V>, introduced in Android 3.1 (codename Honeycomb MR1), which makes it easy to define the maximum size of the cache when it is allocated. Optionally, you can also override the sizeOf() method to change how the size of each cache entry is computed. Because it is only available in Android 3.1 and later, you may still have to use a different class to implement a cache in your own application if you target Android revisions older than 3.1. This is a very likely scenario considering that, as of today, Android 3.1 represents only a very small portion of the Android devices in use. An alternative solution is to extend java.util.LinkedHashMap and override removeEldestEntry, as sketched below. An LRU cache (for Least Recently Used) discards the least recently used items first. In some applications, you may need exactly the opposite, that is, a cache that discards the most recently used items first. Android does not define such an MruCache class for now, which is not surprising considering MRU caches are not as commonly used.
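Here is a minimal sketch of that alternative; the LruMap class name and the maxEntries capacity parameter are illustrative choices, not part of the Android framework:

import java.util.LinkedHashMap;
import java.util.Map;

public class LruMap<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruMap(int maxEntries) {
        // The third constructor argument enables access order, so iteration goes
        // from least recently used to most recently used entries
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called by put(); returning true evicts the eldest (least recently used) entry
        return size() > maxEntries;
    }
}

For example, new LruMap<Integer, BigInteger>(64) behaves like a small LRU cache and could stand in for the cache in Listing 1–11, at the cost of autoboxing the keys.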

Of course, a cache can be used to store information other than computations. A common use of a cache is to store downloaded data such as pictures and still maintain tight control over how much memory is consumed. For example, override LruCache's sizeOf method to limit the size of the cache based on a criterion other than simply the number of entries in the cache. While we briefly discussed the LRU and MRU strategies, you may want to use different replacement strategies for your own cache to maximize cache hits. For example, your cache could first discard the items that are not costly to recreate, or simply randomly discard items. Follow a pragmatic approach and design your cache accordingly. A simple replacement strategy such as LRU can yield great results and allow you to focus your resources on other, more important problems.
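To make the sizeOf() idea mentioned above more concrete, here is a minimal sketch of a picture cache whose size is counted in kilobytes of bitmap data rather than in number of entries (it requires Android 3.1 or later, and the BitmapCache name and the 4,096 KB budget are arbitrary choices for illustration):

import android.graphics.Bitmap;
import android.util.LruCache;

public class BitmapCache extends LruCache<String, Bitmap> {
    public BitmapCache() {
        super(4096); // maximum total size, expressed in the unit returned by sizeOf() (kilobytes here)
    }

    @Override
    protected int sizeOf(String key, Bitmap value) {
        // Each entry counts for its memory footprint in kilobytes, so the cache
        // holds roughly 4 MB of bitmap data regardless of how many entries it has
        return value.getByteCount() / 1024;
    }
}

Whenever an insertion makes the total size exceed the maximum, LruCache evicts the least recently used entries until the cache fits within its budget again.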

We've looked at several different techniques to optimize the computation of Fibonacci numbers. While each technique has its merits, no single implementation is optimal. Often the best results are achieved by combining multiple techniques instead of relying on only one of them. For example, an even faster implementation would use precomputations, a cache mechanism, and maybe even slightly different formulas. (Hint: what happens when n is a multiple of 4?) What would it take to compute F(Integer.MAX_VALUE) in less than 100 milliseconds?
