Adding caching and clustering

First, we will start by adding a cache to our server. We do not want to constantly recompile pages that we have already compiled before. To do this, we will implement a class that wraps a map. The class will hold up to 10 files at a time, and for each file it will record a timestamp of when that file was last used. When an eleventh file arrives, we will see that it is not in the cache and that we have hit the maximum number of files the cache can hold, so we will evict the file with the earliest timestamp and store the newly compiled page in its place.

This is known as a Least Recently Used (LRU) cache. There are many other caching strategies out there, such as a Time To Live (TTL) cache, which evicts files that have been in the cache for too long. A TTL cache is a great fit when we keep using the same files over and over again but eventually want to free up space once the server has not been hit for a while; an LRU cache will keep those files in place even if the server has not been hit for hours. We could implement both strategies, but we will just implement the LRU cache for now.
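To make the contrast concrete, here is a minimal sketch of what a TTL cache's get could look like. This class is purely illustrative and is not part of the chapter's server; the TTLCache name and maxAgeMs parameter are assumptions for the example:

```javascript
// A minimal TTL cache sketch for contrast with the LRU cache we will build:
// entries older than maxAgeMs are treated as misses and evicted on lookup.
class TTLCache {
  #cache = new Map();
  #maxAge;

  constructor(maxAgeMs = 60_000) {
    this.#maxAge = maxAgeMs;
  }

  add(file, url) {
    this.#cache.set(url, { page: file, time: Date.now() });
  }

  get(url) {
    const val = this.#cache.get(url);
    if (!val) return null;
    if (Date.now() - val.time > this.#maxAge) {
      // The entry has outlived its time to live; drop it and report a miss.
      this.#cache.delete(url);
      return null;
    }
    return val.page;
  }
}
```

Notice that eviction here depends only on an entry's age, not on how recently it was used, which is the key difference from the LRU cache we are about to write.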

First, we will create a new file called cache.js. Inside of here, we will do the following:

  1. Create a new class. We don't need to extend any other class since we are just writing a wrapper around the Map data structure built into JavaScript, as shown in the following code block:
export default class LRUCache {
  #cache = new Map()
}
  2. We will then have a constructor that will take in the number of files that we want to store in the cache before we use our strategy to replace one of the files, like this:
#numEntries = 10

constructor(num = 10) {
  this.#numEntries = num
}
  3. Next, we will add the add operation to our cache. It will take in the buffer form of our page and the URL that we hit to get it. The key will be the URL, and the value will be the buffer form of our page, as shown in the following code block:
add(file, url) {
  const val = {
    page : file,
    time : Date.now()
  }
  if( this.#cache.size === this.#numEntries ) {
    // we will fill in our replacement strategy here shortly
    return;
  }
  this.#cache.set(url, val);
}
  4. Then, we will implement the get operation, whereby we try to grab a file based on the URL. If we do not have it, we will return null. If we do retrieve a file, we will update the time, since this would be considered the latest page grab. This is illustrated as follows:
get(url) {
  const val = this.#cache.get(url);
  if( val ) {
    val.time = Date.now();
    this.#cache.set(url, val);
    return val.page;
  }
  return null;
}
  5. Now, we can update our add method's if statement. If we are at the limit, we will iterate through our map to find the entry with the earliest timestamp. We will remove that file and add the newly created one in its place (note that the temporary return; also goes away, so the set call at the bottom of add now runs), like this:
if( this.#cache.size === this.#numEntries ) {
  let top = Number.MAX_VALUE;
  let earliest = null;
  for(const [key, val] of this.#cache) {
    if( val.time < top ) {
      top = val.time;
      earliest = key;
    }
  }
  this.#cache.delete(earliest);
}
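Assembled from the steps above, the complete class looks as follows (the export default prefix from step 1 is omitted here so the snippet can be run standalone; the inline comments are additions):

```javascript
class LRUCache {
  #cache = new Map();
  #numEntries = 10;

  constructor(num = 10) {
    this.#numEntries = num;
  }

  add(file, url) {
    const val = {
      page : file,
      time : Date.now()
    };
    if( this.#cache.size === this.#numEntries ) {
      // At capacity: evict the entry with the earliest timestamp,
      // which is the least recently used one.
      let top = Number.MAX_VALUE;
      let earliest = null;
      for(const [key, v] of this.#cache) {
        if( v.time < top ) {
          top = v.time;
          earliest = key;
        }
      }
      this.#cache.delete(earliest);
    }
    this.#cache.set(url, val);
  }

  get(url) {
    const val = this.#cache.get(url);
    if( val ) {
      // Refresh the timestamp: this entry is now the most recently used.
      val.time = Date.now();
      this.#cache.set(url, val);
      return val.page;
    }
    return null;
  }
}
```

Note that get refreshes the entry's timestamp; that refresh is what makes the eviction "least recently used" rather than "least recently added".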

We now have a basic LRU cache in place for our files. To attach this to our server, we will need to put it in the middle of our pipeline:

  1. Let's head back into the main file and import this file:
import cache from './cache.js'
const serverCache = new cache();
  2. We will now change a bit of the logic in our stream handler. If we notice the URL is something that we have in the cache, we will just grab the data and pipe it into our response. Otherwise, we will compile the template, set it in our cache, and stream the compiled version down, like this:
const cacheHit = serverCache.get(p);
if( cacheHit ) {
  stream.end(cacheHit);
} else {
  const file = fs.createReadStream('./template/main.html');
  const tStream = new LoopingStream({
    dir: templateDirectory,
    publish: publishedDirectory,
    vars : { /* shortened for readability */ },
    loopAmount : 2
  });
  file.pipe(tStream);
  tStream.once('data', (data) => {
    serverCache.add(data, p);
    stream.end(data);
  });
}

If we run the preceding code, we will see that hitting the same page twice serves the file from the cache, while a first hit compiles the page through our template stream and then stores it in the cache.

  3. To make sure that our replace strategy works, let's go ahead and set the size of the cache to only 1, and see if we constantly replace the file if we hit a new URL, as follows:
const serverCache = new cache(1);

If we log our cache each time a method is hit, we will see that the file is replaced whenever we hit a new page, while staying on the same page just sends the cached file back.
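We can also see both behaviors outside of the server by driving the cache directly. The following sketch repeats a condensed version of the LRUCache class from cache.js so it runs on its own; the page strings and URLs are hypothetical stand-ins for our compiled templates:

```javascript
// Condensed copy of the LRUCache from cache.js, repeated here so this
// snippet is self-contained.
class LRUCache {
  #cache = new Map();
  #numEntries = 10;
  constructor(num = 10) { this.#numEntries = num; }
  add(file, url) {
    if (this.#cache.size === this.#numEntries) {
      // Evict the entry with the earliest timestamp.
      let top = Number.MAX_VALUE;
      let earliest = null;
      for (const [key, val] of this.#cache) {
        if (val.time < top) { top = val.time; earliest = key; }
      }
      this.#cache.delete(earliest);
    }
    this.#cache.set(url, { page: file, time: Date.now() });
  }
  get(url) {
    const val = this.#cache.get(url);
    if (!val) return null;
    val.time = Date.now(); // refresh recency on every hit
    return val.page;
  }
}

// With a capacity of 1, every new URL replaces the previous entry.
const serverCache = new LRUCache(1);
serverCache.add('<html>one</html>', '/one');
console.log(serverCache.get('/one')); // cache hit for /one
serverCache.add('<html>two</html>', '/two'); // /one is evicted
console.log(serverCache.get('/one')); // null — it was replaced
console.log(serverCache.get('/two')); // cache hit for /two
```

This mirrors exactly what the log output from the server shows: a new page pushes the old one out, while repeat hits on the same page come straight from the cache.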

Now that we have added caching, let's add one more piece to our server so that we can handle a lot of connections. We will be adding the cluster module, just as we did in Chapter 6, Message Passing – Learning about the Different Types. We'll proceed as follows:

  1. Let's import the cluster and os modules in the main.js file (we will need os to count the CPU cores):
import cluster from 'cluster'
import os from 'os'
  2. The main process will be responsible for initialization: it will fork one child process per CPU core. The child processes will handle the incoming requests.
  3. Now, let's change our strategy to handle the incoming requests inside of our child processes, like this:
if( cluster.isMaster ) {
  const numCpus = os.cpus().length;
  for(let i = 0; i < numCpus; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  const serverCache = new cache();
  // all previous server logic
}

With this single change, we are now spreading the requests across as many processes as there are CPU cores (four, on a typical four-core machine). Just as we learned in Chapter 6, Message Passing – Learning about the Different Types, the cluster module lets all of these processes share a single port.
