Overview
This chapter explores asynchronous programming in TypeScript using promises: where asynchronous programming is useful and how it is implemented in single-threaded JavaScript with the event loop. By the end of the chapter, you should have a solid understanding of how promises work and how TypeScript can enhance them. You will also be able to build a promise-based app using the concepts taught in this chapter.
In the previous chapter, we learned about asynchronous programming using callbacks. With this knowledge, we can manage concurrent requests and write non-blocking code that allows our applications to render web pages faster or serve concurrent requests on a Node.js server.
In this chapter, we will learn how promises allow us to write more readable, concise code to better manage asynchronous processes and forever escape deep callback nesting, sometimes known as "callback hell." We will explore the evolution of the Promise object and how it eventually became part of the JavaScript language. We'll look at different transpilation targets for TypeScript and how TypeScript can enhance promises and allow developers to leverage generics to infer return types.
We will work on some practical exercises, such as managing multiple API requests from a website and managing concurrency in Node.js. We will use the Node.js FileSystem API to perform asynchronous operations on files and see how powerful asynchronous programming can be.
As we've learned, a callback is a function that is given as an argument to another function, in effect saying, "do this when you are done." This capability has been in JavaScript since its inception in 1995 and can work very well. However, as the complexity of JavaScript applications grew through the 2000s, developers found callback patterns, and nesting in particular, to be messy and unreadable, giving rise to complaints about "callback hell," as shown in the following example:
doSomething(function (err, data) {
  if (err) {
    console.error(err);
  } else {
    request(data.url, function (err, response) {
      if (err) {
        console.error(err);
      } else {
        doSomethingElse(response, function (err, data) {
          if (err) {
            console.error(err);
          } else {
            // ...and so it goes!
          }
        });
      }
    });
  }
});
In addition to making code more readable and concise, promises have advantages over callbacks in that a promise is an object that holds the state of the resolving asynchronous operation. This means that a promise can be stored, and its then() or catch() methods can be called at any time to obtain its settled result. We'll discuss those methods later in this chapter, but it's worth calling out at the beginning that promises are more than syntactic sugar. They open up entirely new programming paradigms in which event-handling logic can be decoupled from the event itself, simply by storing the event in a promise.
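As a sketch of that decoupling (using a hypothetical delay helper, not an API from this chapter), the promise can be created in one place and its handler attached somewhere else entirely, even after the work has already finished:

```typescript
// A stored promise can be handed around and subscribed to at any time,
// even long after the underlying work has completed.
const delay = (ms: number): Promise<number> =>
  new Promise(resolve => setTimeout(() => resolve(ms), ms));

// Kick off the asynchronous work now and keep the promise in a variable.
const pending: Promise<number> = delay(50);

// Somewhere else, possibly much later, attach the handler. The promise is
// already fulfilled by the time this runs, but then() still fires.
setTimeout(() => {
  pending.then(ms => console.log(`resolved after ${ms} ms`));
}, 200);
```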
Promises are not unique to JavaScript but were first proposed as a computer programming concept in the 1970s.
Note
For more information, refer to Friedman, Daniel; David Wise (1976). The Impact of Applicative Programming on Multiprocessing. International Conference on Parallel Processing. pp. 263–272.
As web frameworks gained popularity, proposals for promises started to appear in 2009 and libraries such as jQuery started implementing promise-like objects in 2011.
Note
For more information, refer to the following: https://groups.google.com/g/commonjs/c/6T9z75fohDk and https://api.jquery.com/category/version/1.5/
It wasn't long before Node.js started to have some promise libraries as well. Google's AngularJS bundled the Q library. All of these libraries wrapped callbacks in a higher-level API that appealed to developers and helped them to write cleaner and more readable code.
In 2012, promises were proposed as an official specification in order to standardize the API. The specification was accepted in 2015 and has since been implemented in all major browsers as well as Node.js.
Note
For more details, refer to http://www.ecma-international.org/ecma-262/6.0/#sec-promise-constructor.
"Promisification," the ability to wrap an existing asynchronous function in a promise, was added to many libraries and became part of the util package in the standard Node.js library as of version 8.0 (released in 2017).
TypeScript, as a superset of JavaScript, will always support native language features such as promises; however, TypeScript does not provide polyfills, so if the target environment doesn't support native promises, a library is required.
Most JavaScript runtimes (such as a web browser or Node.js server) are single-threaded execution environments. That means the main JavaScript process will only do one thing at a time. Thanks to the event loop, the runtime will seem like it's capable of doing many things at once as long as we write non-blocking code. The event loop recognizes asynchronous events and can turn to other tasks while it waits for those events to resolve.
Consider the example of a web page that needs to call an API to load data into a table. If that API call were blocking, then that would mean the page render couldn't complete until the data loaded. Our user would have to stare at a blank page until all the data loaded and page elements rendered. But because of the event loop, we can register a listener that allows rendering of the website to continue and then load the table when our data is finally returned. This is visualized in the following figure:
Figure 12.1: A typical event loop
This can be implemented using callbacks or promises. The event loop is what makes this possible. Node.js works similarly, but now we may be responding to requests from a multitude of clients. In this simple example, three different requests are being made:
Figure 12.2: Multiple requests
The API is not blocking so additional requests can come in even when the initial one has not been served. The requests are served in the order the work is completed.
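A minimal sketch of this behavior: synchronous code always runs to completion before any queued callback fires, even one scheduled with a 0 ms delay.

```typescript
console.log("start");            // 1: synchronous, runs immediately

setTimeout(() => {
  console.log("timer callback"); // 3: runs only after the call stack is empty
}, 0);

console.log("end");              // 2: synchronous code finishes first
// Output order: start, end, timer callback
```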
A promise is a JavaScript object that can exist in three states: pending, fulfilled, or rejected. Although promises can be instantly fulfilled or rejected, it is most typical for a promise to be created in a pending state and then resolved to be fulfilled or rejected as an operation succeeds or fails. Promises are chainable and implement several convenience methods that we'll go into.
To understand the states of a promise better, it's important to know that the states of a promise cannot be queried. As a programmer, we do not check the state of the promise and take action based on that state. Rather we provide a function callback that will be invoked when the promise reaches that state. For example, we make an HTTP request to our backend server and get a promise in response. Now we have set up our event and we merely need to tell the promise what to do next and how to handle any errors. Examples of this will follow.
A promise can be instantiated using the new keyword and Promise constructor. When instantiated in this way, Promise expects a callback argument that contains the actual work to be done. The callback has two arguments of its own, resolve and reject. These arguments can be called explicitly to either resolve or reject the promise. For example, we can create a promise that resolves after 100 ms like this:
new Promise<void>((resolve, reject) => {
  setTimeout(() => resolve(), 100);
});
We could also create a promise that rejects after 100 ms:
new Promise<void>((resolve, reject) => {
  setTimeout(() => reject(), 100);
});
Promises can be chained into callback functions of their own using then and catch. The callback function given to then will fire only once the promise is fulfilled and the callback function given to catch will only fire if the promise is rejected. Most libraries that return promises will automatically call resolve and reject, so we only need to provide then and catch. Here's an example using the Fetch API:
fetch("https://my-server.com/my-resource")
  .then(value => console.log(value))
  .catch(error => console.error(error));
This code will make a call to our backend server and log out the result. If the call fails, it'll log that too.
If this were a real application, we might have a couple of functions, showData and handleError, that could manage what our application does with the response from the server. In that case, the use of fetch would likely be something like this:
fetch("https://my-server.com/my-resource")
  .then(data => showData(data))
  .catch(error => handleError(error));
Using promises like this shows how we can decouple our asynchronous processes from business logic and display elements.
A pending promise is one that has yet to complete its work. It's simple to create a promise that is forever stuck in a pending state:
const pendingPromise = new Promise((resolve, reject) => {});
console.log(pendingPromise);
This promise will never do anything, as neither resolve nor reject is ever called, so it remains in the pending state forever. If we execute this code, it'll print out Promise { <pending> }. As noted above, we do not query the state of a promise but rather provide a callback for its eventual settlement. The sample code above contains a promise that can never settle, so there's no practical use for code like this.
We can create a promise that is fulfilled immediately:
const fulfilledPromise = new Promise(resolve => {
  resolve("fulfilled!");
});
console.log(fulfilledPromise);
This will log out Promise { 'fulfilled!' }.
Unlike the pending state, creating a promise that resolves immediately has a few more practical use cases. The primary use of an immediately resolved promise would be when working with an API that expects a promise.
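For instance, a caching layer whose interface must always return a promise can hand back an already-fulfilled one on a cache hit. This sketch uses Promise.resolve(), the standard shorthand for creating a fulfilled promise; the cache itself is a hypothetical example:

```typescript
const cache = new Map<string, string>();
cache.set("greeting", "hello");

// The caller always receives a promise, whether or not async work was needed.
const getValue = (key: string): Promise<string> => {
  const hit = cache.get(key);
  if (hit !== undefined) {
    return Promise.resolve(hit); // immediately fulfilled, no timer needed
  }
  return new Promise(resolve =>
    setTimeout(() => resolve(`loaded:${key}`), 100) // simulated slow lookup
  );
};

getValue("greeting").then(value => console.log(value)); // logs "hello"
```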
Similarly, we can create a promise that is rejected immediately:
const rejectedPromise = new Promise((resolve, reject) => {
  reject("rejected!");
});
console.log(rejectedPromise);
This will log out Promise { <rejected> 'rejected!' } and then throw an unhandled promise rejection warning. Rejected promises always need to be caught. Failure to catch a promise rejection may cause our program to crash!
A primary use case for an immediately rejected promise is writing unit tests, but there are also cases in which some process throws an error during an asynchronous workflow and it makes sense to return a rejected promise. This circumstance is most likely when working with a third-party library whose API isn't quite to our liking and we need to wrap it with something more in line with the rest of our application architecture.
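A sketch of that wrapping scenario: legacyLookup here stands in for a hypothetical third-party function that throws synchronously, and our wrapper converts the throw into a rejected promise so callers have a single .catch() path:

```typescript
// Hypothetical third-party function that throws synchronously on bad input.
const legacyLookup = (id: number): string => {
  if (id < 0) throw new Error("negative id");
  return `record-${id}`;
};

// Our wrapper converts synchronous throws into promise rejections.
const lookup = (id: number): Promise<string> => {
  try {
    return Promise.resolve(legacyLookup(id));
  } catch (err) {
    return Promise.reject(err); // callers handle this with .catch()
  }
};

lookup(-1).catch(err => console.error((err as Error).message)); // "negative id"
```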
One of the main advantages of promises over callbacks is the ability to chain promises together. Consider a function that waits 1 second, generates a random number between 0 and 99, and adds it to the value passed in. There are better ways to structure this kind of repeated work, but the example is meant to simulate a website making several dependent calls to a backend:
Example01.ts
const getTheValue = async (val: number, cb: Function) => {
  setTimeout(() => {
    const number = Math.floor(Math.random() * 100) + val;
    console.log(`The value is ${number}`);
    cb(number);
  }, 1000);
};

getTheValue(0, (output: number) => {
  getTheValue(output, (output: number) => {
    getTheValue(output, (output: number) => {
      getTheValue(output, (output: number) => {
        getTheValue(output, (output: number) => {
          getTheValue(output, (output: number) => {
            getTheValue(output, (output: number) => {
              getTheValue(output, (output: number) => {
                getTheValue(output, (output: number) => {
                  getTheValue(output, () => {});
                });
              });
            });
          });
        });
      });
    });
  });
});
Link to the example: https://packt.link/VHZJc
A sample output of this program is the following:
The value is 49
The value is 133
The value is 206
The value is 302
The value is 395
The value is 444
The value is 469
The value is 485
The value is 528
The value is 615
Each time we call getTheValue, we wait 1 second, then generate a random number and add it to the value we passed in. In a real-world scenario, we can think of this as a program that completes several asynchronous tasks, using the output from the last one as input to the next.
Note
As the starting point of the program is a random number, your output would be different from the one presented above.
Everything in the previous program works correctly; however, the callback nesting isn't very nice to look at and could be challenging to maintain or debug. The next exercise will teach you how you can write more readable and maintainable code using promises.
In this exercise, we will refactor the preceding example and chain promises to eliminate nesting and make the code more readable:
Note
The code file for this exercise can be found here: https://packt.link/IO8Pz.
const getTheValue = async (val: number): Promise<number> => {
  return new Promise(resolve => {
    setTimeout(() => {
      const number = Math.floor(Math.random() * 100) + val;
      console.log(`The value is ${number}`);
      resolve(number);
    }, 1000);
  });
};

getTheValue(0)
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result));
The nesting is gone and the code is a lot more readable. Our getTheValue function now returns a promise instead of using a callback. Because it returns a promise, we can call .then() on the promise, which can be chained into another promise call.
The value is 50
The value is 140
The value is 203
The value is 234
The value is 255
The value is 300
The value is 355
The value is 395
The value is 432
The value is 451
Note that you will get an output that is different from the one shown above because the program uses a random number as the starting point.
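It's worth noting why the chain stays flat: then() always returns a new promise. If its callback returns a plain value, the next then() receives that value directly; if it returns another promise (as getTheValue does), the chain waits for it and unwraps the result. A small sketch:

```typescript
Promise.resolve(1)
  .then(n => n + 1)                   // plain value: the next step receives 2
  .then(n => Promise.resolve(n * 10)) // promise: awaited and unwrapped to 20
  .then(n => console.log(n));         // logs 20
```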
Chaining can also be a big help when it comes to error conditions. If our getTheValue function rejects the promise, we're able to catch the error by chaining a single catch to the end of the chain:
Example02.ts
const getTheValue = async (val: number): Promise<number> => {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      const number = Math.floor(Math.random() * 100) + val;
      if (number % 10 === 0) {
        reject("Bad modulus!");
      } else {
        console.log(`The value is ${number}`);
        resolve(number);
      }
    }, 1000);
  });
};

getTheValue(0)
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .then((result: number) => getTheValue(result))
  .catch(err => console.error(err));
Link to the example: https://packt.link/sBTgk
We are introducing a 10% chance of rejection on each call (the chance that our number, when divided by 10, has a remainder of 0). With ten calls in the chain, the probability that at least one rejects is 1 - 0.9^10, or about 65%, so on average our program will now fail more often than it completes successfully:
The value is 25
The value is 63
The value is 111
Bad modulus!
In addition to then and catch methods, the Promise object also exposes a finally method. This is a callback function that will be called regardless of whether an error is thrown or caught. It's great for logging, closing a database connection, or simply cleaning up resources, regardless of how the promise is eventually resolved.
We can add a finally callback to the above promise:
Example03.ts
const getTheValue = async (val: number) => {
  return new Promise<number>((resolve, reject) => {
    setTimeout(() => {
      const number = Math.floor(Math.random() * 100) + val;
      if (number % 10 === 0) {
        reject("Bad modulus!");
      } else {
        console.log(`The value is ${number}`);
        resolve(number);
      }
    }, 1000);
  });
};

getTheValue(0)
  .then(result => getTheValue(result))
  .then(result => getTheValue(result))
  .then(result => getTheValue(result))
  .then(result => getTheValue(result))
  .then(result => getTheValue(result))
  .then(result => getTheValue(result))
  .then(result => getTheValue(result))
  .then(result => getTheValue(result))
  .then(result => getTheValue(result))
  .catch(err => console.error(err))
  .finally(() => console.log("We are done!"));
Link to the example: https://packt.link/izqwS
Now "We are done!" will be logged regardless of whether or not we trip the "Bad modulus!" error condition:
The value is 69
The value is 99
Bad modulus!
We are done!
Promise.all is one of the most useful utility methods that Promise has to offer. Even code written with async/await syntax (see Chapter 13, Async/Await) can make good use of Promise.all. This method takes an iterable (most often an array) of promises and returns a single promise that fulfills with an array of all the resolved values once every promise has fulfilled, or rejects as soon as any one of them rejects. Let's see how we can change our example promise using Promise.all:
Example04.ts
const getTheValue = async (val: number = 0) => {
  return new Promise<number>((resolve, reject) => {
    setTimeout(() => {
      const number = Math.floor(Math.random() * 100) + val;
      if (number % 10 === 0) {
        reject("Bad modulus!");
      } else {
        console.log(`The value is ${number}`);
        resolve(number);
      }
    }, 1000);
  });
};

Promise.all([
  getTheValue(),
  getTheValue(),
  getTheValue(),
  getTheValue(),
  getTheValue(),
  getTheValue(),
  getTheValue(),
  getTheValue(),
  getTheValue(),
  getTheValue()
])
  .then(values =>
    console.log(
      `The total is ${values.reduce((prev, current) => prev + current, 0)}`
    )
  )
  .catch(err => console.error(err))
  .finally(() => console.log("We are done!"));
Link to the example: https://packt.link/8pzx4
The output should be similar to that of the preceding examples. In this example, we call the same function 10 times, but imagine these are 10 different API calls we need to make before summing the total. Each call takes approximately 1 second. If we chained a series of promises, this operation would take just over 10 seconds. By using Promise.all, we are able to run those operations in parallel, so the whole set now completes in about 1 second.
Promise.all is useful any time you can run two or more asynchronous processes in parallel. It can be useful for persisting data to multiple database tables, letting multiple independent components render in a web browser independently, or making multiple HTTP requests. A good example of making multiple HTTP requests in parallel would be a service that monitors the uptime and ping duration of other services. There's no reason such an operation would need to be synchronous and Promise.all lets us wait on several web requests within the same process.
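Here's a sketch of the monitoring idea with a hypothetical ping() helper; a real version would issue HTTP requests with fetch or a similar client, but the concurrency pattern is the same:

```typescript
// Hypothetical ping: resolves with how long the simulated "request" took.
const ping = (host: string, ms: number): Promise<{ host: string; ms: number }> =>
  new Promise(resolve => setTimeout(() => resolve({ host, ms }), ms));

// All three pings run concurrently; total wall time is roughly the slowest one,
// not the sum of all three.
Promise.all([
  ping("api.example.com", 120),
  ping("auth.example.com", 80),
  ping("cdn.example.com", 200),
]).then(results => {
  for (const { host, ms } of results) {
    console.log(`${host} responded in ${ms} ms`);
  }
});
```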
In this exercise, instead of repeating the same function call 10 times, let's optimize the programs from the previous examples to be more DRY (don't repeat yourself). We can load up an array of promises and then use Promise.all to resolve all the promises in parallel and use catch and finally to resolve errors and ensure we return some output:
Note
The code file for this exercise can also be found here: https://packt.link/KNpqx.
const getTheValue = async (val: number = 0) => {
  return new Promise<number>((resolve, reject) => {
    setTimeout(() => {
      const number = Math.floor(Math.random() * 100) + val;
      if (number % 10 === 0) {
        reject('Bad modulus!');
      } else {
        console.log(`The value is ${number}`);
        resolve(number);
      }
    }, 1000);
  });
};

Promise.all([
  getTheValue(),
  getTheValue(),
  getTheValue(),
  getTheValue(),
  getTheValue(),
  getTheValue(),
  getTheValue(),
  getTheValue(),
  getTheValue(),
  getTheValue(),
])
  .then((values) =>
    console.log(
      `The total is ${values.reduce((prev, current) => prev + current, 0)}`
    )
  )
  .catch((err) => console.error(err))
  .finally(() => console.log('We are done!'));
In order to catch errors and make the program recursive, we'll need to wrap Promise.all in a function. Recursion is a pattern in which a function calls itself, directly or indirectly, so the same logic can run repeatedly.
const doIt = () => {
  Promise.all([
    getTheValue(),
    getTheValue(),
    getTheValue(),
    getTheValue(),
    getTheValue(),
    getTheValue(),
    getTheValue(),
    getTheValue(),
    getTheValue(),
    getTheValue(),
  ])
    .then((values) =>
      console.log(
        `The total is ${values.reduce((prev, current) => prev + current, 0)}`
      )
    )
    .catch((err) => console.error(err))
    .finally(() => console.log('We are done!'));
};
We can use some functional programming techniques to, rather than having an array in which getTheValue() is repeated 10 times, programmatically construct an array of 10 elements, all of which are that function call. Doing this won't change how our program operates, but it will make it a bit nicer to work with.
Promise.all(
  Array(10)
    .fill(null)
    .map(() => getTheValue())
)
The logic here is that Array(10) creates a sparse array of 10 elements and fill(null) initializes each slot; this matters because map skips the holes in an uninitialized array. map then replaces each element with the result of calling getTheValue(), so the array handed to Promise.all contains 10 pending promises.
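Array.from offers an equivalent one-step construction, since its second argument is a map function applied to each index. This is just an alternative sketch of the same idea, shown with simple resolved values so the result is easy to inspect:

```typescript
// Array.from's second argument is a map function, so the fill/map pair
// collapses into a single call.
const promises = Array.from({ length: 10 }, (_, i) => Promise.resolve(i));

Promise.all(promises).then(values => console.log(values.length)); // 10
```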
Now we want to use recursion in the case of an error. We will change our catch() callback from simply logging the error to starting the process over again. In this case, our business rule is we want the entire set of calculations to complete and we will restart if there is an error. The code to do this is very easy as catch() expects a function as its callback so we can just pass our doIt function back to it again.
.catch(doIt)
Note that we do not invoke the callback function here. We want to pass a function and it will be invoked in the case of an error.
const getTheValue = async (val: number = 0) => {
  return new Promise<number>((resolve, reject) => {
    setTimeout(() => {
      const number = Math.floor(Math.random() * 100) + val;
      if (number % 10 === 0) {
        reject('Bad modulus!');
      } else {
        // console.log(`The value is ${number}`);
        resolve(number);
      }
    }, 1000);
  });
};

let loopCount = 0;

const doIt = () => {
  Promise.all(
    Array(10)
      .fill(null)
      .map(() => getTheValue())
  )
    .then((values) =>
      console.log(
        `The total is ${values.reduce((prev, current) => prev + current, 0)}`
      )
    )
    .catch(doIt)
    .finally(() => console.log(`completed loop ${++loopCount}`));
};

doIt();
When we run the program, we'll see a few iterations of the program looping. The output may be something like this:
completed loop 1
The total is 438
completed loop 2
Note that depending on the number of iterations, you might get an output different from the one shown above.
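One caveat with .catch(doIt) is that it retries forever if failures keep occurring. A bounded variation, sketched here with a hypothetical flaky task and a retry counter (these names are illustrative, not from the exercise), gives up after a fixed number of attempts:

```typescript
// Hypothetical flaky task: fails until the third attempt.
let attempts = 0;
const flakyTask = (): Promise<string> =>
  ++attempts < 3
    ? Promise.reject(new Error("not yet"))
    : Promise.resolve("done");

// Retry on rejection, but only `retriesLeft` more times.
const doItWithLimit = (retriesLeft: number): Promise<string> =>
  flakyTask().catch(err => {
    if (retriesLeft <= 0) throw err;       // give up: the rejection propagates
    return doItWithLimit(retriesLeft - 1); // recurse with one fewer retry
  });

doItWithLimit(5).then(result => console.log(result)); // "done" on the third attempt
```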
Promise.allSettled is a variation on Promise.all that is ideal when it's acceptable for some of our promises to be fulfilled and some to be rejected. Let's see how it differs from Promise.all:
const getTheValue = async (val: number = 0) => {
  return new Promise<number>((resolve, reject) => {
    setTimeout(() => {
      const number = Math.floor(Math.random() * 100) + val;
      // Arbitrary error condition - if the random number is divisible by 10.
      if (number % 10 === 0) {
        reject("Bad modulus!");
      } else {
        console.log(`The value is ${number}`);
        resolve(number);
      }
    }, 1000);
  });
};

const generateTheNumber = (iterations: number): void => {
  Promise.allSettled(
    // Produces an array of `iterations` length with the pending promises of `getTheValue()`.
    Array(iterations)
      .fill(null)
      .map(() => getTheValue())
  )
    .then((settledResults) => {
      // Map all the results into the failed, succeeded and total values.
      const results = settledResults.reduce(
        (prev, current) => {
          return current.status === "fulfilled"
            ? {
                ...prev,
                succeeded: prev.succeeded + 1,
                total: prev.total + current.value,
              }
            : { ...prev, failed: prev.failed + 1 };
        },
        {
          failed: 0,
          succeeded: 0,
          total: 0,
        }
      );
      console.log(results);
    })
    .finally(() => console.log("We are done!"));
};

generateTheNumber(10);
With each settled result logged as the reducer processes it, the program can generate output like this:
current { status: 'fulfilled', value: 85 }
current { status: 'fulfilled', value: 25 }
current { status: 'fulfilled', value: 11 }
current { status: 'fulfilled', value: 43 }
current { status: 'rejected', reason: 'Bad modulus!' }
current { status: 'fulfilled', value: 41 }
current { status: 'fulfilled', value: 81 }
current { status: 'rejected', reason: 'Bad modulus!' }
current { status: 'rejected', reason: 'Bad modulus!' }
current { status: 'fulfilled', value: 7 }
{ failed: 3, succeeded: 7, total: 293 }
We are done!
We've made a couple of enhancements here. For one thing, we are now passing the array size into generateTheNumber, which makes the program easier to vary. The main improvement is the use of Promise.allSettled, which allows a mix of successes and failures, unlike Promise.all, which calls the then() callback only if every promise fulfills and the catch() callback if any of them rejects. The raw output of Promise.allSettled could look something like this:
settledResults [
{ status: 'fulfilled', value: 85 },
{ status: 'fulfilled', value: 25 },
{ status: 'fulfilled', value: 11 },
{ status: 'fulfilled', value: 43 },
{ status: 'rejected', reason: 'Bad modulus!' },
{ status: 'fulfilled', value: 41 },
{ status: 'fulfilled', value: 81 },
{ status: 'rejected', reason: 'Bad modulus!' },
{ status: 'rejected', reason: 'Bad modulus!' },
{ status: 'fulfilled', value: 7 }
]
Each of the resolved promises will have a status containing the string 'fulfilled' if the promise resolved successfully or 'rejected' if there was an error. Fulfilled promises will have a value property containing the value the promise resolved to and rejected promises will have a reason property containing the error.
In the example given, we are totaling the rejected promises and summing the values of the fulfilled promises, then returning that as a new object. To perform this operation, we use the built-in array function reduce(). Now, reduce() will iterate over each element of an array and collect transformed results in an accumulator, which is returned by the function. MapReduce functions are common in functional programming paradigms.
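TypeScript models these two shapes as a discriminated union, so checking status narrows each element to the fulfilled or rejected variant. A small sketch of partitioning settled results this way (partition is a hypothetical helper name, and this assumes a compilation target that includes the allSettled typings):

```typescript
const partition = (results: PromiseSettledResult<number>[]) => {
  const values: number[] = [];
  const errors: unknown[] = [];
  for (const result of results) {
    if (result.status === "fulfilled") {
      values.push(result.value);  // narrowed: .value is available here
    } else {
      errors.push(result.reason); // narrowed: .reason is available here
    }
  }
  return { values, errors };
};

Promise.allSettled([Promise.resolve(1), Promise.reject(new Error("nope"))])
  .then(settled => console.log(partition(settled)));
```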
Note that Promise.allSettled is a fairly recent addition to ECMAScript, having landed in Node.js 12.9. In order to use it, you'll need to set your compilerOptions target to es2020 or esnext in your tsconfig.json file. Most modern browsers support this method, but it's a good idea to verify support before using this recent feature.
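For reference, a minimal tsconfig.json fragment that enables these typings might look like the following (a sketch; merge it with your project's existing options):

```json
{
  "compilerOptions": {
    "target": "es2020",
    "strict": true
  }
}
```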
We've seen an example of using Promise.allSettled to produce a mixed result of fulfilled and rejected promises. Now let's combine Promise.allSettled and Promise.all to aggregate multiple results of our runs of getTheValue():
Note
The code file for this exercise can also be found here: https://packt.link/D8jIQ.
Promise.all(
  Array(3)
    .fill(null)
    .map(() => generateTheNumber(10))
);

Promise.all(
  Array(3)
    .fill(null)
    .map(() => generateTheNumber(10))
).then((result) => console.log(result));
We log out [undefined, undefined, undefined]. That's not what we wanted. The reason for this is generateTheNumber doesn't actually return its promise – it didn't need to in the prior example.
const generateTheNumber = (iterations: number) => {
  return Promise.allSettled(
    Array(iterations)
      .fill(null)
      .map(() => getTheValue())
  )
    .then((settledResults) => {
      const results = settledResults.reduce(
        (prev, current) => {
          return current.status === 'fulfilled'
            ? {
                ...prev,
                succeeded: prev.succeeded + 1,
                total: prev.total + current.value,
              }
            : { ...prev, failed: prev.failed + 1 };
        },
        {
          failed: 0,
          succeeded: 0,
          total: 0,
        }
      );
      return results;
    })
    .finally(() => console.log('Iteration done!'));
};
With that done, we can get our output:
[
{ failed: 0, succeeded: 10, total: 443 },
{ failed: 1, succeeded: 9, total: 424 },
{ failed: 2, succeeded: 8, total: 413 },
]
const totals = results.map((r) => r.total).sort((a, b) => a - b);
console.log(`The highest total is ${totals[totals.length - 1]}.`);
console.log(`The lowest total is ${totals[0]}.`);
You might get an output similar to the following:
The value is 62
The value is 77
The value is 75
The value is 61
The value is 61
The value is 61
The value is 15
The value is 83
The value is 4
The value is 23
Iteration done!
...
The highest total is 522.
The lowest total is 401.
Note that only a section of the actual output is displayed for ease of presentation.
This exercise showed us how we can filter and sort the results of many promises and create data structures that accurately reflect the state of our application.
At the other end of the spectrum from Promise.allSettled lies Promise.any. This method takes an iterable (or array) of promises, but instead of settling all of them, it will resolve to the value of the first promise that resolves successfully. Promise.any is so new it has yet to be implemented in every browser and at the time of writing is not available in the LTS version of Node.js. You should check compatibility and availability before using it.
Promise.race has been around for some time and is similar to Promise.any. It also takes an iterable of promises and executes them all, but the first promise to settle, whether it fulfills or rejects, settles the race. This is in contrast to Promise.any: if the first promise given to Promise.any rejects, the other promises still have an opportunity to fulfill:
const oneSecond = new Promise((_resolve, reject) => {
  setTimeout(() => reject("Too slow!"), 1000);
});

const upToTwoSeconds = new Promise(resolve => {
  setTimeout(() => resolve("Made it!"), Math.random() * 2000);
});

Promise.race([oneSecond, upToTwoSeconds])
  .then(result => console.log(result))
  .catch(err => console.error(err));
In this example, one promise always rejects after 1 second while the other resolves at a random interval between 0 and 2 seconds. If the oneSecond promise wins the race, the entire race is rejected. If upToTwoSeconds takes less than a second, the race resolves successfully with the message "Made it!".
A practical example of using Promise.race might be a timeout and fallback feature where if the primary web service can't respond within an expected amount of time, the application either switches to a secondary source for data or exhibits some other behavior. Or perhaps we want to deal with a slow render issue in a web browser where if a screen paint hasn't finished in the expected amount of time, we switch to a simpler view. There are lots of cases where Promise.race can ease the complexity of handling asynchronous operations in TypeScript.
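The timeout-and-fallback pattern can be sketched as a small reusable helper; withTimeout and slowFetch are hypothetical names for illustration, not standard APIs:

```typescript
// Reject if `promise` hasn't settled within `ms` milliseconds.
const withTimeout = <T>(promise: Promise<T>, ms: number): Promise<T> =>
  Promise.race([
    promise,
    new Promise<T>((_resolve, reject) =>
      setTimeout(() => reject(new Error(`timed out after ${ms} ms`)), ms)
    ),
  ]);

// Hypothetical slow data source.
const slowFetch = (): Promise<string> =>
  new Promise(resolve => setTimeout(() => resolve("primary data"), 2000));

withTimeout(slowFetch(), 500)
  .then(data => console.log(data))
  .catch(() => console.log("falling back to cached data")); // timeout wins here
```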
The example we're working with so far specifies the type of input to the promise, but we have to provide a type for the result in each step of the chain. That's because TypeScript doesn't know what the promise may resolve to so we have to tell it what kind of type we're getting as the result.
In other words, we're missing out on one of TypeScript's most powerful features: type inference. Type inference is the ability for TypeScript to know what the type of something should be without having to be told. A very simple example of type inference would be the following:
const hello = "hello";
No type is specified. This is because TypeScript understands that the variable hello is being assigned a string and cannot be reassigned. If we try to pass this variable as an argument to a function that expects another type, we will get a compilation error, even though we never specified the type. Let's apply type inference to promises.
First, let's look at the type definition for the Promise object:
new <T>(executor: (resolve: (value?: T | PromiseLike<T>) => void, reject: (reason?: any) => void) => void): Promise<T>;
T is what's known as a generic. It means any type can be specified to take the place of T. Let's say we define a promise like this:
new Promise(resolve => {
resolve("This resolves!");
});
What we're doing here is stating the resolve argument will resolve to an unknown type. The receiving code will need to provide a type for it. This can be improved by adding a type value for T:
new Promise<string>(resolve => {
resolve("This resolves!");
});
Now the promise constructor resolves to a type of Promise<string>. When the promise becomes fulfilled, it is expected to return a type of string.
Let's examine an example where casting the return type of a promise becomes important:
const getPromise = async () => new Promise(resolve => resolve(Math.ceil(Math.random() * 100)));
const printResult = (result: number) => console.log(result);
getPromise().then(result => printResult(result));
If you put this example into an IDE such as VS Code, you'll see a type error on the result parameter given to printResult. The type of the promise returned by getPromise is unknown, but printResult expects number. We can fix this problem by providing a type to the promise when we declare it:
const getPromise = async () => new Promise<number>(resolve => resolve(Math.ceil(Math.random() * 100)));
const printResult = (result: number) => console.log(result);
getPromise().then(result => printResult(result));
We have added <number> immediately after our promise declaration and TypeScript knows this promise is expected to resolve to a number. This type-checking will also be applied to the resolution of our promise. For example, if we tried to resolve to a value of "Hello!", we'd get another type error now that our promise is expected to return a number.
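A quick sketch of that second error:

```typescript
const getNumber = new Promise<number>((resolve) => {
  // resolve("Hello!"); // error TS2345: 'string' is not assignable to 'number'
  resolve(Math.ceil(Math.random() * 100)); // fine: matches Promise<number>
});

getNumber.then((result) => console.log(result + 1)); // result is inferred as number
```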
In this exercise, we'll create a simple website with synchronous rendering and refactor it so the rendering is asynchronous:
Note
The code file for this exercise can also be found here: https://packt.link/q8rka.
npm i
We just installed TypeScript into our project as well as http-server, which is a simple Node.js HTTP server that will allow us to run our website on localhost.
Now we'll add a few files to get the project started.
<html>
<head>
<title>The TypeScript Workshop - Exercise 12.03</title>
<link href="styles.css" rel="stylesheet">
</head>
<body>
<div id="my-data"></div>
</body>
<script type="module" src="data-loader.js"></script>
</html>
body {
font-family: Arial, Helvetica, sans-serif;
font-size: 12px;
}
input {
width: 200px;
}
{ "message": "Hello Promise!" }
const updateUI = (message: any): void => {
const item = document.getElementById("my-data");
if (item) {
item.innerText = `Here is your data: ${message}`;
}
};
const message = fetch("http://localhost:8080/data.json");
updateUI(message);
That's all you need to run a local service with a TypeScript web application! Later in the book, we'll see some more robust solutions, but for now, this will let us focus on the TypeScript without too many bells and whistles.
npx tsc -w data-loader.ts
npx http-server . -c-1
"Here is your data: [object Promise]".
Something hasn't worked correctly. What we want to see is "Here is your data: Hello Promise!". If we go and look at the TypeScript code, we'll see this line:
const message = fetch("http://localhost:8080/data.json");
This isn't working correctly. fetch is an asynchronous request. We are just seeing the unresolved promise and printing it to the screen.
Another warning sign is the use of any in the updateUI function. Why is the any type being used there when it should be a string? That's because TypeScript won't allow us to use a string. TypeScript knows we're calling updateUI with an unresolved promise and so we'll get a type error if we try to treat that as a string type. New developers sometimes think they are fixing a problem by using any, but more often than not they will be ignoring valid errors.
In order to get this code to work correctly, you will need to refactor it so that the promise fetch returns is resolved. When it works correctly, fetch returns a response object that exposes a json method that also returns a promise, so you will need to resolve two promises in order to display the data on your page.
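The shape of that two-step resolution can be sketched with a stand-in for fetch (fakeFetch here is hypothetical, so the sketch runs anywhere):

```typescript
// fakeFetch is a hypothetical stand-in for the browser's fetch: it resolves
// to a response-like object whose json() returns a second promise.
const fakeFetch = (url: string) =>
  Promise.resolve({
    json: () => Promise.resolve({ message: "Hello Promise!" }),
  });

fakeFetch("http://localhost:8080/data.json")
  .then((response) => response.json()) // first promise: the response itself
  .then((data) => console.log(`Here is your data: ${data.message}`)); // second: the body
```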
Note
The fetch library is a web API for browsers that is a great improvement on the original XMLHttpRequest specification. It retains all the power of XMLHttpRequest but the API is much more ergonomic and as such is used by many web applications, rather than installing a third-party client library. fetch is not implemented in Node.js natively but there are some libraries that provide the same functionality. We'll take a look at those later in the chapter.
As stated previously, promises became part of the ECMAScript standard in 2015. Up until that point, developers used libraries such as Q or Bluebird to fill the gap in the language. While many developers choose to use native promises, these libraries remain quite popular. That said, we should carefully consider whether it's a good idea to depend on a third-party library instead of a native language feature. Unless one of these libraries provides some critical functionality we can't do without, we should prefer native features. Third-party libraries can introduce bugs, complexity, and security vulnerabilities, and they require extra effort to maintain. This isn't an indictment of open source.
Open source projects (such as TypeScript) are an essential part of today's developer ecosystem. That said, it's still a good idea to carefully choose our dependencies and make sure they are well-maintained libraries that are not redundant with native features.
It's also worth noting that the APIs of third-party libraries may differ from the native language feature. For example, the Q library borrows a deferred object from the jQuery implementation:
import * as Q from "q";
const deferred = Q.defer();
deferred.resolve(123);
deferred.promise.then(val => console.log(val));
Written with a native promise, this looks more like the examples we've seen so far:
const p = new Promise<number>((resolve, reject) => {
resolve(123);
});
p.then(val => console.log(val));
There's nothing inherently wrong with the Q implementation here, but it's non-standard and this may make our code less readable to other developers or prevent us from learning standard best practices.
Bluebird is more similar to the native promise. In fact, it could be used as a polyfill.
TypeScript will transpile code, but it will not polyfill native language features that are not present in your target environment. This is critical to understand to avoid frustration and mysterious bugs. What TypeScript will do for us is allow us to specify the target environment. Let's look at a simple example.
Consider the following tsconfig.json file:
{
"compilerOptions": {
"target": "es6",
"module": "commonjs",
"outDir": "./public",
"strict": true,
"esModuleInterop": true,
"forceConsistentCasingInFileNames": true
}
}
Now consider this module in promise.ts:
const p = new Promise<number>((resolve, reject) => {
resolve(123);
});
p.then(val => console.log(val));
Our code will transpile fine. We enter npx tsc and the transpiled JavaScript output looks very much like our TypeScript code. The only difference is the type has been removed:
const p = new Promise((resolve, reject) => {
resolve(123);
});
p.then(val => console.log(val));
However, consider if we change the target to es5:
{
"compilerOptions": {
"target": "es5",
"module": "commonjs",
"outDir": "./public",
"strict": true,
"esModuleInterop": true,
"forceConsistentCasingInFileNames": true
}
}
Now the project will no longer build:
% npx tsc
src/promise.ts:1:15 - error TS2585: 'Promise' only refers to a type, but is being used as a value here. Do you need to change your target library? Try changing the `lib` compiler option to es2015 or later.
1 const p = new Promise<number>((resolve, reject) => {
~~~~~~~
Found 1 error.
TypeScript even warns me that I might want to fix my target. Note that "es2015" and "es6" are the same thing (as are "es2016" and "es7", and so on). This is a somewhat confusing convention that we simply need to get used to.
This will be fine if I can build my project for an es6+ environment (such as a current version of Node.js or any modern browser), but if I need to support a legacy browser or a very old version of Node.js, then "fixing" this by setting the compilation target higher will only result in a broken application. We'll need to use a polyfill.
In this case, Bluebird can be a really good choice as it has an API very similar to native promises. In fact, all I will need to do is npm install bluebird and then import the library into my module. The Bluebird library does not include typings so to have full IDE support, you'd need to also install @types/bluebird as a devDependency:
import { Promise } from "bluebird";
const p = new Promise<number>(resolve => {
resolve(123);
});
p.then(val => console.log(val));
My transpiled code will now run in a very early version of Node.js, such as version 0.10 (released in 2013).
Note that Bluebird is designed to be a full-featured Promise library. If I'm just looking for a polyfill, I might prefer to use something like es6-promise. Its use is exactly the same. I npm install es6-promise and then import the Promise class into my module:
import { Promise } from "es6-promise";
const p = new Promise<number>(resolve => {
resolve(123);
});
p.then(val => console.log(val));
If you want to try this yourself, be aware that modern versions of TypeScript won't even run on Node.js 0.10! You'll have to transpile your code in a recent version (such as Node.js 12) and then switch to Node.js 0.10 to execute the code. To do this, it's a good idea to use a version manager such as nvm or n.
This is actually a great example of the power of TypeScript. We can write and build our code on a modern version but target a legacy runtime. Setting the compilation target will make sure we build code that is suitable for that runtime.
Promisification is the practice of taking an asynchronous function that expects a callback and turning it into a function that returns a promise. This is essentially a convenience utility that lets you always write in promises instead of using the callbacks of a legacy API. It can be really helpful to promisify legacy APIs so that all our code uses promises uniformly and stays easy to read. But it's more than a convenience: some modern APIs will only accept promises as parameters, and if all we had was callback-based code, we would have to wrap it in promises manually. Promisification saves us the trouble and potentially many lines of code.
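Before reaching for a library, it's worth seeing the wrapping that promisification performs. A minimal hand-rolled sketch for a single error-first function might look like this (asyncDouble is a hypothetical example; real utilities handle variadic arguments and many edge cases):

```typescript
// A hypothetical error-first callback function:
const asyncDouble = (n: number, cb: (err: Error | null, result?: number) => void) => {
  if (Number.isNaN(n)) {
    cb(new Error("Not a number!"));
  } else {
    cb(null, n * 2);
  }
};

// The wrapper: reject on error, resolve on success.
const promisifiedDouble = (n: number): Promise<number> =>
  new Promise((resolve, reject) => {
    asyncDouble(n, (err, result) => (err ? reject(err) : resolve(result as number)));
  });

promisifiedDouble(21).then((result) => console.log(result)); // logs 42
```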
Let's work through an example of promisifying a function that expects a callback. We have a few options to choose from. Bluebird again provides this functionality with Promise.promisify. This time, we'll try a polyfill, es6-promisify. Let's start with a function that expects a callback:
const asyncAdder = (n1: number, n2: number, cb: Function) => {
if (n1 === n2) {
cb(Error("Use doubler instead!"));
} else {
cb(null, n1 + n2);
}
};
asyncAdder(3, 4, (err: Error, sum: number) => {
if (err) {
throw err;
}
console.log(sum);
});
Functions that can be promisified follow a convention where the first argument into the callback is an error object. If the error is null or undefined, then the function is considered to have been invoked successfully. Here, I am calling asyncAdder, giving it two numbers and a callback function. My callback understands that asyncAdder will have an error in the first argument position if an error was thrown or the sum of the two numbers in the second argument position if it was successful. By adhering to this pattern, the function can be promisified. First, we npm install es6-promisify and then we import the module:
import { promisify } from "es6-promisify";
const asyncAdder = (n1: number, n2: number, cb: Function) => {
if (n1 === n2) {
cb(Error("Use doubler instead!"));
} else {
cb(null, n1 + n2);
}
};
const promiseAdder = promisify(asyncAdder);
promiseAdder(3, 4)
.then((val: number) => console.log(val))
.catch((err: Error) => console.log(err));
We use the promisify import to wrap our function and now we can work exclusively with promises.
Bluebird gives us exactly the same functionality:
import { promisify } from "bluebird";
const asyncAdder = (n1: number, n2: number, cb: Function) => {
if (n1 === n2) {
cb(Error("Use doubler instead!"));
} else {
cb(null, n1 + n2);
}
};
const promiseAdder = promisify(asyncAdder);
promiseAdder(3, 4)
.then((val: number) => console.log(val))
.catch((err: Error) => console.log(err));
Node.js introduced its own version of promisify as a native feature in version 8 (2017). Instead of using es6-promise or Bluebird, if we are targeting a Node.js 8+ environment, we can leverage the util package. Note that since we are writing TypeScript, we will need to add the @types/node dependency to take advantage of this package. Otherwise, TypeScript will not understand our import. We'll run npm install -D @types/node. The -D flag will install the type as a devDependency, which means it can be excluded from production builds:
import { promisify } from "util";
const asyncAdder = (n1: number, n2: number, cb: Function) => {
if (n1 === n2) {
cb(Error("Use doubler instead!"));
} else {
cb(null, n1 + n2);
}
};
const promiseAdder = promisify(asyncAdder);
promiseAdder(3, 4)
.then((val: number) => console.log(val))
.catch((err: Error) => console.log(err));
Obviously, if we want our code to run in a browser, this won't work and we should use one of the other libraries, such as Bluebird, to enable this functionality.
As of Node.js 10 (released in 2018), the FileSystem API (fs) exposes a promise-based version of its functions alongside the original callback-based and blocking synchronous versions. Let's look at the same operation with all three alternatives.
Many Node.js developers have worked with this API. This method will read a file, taking the file path as the first argument and a callback as the second argument. The callback will receive one or two arguments, an error (should one occur) as the first argument and a data buffer object as the second argument, should the read be successful:
import { readFile } from "fs";
import { resolve } from "path";
const filePath = resolve(__dirname, "text.txt");
readFile(filePath, (err, data) => {
if (err) {
throw err;
}
console.log(data.toString());
});
We read the file and log out the contents asynchronously. Anyone who has worked with the Node.js fs library in the past has probably seen code that looks like this. The code is non-blocking, which means even if the file is very large and the read is very slow, it won't prevent the application from performing other operations in the meantime. There's nothing wrong with this code other than it's not as concise and modern as we might like.
In the example above, we're reading the file and logging to the console – not very useful, but in a real-world scenario, we might be reading a config file on startup, handling the documents of clients, or managing the lifecycle of web assets. There are many reasons you might need to access the local filesystem in a Node.js application.
The fs library also exposes a fully synchronous API, meaning its operations are blocking and the event loop won't progress until these operations are complete. Such blocking operations are more often used with command-line utilities where taking full advantage of the event loop isn't a priority and instead, simple, clean code is the priority. With this API, we can write some nice, concise code like this:
import { readFileSync } from "fs";
import { resolve } from "path";
const filePath = resolve(__dirname, "text.txt");
console.log(readFileSync(filePath).toString());
It could be tempting to write code like this and call it a day, but readFileSync is a blocking operation so we must beware. The main execution thread will actually be paused until this work is complete. This may still be appropriate for a command-line utility, but it could be a real disaster to put code like this in a web API.
The fs library exposes the promises API, which can give us the best of both worlds, asynchronous execution and concise code:
import { promises } from "fs";
import { resolve } from "path";
const filePath = resolve(__dirname, "text.txt");
promises.readFile(filePath).then(file => console.log(file.toString()));
Using the promises API lets us write nearly as concise code as the synchronous version, but now we are fully asynchronous, making the code suitable for a high-throughput web application or any other process where a blocking operation would be unacceptable.
In this exercise, you will use the fs promises API to concatenate two files into one. Whenever possible, make your code DRY (don't repeat yourself) by using functions. You'll need to use readFile and writeFile. The only dependencies needed for this program are ts-node (for execution), typescript, and @types/node so we have the types for the built-in fs and path libraries in Node.js:
Note
The code file for this exercise can also be found here: https://packt.link/M3MH3.
import { readFileSync, writeFileSync } from "fs";
import { resolve } from "path";
const file1 = readFileSync(resolve(__dirname, 'file1.txt'));
const file2 = readFileSync(resolve(__dirname, 'file2.txt'));
writeFileSync(resolve(__dirname, 'output.txt'), [file1, file2].join(' '));
The resolve function from the path library resolves paths on your filesystem and is often used alongside the fs library, as depicted above. Both these libraries are part of the Node.js standard library so we need only install typings, not the libraries themselves.
Text in file 1.
Text in file 2.
So this works without promises. And this is probably fine for a command-line utility executed by a single user on a single workstation. However, if this kind of code were put into a web server, we might start to see some problems. Synchronous filesystem calls block the event loop, and doing this in a production application can cause latency or failure.
import { readFile, writeFile } from 'fs';
import { resolve } from 'path';
readFile(resolve(__dirname, 'file1.txt'), (err, file1) => {
if (err) throw err;
readFile(resolve(__dirname, 'file2.txt'), (err, file2) => {
if (err) throw err;
writeFile(
resolve(__dirname, 'output.txt'),
[file1, file2].join(' '),
(err) => {
if (err) throw err;
}
);
});
});
We are now clear of blocking issues, but the code is looking quite ugly. It's not hard to imagine another developer failing to understand the intent of this code and introducing a bug. Additionally, by putting the second readFile as a callback in the first, we are making the function slower than it needs to be. In a perfect world, those calls can be made in parallel. To do that, we can leverage the promises API.
import { promises } from 'fs';
import { resolve } from 'path';
Promise.all([
promises.readFile(resolve(__dirname, 'file1.txt')),
promises.readFile(resolve(__dirname, 'file2.txt')),
]);
import { promises } from 'fs';
import { resolve } from 'path';
Promise.all([
promises.readFile(resolve(__dirname, 'file1.txt')),
promises.readFile(resolve(__dirname, 'file2.txt')),
]).then((files) => {
promises.writeFile(resolve(__dirname, 'output.txt'), files.join(' '));
});
Text in file 1.
Text in file 2.
Now that we have this working, we can certainly imagine much more complicated programs manipulating other types of files, such as a PDF merge function as a web service. Though some of the internals would be a lot more challenging to implement, the principles would be the same.
It is very common for Node.js applications to work with a backend database such as mysql or postgres. It is critical that queries against a database be made asynchronously. Production-grade Node.js web services may serve thousands of requests per second. If it were necessary to pause the main execution thread for queries made synchronously against a database, these services just wouldn't scale at all. Asynchronous execution is critical to making this work.
The process of negotiating a database connection, sending a SQL string, and parsing the response is complicated and not a native feature of Node.js and so we will almost always use a third-party library to manage this. These libraries are guaranteed to implement some kind of callback or promise pattern and we'll see it throughout their documentation and examples. Depending on the library you choose, you may have to implement a callback pattern, you may get to work with promises, or you may be presented with async/await (see Chapter 13 Async/Await). You may even get a choice of any of these as it's definitely possible to provide all of the above as options.
For these examples, we'll use sqlite. Now, sqlite is a nice library that implements a fairly standard SQL syntax and can operate against a static file as a database or even run in memory. We will use the in-memory option. This means that there is nothing that needs to be done to set up our database. But we will have to run a few scripts to create a table or two and populate it on startup. It would be fairly simple to adapt these exercises to work with mysql, postgres, or even mongodb. All of these databases can be installed on your workstation or run in a Docker container for local development.
For the first example, let's look at sqlite3. This library has an asynchronous API. Unlike more permanent and robust databases such as mysql or postgres, some sqlite client libraries are actually synchronous, but we won't be looking at those as they aren't very useful for demonstrating how promises work. So sqlite3 implements an asynchronous API, but it works entirely with callbacks. Here is an example of creating an in-memory database, adding a table, adding a row to that table, and then querying back the row we added:
import { Database } from "sqlite3";
const db = new Database(":memory:", err => {
if (err) {
console.error(err);
return db.close();
}
db.run("CREATE TABLE promise (id int, desc char);", err => {
if (err) {
console.error(err);
return db.close();
}
db.run(
"INSERT INTO promise VALUES (1, 'I will always lint my code.');",
() => {
db.all("SELECT * FROM promise;", (err, rows) => {
if (err) {
console.error(err);
return db.close();
}
console.log(rows);
db.close(err => {
if (err) {
return console.error(err);
}
});
});
}
);
});
});
This is exactly what developers mean when they complain about "callback hell." Again, this code executes perfectly well, but it is needlessly verbose, becomes deeply nested, and repeats itself, especially in the error-handling department. Of course, the code could be improved by adding abstractions and chaining together methods, but that doesn't change the fact that callbacks aren't a very modern way to think about writing Node.js code.
Since all of these callbacks follow the pattern of expecting the first argument to be an error object, we could promisify sqlite3, but as is often the case, somebody has already done this work for us and provided a library called simply sqlite that mimics the exact API of sqlite3, but implements a promise API.
I can rewrite the same code using this library and the result is a good deal more pleasing:
import { open } from "sqlite";
import * as sqlite from "sqlite3";
open({ driver: sqlite.Database, filename: ":memory:" }).then((db) => {
return db
.run("CREATE TABLE promise (id int, desc char);")
.then(() => {
return db.run(
"INSERT INTO promise VALUES (1, 'I will always lint my code.');"
);
})
.then(() => {
return db.all("SELECT * FROM promise;");
})
.then(rows => {
console.log(rows);
})
.catch(err => console.error(err))
.finally(() => db.close());
});
We've dropped nearly half of the lines of code and it's not nested as deeply. This still could be improved, but it's much cleaner now. Best of all, we have a single catch block followed by finally, to make sure the database connection is closed at the end.
In the next exercise, we'll build a RESTful API. REST is a very common standard for web traffic. Most websites and web APIs operate using REST. It stands for Representational State Transfer and defines operations (sometimes called "methods" or even "verbs") such as GET, DELETE, POST, PUT, and PATCH, and resources (the "path" or "noun"). A full treatment of REST is beyond the scope of this book.
Developers working on RESTful APIs frequently find it useful to work with some sort of REST client. The REST client can be configured to make different kinds of requests and display the responses. Requests can be saved and run again in the future. Some REST clients allow the creation of scenarios or test suites.
Postman is a popular and free REST client. If you don't already have a REST client you're comfortable working with, try downloading Postman at https://www.postman.com/downloads/ before the next exercise. Once you've installed Postman, check its documentation (https://learning.postman.com/docs/getting-started/sending-the-first-request/) and get ready for the next exercise.
In this exercise, you will create a REST API backed by sqlite. In this project, you will implement all CRUD (create, read, update, and delete) operations in the sqlite database and we will expose the corresponding REST verbs (POST, GET, PUT, and DELETE) from our web server:
Note
The code file for this exercise can also be found here: https://packt.link/rlX7G.
npm i
This will install typings for Node.js, as well as ts-node and typescript as development dependencies while sqlite and sqlite3 are regular dependencies. All of these dependencies are already specified in the project's package.json file. Some of the dependencies, such as @types/node, ts-node, and typescript, are specified as devDependencies and others are regular dependencies. For the purpose of this exercise, the distinction is not going to matter but it's a common practice to run application builds so that only the necessary dependencies are part of the production build, thus the separation. The way to run this kind of build is npm install --production if you only wish to install the production dependencies or npm prune --production if you've already installed your devDependencies and wish to remove them.
import { Database } from "sqlite";
import sqlite from "sqlite3";
export interface PromiseModel {
id: number;
desc: string;
}
export class PromiseDB {
private db: Database;
private initialized = false;
constructor() {
this.db = new Database({
driver: sqlite.Database,
filename: ":memory:",
});
}
}
It's always a good idea to create a class or interface to describe our entity, so here we have created PromiseModel. It will be useful to other parts of our application to be able to understand the properties our entity has as well as their types, since the database will only return untyped query results. We export the interface so that it can be used by other modules.
initialize = () => {
if (this.initialized) {
return Promise.resolve(true);
}
return this.db
.open()
.then(() =>
this.db
.run("CREATE TABLE promise (id INTEGER PRIMARY KEY, desc CHAR);")
.then(() => (this.initialized = true))
);
};
First, we check to see if we've already initialized the database. If so, we're done and we resolve the promise. If not, we call open, then once that promise has resolved, run our table creation SQL, and then finally update the state of the database so that we don't accidentally re-initialize it.
We could try to initialize the database in the constructor. The problem with that approach is that constructors do not resolve promises before returning. Constructor functions may call methods that return promises, but they will not resolve the promise. It's usually cleaner to create the singleton object and then invoke the initialization promise separately. For more information about singleton classes, see Chapter 8, Dependency Injection in TypeScript.
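The pattern can be sketched in isolation (Service and its setTimeout setup are hypothetical stand-ins for PromiseDB and its database calls):

```typescript
class Service {
  private ready = false;
  // Construction stays synchronous; all async setup lives here instead.
  initialize = (): Promise<boolean> => {
    if (this.ready) {
      return Promise.resolve(true); // already initialized: resolve immediately
    }
    return new Promise<boolean>((resolve) =>
      setTimeout(() => {            // stand-in for opening a real connection
        this.ready = true;
        resolve(true);
      }, 10)
    );
  };
}

const service = new Service();                         // cheap, synchronous
service.initialize().then(() => console.log("ready")); // async init resolved separately
```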
create = (payload: PromiseModel) =>
this.db.run("INSERT INTO promise (desc) VALUES (?);", payload.desc);
This method takes an object of type PromiseModel as an argument, sends a prepared statement (a parameterized SQL statement that is safe from SQL injection attacks), and then returns RunResult, which contains some metadata about the operation that took place. Since the sqlite library ships with typings, we're able to infer the return type without needing to specify it. The return type in this case is Promise<ISqlite.RunResult<sqlite.Statement>>. We could paste all of that into our code, but it's much cleaner the way it is. Remember, if a good type can be inferred, it's best to just let TypeScript do the heavy lifting.
delete = (id: number) => this.db.run("DELETE FROM promise WHERE id = ?", id);
getAll = () => this.db.all<PromiseModel[]>("SELECT * FROM promise;");
getOne = (id: number) =>
this.db.get<PromiseModel>("SELECT * FROM promise WHERE id = ?", id);
These methods use type parameters to specify the expected return types. If the type parameters were omitted, these methods would return any types, which wouldn't be very helpful to the other parts of our application.
update = (payload: PromiseModel) =>
this.db.run(
"UPDATE promise SET desc = ? where id = ?",
payload.desc,
payload.id
);
import { Database } from "sqlite";
import sqlite from "sqlite3";
export interface PromiseModel {
id: number;
desc: string;
}
export class PromiseDB {
private db: Database;
private initialized = false;
constructor() {
this.db = new Database({
driver: sqlite.Database,
filename: ":memory:",
});
}
initialize = () => {
if (this.initialized) {
return Promise.resolve(true);
}
return this.db
.open()
.then(() =>
this.db
.run("CREATE TABLE promise (id INTEGER PRIMARY KEY, desc CHAR);")
.then(() => (this.initialized = true))
);
};
create = (payload: PromiseModel) =>
this.db.run("INSERT INTO promise (desc) VALUES (?);", payload.desc);
delete = (id: number) => this.db.run("DELETE FROM promise WHERE id = ?", id);
getAll = () => this.db.all<PromiseModel[]>("SELECT * FROM promise;");
getOne = (id: number) =>
this.db.get<PromiseModel>("SELECT * FROM promise WHERE id = ?", id);
update = (payload: PromiseModel) =>
this.db.run(
"UPDATE promise SET desc = ? where id = ?",
payload.desc,
payload.id
);
}
The next step is to build an HTTP server implementing a RESTful interface. Many Node.js developers use frameworks such as Express.js, Fastify, or NestJS, but for this exercise, we're just going to build a basic HTTP server. It won't have all the niceties of those frameworks, but it'll help us focus on asynchronous programming.
import { createServer, IncomingMessage, Server, ServerResponse } from "http";
import { PromiseDB } from "./db";
class App {
public db: PromiseDB;
private server: Server;
constructor(private port: number) {
this.db = new PromiseDB();
this.server = createServer(this.requestHandler);
}
}
export const app = new App(3000);
initialize = () => {
return Promise.all([
this.db.initialize(),
new Promise((resolve) => this.server.listen(this.port, () => resolve(true))),
]).then(() => console.log("Application is ready!"));
};
This method uses Promise.all so that we can initialize our database and server in parallel. When both are ready, it'll log a message letting us know the application is ready to handle requests. We are calling the initialize method on the PromiseDB instance that we've exposed on our App class. Unfortunately, server.listen doesn't return a promise but instead implements a fairly primitive API that requires a callback, so we wrap it in our own promise. It's tempting to wrap server.listen in util.promisify, but even that won't work because util.promisify expects an error-first callback and the server.listen callback doesn't take any arguments. Sometimes, despite our best efforts, we just have to use a callback, but we can usually wrap it with a promise.
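The wrapping trick generalizes to any callback that reports nothing at all. Here, fakeListen is a hypothetical stand-in for server.listen:

```typescript
// fakeListen is a hypothetical stand-in for server.listen: its callback
// takes no arguments, so util.promisify's error-first assumption fails.
const fakeListen = (port: number, cb: () => void) => setTimeout(cb, 10);

// Wrapping it by hand with a promise we control:
const listenAsync = (port: number): Promise<boolean> =>
  new Promise((resolve) => fakeListen(port, () => resolve(true)));

listenAsync(3000).then(() => console.log("listening"));
```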
requestHandler = (req: IncomingMessage, res: ServerResponse) => {
res.setHeader("Access-Control-Allow-Origin", "*");
res.setHeader("Access-Control-Allow-Headers", "*");
res.setHeader(
"Access-Control-Allow-Methods",
"DELETE, GET, OPTIONS, POST, PUT"
);
if (req.method === "OPTIONS") {
return res.end();
}
const urlParts = req.url?.split("/") ?? [];
switch (urlParts[1]) {
case "promise":
return promiseRouter(req, res);
default:
return this.handleError(res, 404, "Not Found.");
}
};
We want our application to direct all traffic on the /promise resource to our promises API. This will allow us to add more resources (maybe /admin or /users) later on. The request handler's job is to see if we have requested the /promise route and then direct traffic to that specific router. Since we haven't defined any other resources, we'll return a 404 if we request any other route.
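To make the routing concrete, here is how a request URL splits into parts (the values are illustrative):

```typescript
const url = "/promise/3"; // e.g. a GET request for /promise/3
const urlParts = url.split("/"); // ["", "promise", "3"]

console.log(urlParts[1]); // "promise" — the resource, checked by the switch
console.log(urlParts[2]); // "3" — the id, used later by the router
```

Because the URL starts with a slash, the first element of the split is always an empty string, which is why the resource name lives at index 1.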
Note that we handle the OPTIONS HTTP verb differently from the others. Because the CORS headers, including "Access-Control-Allow-Origin", have already been set, an OPTIONS request gets an immediate, successful, empty response. This is for development convenience. The topic of CORS is beyond the scope of this book, and readers are encouraged to learn more about it before implementing it in a production environment.
handleError = (
res: ServerResponse,
statusCode = 500,
message = "Internal Server Error."
) => res.writeHead(statusCode).end(message);
This is a nice one-liner that by default responds with a 500 status code and the message "Internal Server Error.", but it accepts optional parameters to return any other status code or message. The default case in our request handler calls it with a 404 status code and the message "Not Found."
Putting it all together, here is the complete app.ts:
import { createServer, IncomingMessage, Server, ServerResponse } from "http";
import { PromiseDB } from "./db";
import { promiseRouter } from "./router";
class App {
public db: PromiseDB;
private server: Server;
constructor(private port: number) {
this.db = new PromiseDB();
this.server = createServer(this.requestHandler);
}
initialize = () => {
return Promise.all([
this.db.initialize(),
new Promise((resolve) => this.server.listen(this.port, () => resolve(true))),
]).then(() => console.log("Application is ready!"));
};
handleError = (
res: ServerResponse,
statusCode = 500,
message = "Internal Server Error."
) => res.writeHead(statusCode).end(message);
requestHandler = (req: IncomingMessage, res: ServerResponse) => {
res.setHeader("Access-Control-Allow-Origin", "*");
res.setHeader("Access-Control-Allow-Headers", "*");
res.setHeader(
"Access-Control-Allow-Methods",
"DELETE, GET, OPTIONS, POST, PUT"
);
if (req.method === "OPTIONS") {
return res.end();
}
const urlParts = req.url?.split("/") ?? [];
switch (urlParts[1]) {
case "promise":
return promiseRouter(req, res);
default:
return this.handleError(res, 404, "Not Found.");
}
};
}
export const app = new App(3000);
app.initialize();
If you've implemented all this in code, you're probably still getting an error on promiseRouter. That's because we haven't written that yet.
Unlike our database and server modules, the router is stateless: it does not need to be initialized and does not track any variables. We could still create a class for it, but let's use a functional style instead. There's no right or wrong choice here; we could equally have written the database and server modules in a functional style.
We're going to work on creating several handlers, tie them together with a router based on HTTP verbs, and also create a body parser. Let's start with the body parser.
const parseBody = (req: IncomingMessage): Promise<PromiseModel> => {
return new Promise((resolve, reject) => {
let body = "";
req.on("data", (chunk) => (body += chunk));
req.on("end", () => {
try {
resolve(JSON.parse(body));
} catch (e) {
reject(e);
}
});
});
};
The data stream is another fairly low-level, event-based API that we must wrap in a promise, as are many Node.js APIs. In this case, we listen for two separate events, data and end. Each time we get a data event, we append the chunk to the body string. When we receive the end event, we can finally resolve our promise. Since body is a string at this point and we want an object, we use JSON.parse to parse it. The JSON.parse call is wrapped in try/catch so that any parsing error rejects the promise rather than throwing.
By default, JSON.parse returns an any type. This type is too broad to be of any use in checking our application for type correctness. Fortunately, we can add proper type checking by setting the return type of parseBody to Promise<PromiseModel>. This will narrow the type of the object returned by JSON.parse to PromiseModel and the rest of our application can expect that type to have been parsed. Note that this is a compile-time check and does not guarantee the correct data has come from a third-party source such as an end user. It is advisable to combine type checks with validators or type guards to ensure consistency. When in doubt, employ good error handling.
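As a sketch of combining the compile-time annotation with a runtime check, a type guard might look like this (the PromiseModel shape shown here is assumed for illustration; the real model lives in the db module):

```typescript
// Assumed shape, for illustration only.
interface PromiseModel {
  id?: number;
  desc: string;
}

// A type guard: at runtime it verifies the shape, and at compile time
// it narrows unknown to PromiseModel inside the guarded branch.
const isPromiseModel = (value: unknown): value is PromiseModel =>
  typeof value === "object" &&
  value !== null &&
  typeof (value as PromiseModel).desc === "string";

const parsed: unknown = JSON.parse('{"desc":"exercise daily"}');
if (isPromiseModel(parsed)) {
  console.log(parsed.desc); // safely typed as PromiseModel here
}
```

A guard like this could be called inside parseBody before resolving, turning malformed input into a rejected promise instead of a latent runtime error.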
const handleCreate = (req: IncomingMessage, res: ServerResponse) =>
parseBody(req)
.then((body) => app.db.create(body).then(() => res.end()))
.catch((err) => app.handleError(res, 500, err.message));
const handleDelete = (requestParam: number, res: ServerResponse) =>
app.db
.delete(requestParam)
.then(() => res.end())
.catch((err) => app.handleError(res, 500, err.message));
The HTTP DELETE verb does not use a body. Instead, we will take the ID of the row we want to delete from the URL. We'll see how that routing works in a moment.
const handleGetAll = (res: ServerResponse) =>
app.db
.getAll()
.then((data) => res.end(JSON.stringify(data)))
.catch((err) => app.handleError(res, 500, err.message));
const handleGetOne = (requestParam: number, res: ServerResponse) =>
app.db
.getOne(requestParam)
.then((data) => res.end(JSON.stringify(data)))
.catch((err) => app.handleError(res, 500, err.message));
const handleUpdate = (req: IncomingMessage, res: ServerResponse) =>
parseBody(req)
.then((body) => app.db.update(body).then(() => res.end()))
.catch((err) => app.handleError(res, 500, err.message));
export const promiseRouter = (req: IncomingMessage, res: ServerResponse) => {
const urlParts = req.url?.split("/") ?? [];
const requestParam = urlParts[2];
res.setHeader("Content-Type", "application/json");
switch (req.method) {
case "DELETE":
if (requestParam) {
return handleDelete(Number.parseInt(requestParam), res);
}
return app.handleError(res, 400, "Missing id.");
case "GET":
if (requestParam) {
return handleGetOne(Number.parseInt(requestParam), res);
}
return handleGetAll(res);
case "POST":
return handleCreate(req, res);
case "PUT":
return handleUpdate(req, res);
default:
app.handleError(res, 404, "Not Found.");
}
};
Now start the application:
npx ts-node app.ts
You should see the following on your console:
Application is ready!
This means your application is ready to start receiving requests. If you don't see this message, you may have a typo somewhere. Let's try it out. You can use either a REST client or curl; this exercise uses Postman.
Figure 12.3: Initial GET request
This is because we haven't created any records yet.
Figure 12.4: POST data
Figure 12.5: Use GET to retrieve data
Figure 12.6: Single object
Figure 12.7: No items found
Figure 12.8: 404 response
Looks like things are working. Experiment with the different HTTP verbs. Try giving invalid input and see how the error handling works. We'll use this API in the next section.
We've learned techniques for using promises in web projects as well as Node.js APIs. Let's combine our earlier exercises to build a web application that renders progressively as data is ready and makes use of asynchronous programming on the server to avoid blocking the event loop.
In this activity, we're going to build a web application that talks to the API we just built. Although frameworks such as Angular, React, and Vue are very popular, those are covered in later chapters so we will build a very basic TypeScript application with no bells or whistles.
Note
This activity provides a UI application that communicates with the backend API we built in Exercise 12.06, Implementing a RESTful API backed by sqlite. In order to get the output shown, you will need to have your API running. Return to that exercise for help if you need it.
This UI application will connect to our API and allow us to modify the data we store in our database. We will be able to list out the data we've saved (the promises we make), create new items to save, and delete items. Our UI application will need to make GET, POST, and DELETE calls to our backend API. It will need to use an HTTP client to do that. We could install a library such as axios to handle that or we could use the native Fetch API available in all modern web browsers.
Our web application will also need to be able to update the UI dynamically. Modern view libraries such as React or Vue do that for us, but in this case we are framework-free, so we'll use DOM (Document Object Model) APIs such as getElementById, createElement, and appendChild directly. These are natively available in all web browsers with no libraries needed.
Implementing this application with promises is critical because all of the API calls are asynchronous. When we perform an action, such as a click, our application calls the API; only when the API responds with data does the promise resolve and trigger a change in the DOM state.
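A minimal sketch of that flow, with the HTTP call and the DOM update replaced by stand-ins so the promise sequencing is clear (all names here are illustrative):

```typescript
type Item = { id: number; desc: string };

// Stand-in for the HTTP call: resolves with data asynchronously,
// just as fetch would once the API responds.
const fetchItems = (): Promise<Item[]> =>
  Promise.resolve([{ id: 1, desc: "exercise daily" }]);

// Stand-in for the DOM update (createElement/appendChild in the browser).
const rendered: string[] = [];
const render = (items: Item[]) =>
  items.forEach((item) => rendered.push(item.desc));

// The click handler: the UI only changes after the promise resolves.
const onItemClick = () => fetchItems().then(render);

onItemClick().then(() => console.log(rendered));
```

In the real application, fetchItems would call the Fetch API against our backend and render would create and append DOM elements, but the then chain stays exactly the same.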
Here are some high-level steps that will enable you to create the app:
Note
The code file for this activity can be found here: https://packt.link/RlYli.
Once you have completed the activity, you should be able to view the form on localhost:8080. An example is shown here:
Figure 12.9: Completed form
Note
The solution to this activity can be found via this link.
We have learned how promises came to be a part of the ECMAScript standard, taken a tour of the native implementation, and worked through sample projects using promises to solve real-world problems. We also explored how TypeScript can enhance the promise spec and how we can polyfill promises when targeting environments that don't include native promise support. We contrasted the Bluebird promise library with native promises. We learned about different ways of interacting with the filesystem using Node.js and we also covered managing asynchronous database connections and queries. In the end, we put all of this together into a working application.
In the next chapter, we will build upon the asynchronous programming paradigm by covering async and await. We'll discuss when to use these over promises and the place promises still have in the TypeScript ecosystem.