Rather than doing a deep dive into any specific topic, this chapter covers a range of topics that will make your app perform better and improve your end-user experience. This includes React performance, Node.js performance, and various caching strategies. The last section of this chapter covers handling cookies in an isomorphic application and the trade-offs that this creates with some of the caching strategies.
This chapter continues to use the complete-isomorphic-example repo on GitHub. It can be found at http://mng.bz/8gV8. The first section uses the code on branch chapter-11.1 (git checkout chapter-11.1). You can find the completed code for the chapter on the chapter-11.complete branch (git checkout chapter-11.complete).
To run each branch, make sure to use the following:
$ npm install
$ npm start
As I’ve spent more time with React, I’ve discovered that although it’s fast out of the box, in complex apps you’ll run into performance problems. To keep your React web app performant, you need to keep performance in mind as your app grows and adds more-complex feature interactions. Two specific cases start to cause performance issues as your app grows: JavaScript bundles that become too large, slowing the initial page load, and components that re-render unnecessarily.
I’ve often experienced the following scenario when building an app of any kind: my app starts out small, and the JavaScript assets are small enough that they load quickly. Over time, I add features, get lazy about reviewing the size of included packages, and generally don’t pay attention to bundle size. (Code base size management is especially difficult on larger teams.) Then one day I check the load time of the page and realize my JavaScript file has become too big! It’s affecting the overall load time of the app. Cue freak-out moment!
Thankfully, webpack provides a way to solve this problem, by breaking the code into multiple bundles that can be loaded as they’re needed. The next two diagrams walk you through this concept.
If you’re using React Router 4, check out appendix C for information on code splitting with webpack.
Figure 11.1 demonstrates how the app is currently compiling. All the code is being pulled together into a single output file. This file is being referenced in html.jsx.
Figure 11.2 shows what you’ll implement in this section. The code is still combined in webpack’s compilation step, but it’s then split into multiple JavaScript files. This is configured by you in code (not in webpack configuration).
To make this happen in your code, you need to update the way you import the routes. This process has three steps:
1. Add Babel plugins that will handle dynamic import on the Node.js server and via the Babel loader in your webpack configuration.
2. Add dynamic import to sharedRoutes.
3. Enable chunks in webpack with chunkFilename.
Installing and adding the new Babel plugins is the first step. Run the following commands in your terminal:
$ npm install --save-dev babel-plugin-syntax-dynamic-import
$ npm install --save-dev babel-plugin-dynamic-import-node
Then you need to update .babelrc. Listing 11.1 shows the updates that are needed. This is a significant change from the old version because you need different plugins for webpack and Node.js. Later you’ll make sure that webpack points at the webpack env config.
{
  "presets": ["es2015", "react"],
  "env": {
    "webpack-env": {
      "plugins": [
        "syntax-dynamic-import"
      ]
    },
    "development": {
      "plugins": [
        "syntax-dynamic-import",
        "dynamic-import-node"
      ]
    }
  },
  "plugins": [
    "transform-es2015-destructuring",
    "transform-es2015-parameters",
    "transform-object-rest-spread"
  ]
}
The main goal of the Babel config change is that it splits your Babel config into two versions, one for the server (development) and one for webpack (webpack-env).
Next, you have to add a dynamic import. Listing 11.2 shows how to create statements that tell webpack to create a code chunk. This replaces the import statements for components in sharedRoutes. For now, you’ll apply this pattern to a single route: cart. But in production apps, I recommend you apply this based on your own traffic patterns (chunk your highest traffic routes separately from your low-traffic routes, or chunk admin or other authenticated pages separately from public ones). Additionally, this code could be abstracted and made reusable for a production use case—but for this example, it illustrates the changes in a clear, concise way.
// remove: import Cart from '../components/cart';

<Route path="/" component={App} onChange={onChange}>
  <IndexRoute component={Products} />
  <Route
    path="cart"
    getComponent={(location, cb) => {
      import(
        /* webpackChunkName: "cart" */
        /* webpackMode: "lazy" */
        './../components/cart')
        .then((module) => {
          cb(null, module.default);
          onChange(null, {
            routes: [
              { component: module.default }
            ]
          });
        })
        .catch(error =>
          console.log('An error occurred while loading the component', error)
        );
    }}
  />
</Route>
You’ll notice that you both replaced the default import with a dynamic load and moved that dynamic load into React Router’s getComponent property. The code will be lazy loaded (webpackMode: lazy); it won’t be loaded until the user navigates to this route. This is advantageous because it prevents unnecessary loading of code for features that the user hasn’t yet accessed.
Finally, it’s useful to name your webpack chunks. This is configurable in the webpack configuration file. The following listing shows you how to add this property to your webpack config file.
module.exports = {
  // ... other config options
  output: {
    path: __dirname + '/src/',
    filename: "browser.js",
    chunkFilename: "browser-[name].js"
  },
  module: {}
};
In the next section, we’ll look at one way to improve React performance in situations where React’s base performance isn’t enough.
As your app grows, you’ll run into situations where your components run through their render cycles unnecessarily (I’ve seen unnecessary re-renders add up to tens of seconds of wasted work). If this is happening and causing a measurable impact on your application, use shouldComponentUpdate to limit the number of renders.
Before making performance improvements, you should always profile your application. Record the performance metric you’re measuring in the current version of your app. Then make any performance updates. Finally, measure the same performance metric to confirm that your changes had a positive impact.
To get started profiling web apps, you should become an expert on the Chrome DevTools performance panels: http://mng.bz/a9wf.
The best way to implement shouldComponentUpdate without causing yourself pain and headaches later is to make sure your properties are created with immutable patterns. In this application, that’s already handled in the Redux reducers. Because immutable updates produce new object references, a shallow comparison is enough to check whether two objects differ from one another. Listing 11.4 shows you how to do this in the context of the Detail page component. Add the code from the listing to detail.jsx.
Using shouldComponentUpdate can send you down a rabbit hole of despair. Use it sparingly and use it wisely. (You can end up with large, complex functions that calculate whether to render. That’s bad and should be avoided.)
componentDidMount() {}

shouldComponentUpdate(nextProps) {
  if (this.props.name === nextProps.name &&
      this.props.description === nextProps.description &&
      this.props.details === nextProps.details &&
      this.props.price === nextProps.price &&
      this.props.thumbnail === nextProps.thumbnail) {
    return false;
  }
  return true;
}

componentDidUpdate() {}
This will work for many situations but it’s not without gotchas. One problem is this implementation requires you to write many checks based on the implementation details of the properties. You could abstract the concept of this code—comparing each property on this.props/nextProps—into a function that can be reused for many components. The online article “Performance Optimisations for React Applications” by Alex Reardon (http://mng.bz/QJk3) covers using shouldComponentUpdate with additional detail. It includes a sample implementation of an abstracted deep equals function that does only reference checks. Check out the code on GitHub (http://mng.bz/q7yU).
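One way to abstract those property-by-property checks is a generic shallow-compare helper. This is a sketch under the same assumption the chapter makes, that props are built immutably so reference equality is a reliable change signal; shallowEqual is an illustrative name, not part of the repo.

```javascript
// Returns true when two plain objects have the same keys and
// reference-equal values (sufficient when updates are immutable).
function shallowEqual(objA, objB) {
  if (objA === objB) return true;
  const keysA = Object.keys(objA);
  const keysB = Object.keys(objB);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(key => objA[key] === objB[key]);
}

// In a component it would be used roughly like this:
// shouldComponentUpdate(nextProps) {
//   return !shallowEqual(this.props, nextProps);
// }
```

Note that this only does reference checks one level deep, which is exactly why it stays cheap; mutating a nested object in place would defeat it.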
Finally, if you need shouldComponentUpdate to do only a shallow comparison on both the props object and the state object, you can extend React.PureComponent, a base class provided by React. You can find the docs at https://reactjs.org/docs/react-api.html#reactpurecomponent.
Unfortunately, a deeper dive into React performance optimizations is outside the scope of this book. Fortunately, many other great resources are available on this topic! Here are some that will take you deeper:
In the next section, we’ll look at server-side performance improvements that can be used to have a positive impact on your Node.js application.
In an isomorphic app, your server’s performance is just as important as the browser performance. When we first started working with React at work, it greatly simplified building our pages for search bots. But we quickly realized that React’s server render was slower than we’d have liked. Creating a fully rendered string output for many components takes time and is a process-blocking task on the server. This limits the number of requests per second that your server is able to handle.
In the rest of this section, and in the caching section, I’ll discuss strategies you can use to improve your server performance times:
The first thing you’ll add is the ability for your page response to be streamed to the browser. If you’re following along and want to switch to this section’s GitHub branch, check out chapter-11.2 (git checkout chapter-11.2).
If your main goal is to improve time to first byte and allow the DOM to start processing as soon as possible, streaming your rendered page response can be a good solution. Node.js streams are a way to represent large amounts of data and deliver it over time. Rather than wait for an entire HTML page to download, the page can be delivered in chunks over time (more info on streams at http://mng.bz/s91m).
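To make the chunking idea concrete before looking at the real streaming library, here’s a toy sketch of slicing a rendered page into pieces the client could start parsing before the rest arrives. chunkify is purely illustrative; it is not part of react-dom-stream or the repo.

```javascript
// Yield an HTML string in fixed-size chunks, the way a stream
// delivers a response over time instead of as one large string.
function* chunkify(html, chunkSize) {
  for (let i = 0; i < html.length; i += chunkSize) {
    yield html.slice(i, i + chunkSize);
  }
}

const page = '<!DOCTYPE html><html><body>Hello</body></html>';
const chunks = [...chunkify(page, 16)];
// the browser can begin processing chunks[0] while later
// chunks are still in flight
console.log(chunks.length, 'chunks');
```

A real Node stream adds backpressure and async delivery on top of this idea, which is what makes it valuable for time to first byte.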
By turning the server response into a stream, you improve the speed at which the browser can begin downloading and displaying the HTML. Listing 11.5 shows how to use the react-dom-stream library to render streams instead of strings. Add the code to renderView.jsx. You also need to run the following command before this code will work:
$ npm i --save react-dom-stream
You can find more information about this package at https://github.com/aickin/react-dom-stream. Note that it hasn’t been fully upgraded to work with the latest React, so you may not want to use it in production, but it illustrates the streaming concept well.
import React from 'react';
import { renderToString } from 'react-dom-stream/server';
import { Provider } from 'react-redux';

const streamApp = renderToString(
  <Provider store={store}>
    <RouterContext routes={routes} {...renderProps} />
  </Provider>
);

const streamHTML = renderToString(
  <HTML
    html={streamApp}
    serverState={stringifiedServerState}
    metatags={seoTags}
    title={title}
  />
);

streamHTML.pipe(res, { end: false });
streamHTML.on('end', () => {
  res.end();
});
In addition to using React, we also use GraphQL at work. This enables us to gather data from many microservices. More importantly, it also allows us to request the data we need for our views rather than use REST APIs with predetermined responses. Think of it as a front end for your back-end REST services. You can learn more about GraphQL at http://graphql.org.
This is a powerful setup, but GraphQL makes a lot of network calls. We ran into an issue of network calls that were timing out. The services we were talking to didn’t show any time-outs; they showed fast response times. After much investigation, the team figured out that we were making so many requests that they backed up behind one another, and queued requests hit their time-out limits before they ever had a chance to receive a response.
This can also happen in your React isomorphic app. If a page of your app makes a lot of network calls for a specific view, you might run into this slow network request problem. One strategy to fix this is to enable connection pooling on your Node.js server.
The solution to this problem in Node.js is to create a permanent connection pool to reduce the cost of opening connections. A connection pool guarantees that there are always available socket connections in your Node.js app. This saves time when making a request, because opening a socket takes time (for additional info, see the blog post at www.madhur.co.in/blog/2016/09/05/nodejs-connection-pooling.html). The following listing shows how to add this option to the server.
import http from 'http';
import bodyParser from 'body-parser';
import renderViewMiddleware from './middleware/renderView';

// reuse sockets for outgoing http requests
http.globalAgent = new http.Agent({
  keepAlive: true,
  keepAliveMsecs: 1500,
  maxFreeSockets: 1024
});
You could also use GraphQL, which will greatly reduce the number of network calls you make. But that’s a topic for another book.
This isn’t a book about Node.js implementation and performance, but lots of good resources are available if you want to learn more about this topic. Here are some places to get started:
Another powerful server performance tool is caching. I’ve employed caching in different forms, including edge caching, in-memory caching, and saving views in a Redis (a NoSQL database) persisted cache. Each of these strategies has trade-offs, so it’s important to understand what these are and then pick the right strategy for your use case. Table 11.1 lists caching options.
| Caching strategy | SEO | User sessions |
|---|---|---|
| In-memory | ✓ | ✓ |
| Persisted storage | ✓ | ✓ (Higher overhead, but possible) |
| Edge caching | ✓ | |
The easiest (and most naïve) solution for caching involves saving components directly in memory. For simple apps, you can achieve this by using a basic LRU cache (size-limited) and stringifying your components after they’re rendered. Figure 11.3 shows a timeline of using an in-memory cache. The first user to load a page gets a fully rendered (and slower) version of the page. This is also saved in the in-memory cache. All subsequent users get the cached version, until that page gets pushed out of the cache because the cache filled up.
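The eviction behavior described above can be shown in miniature. SimpleLRU below is an illustrative sketch only; the chapter’s actual code uses the lru-cache package, which adds maxAge expiry and size-based limits on top of this idea.

```javascript
// Minimal LRU cache built on Map's insertion ordering:
// reads move an entry to the back; inserts evict from the front.
class SimpleLRU {
  constructor(maxEntries) {
    this.maxEntries = maxEntries;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);        // re-insert to mark as
    this.map.set(key, value);    // most recently used
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      // evict the least recently used entry (first in order)
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const pageCache = new SimpleLRU(2);
pageCache.set('/cart', '<html>cart</html>');
pageCache.set('/products', '<html>products</html>');
pageCache.get('/cart');                          // /products is now LRU
pageCache.set('/detail/1', '<html>detail</html>'); // evicts /products
```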
The following listing shows how to add a simple caching module (abstracting this code will make it easier to update caching strategies to match your future needs). You should add this code to the new cache.es6 file in the shared directory.
import lru from 'lru-cache';

// maxAge is in ms
const cache = lru({
  maxAge: 300000,
  max: 500000000000,
  length: (n) => {
    // n = item passed in to be saved (value)
    return n.length * 100;
  }
});

export const set = (key, value) => {
  cache.set(key, value);
};

export const get = (key) => {
  return cache.get(key);
};

export default { get, set };
Listing 11.8 shows how to take advantage of the caching module in renderView.jsx. Add its code to the module. Note that I recommend using either the caching logic or the streaming logic, but not both at the same time. If you want to cache and stream, you need a different streaming implementation than the one shown in this chapter.
import cache from '../shared/cache.es6';

// ...other code
const cachedPage = cache.get(req.url);
if (cachedPage) {
  return res.send(cachedPage);
}

const store = initRedux();
// ...more code

Promise.all(promises).then(() => {
  // ...more code
  cache.set(req.url, `<!DOCTYPE html>${html}`);
  return res.send(`<!DOCTYPE html>${html}`);
})
This strategy will work, but it has some problems:
Next, we’ll look at another strategy, caching with Redis, which will allow the caching to be done asynchronously and nonblocking. We’ll also look at using a smarter caching implementation to cache individual components, which scales better for more-complex applications.
The first isomorphic React app I worked on was written before Redux and React Router were stable community best-choice libraries, so we home-rolled a lot of the code. Combine this decision with React being slow on the server, and we needed a solution that would speed up server renders.
What we implemented was string storage of full pages in Redis. But storing full pages in Redis has significant trade-offs for larger sites. We had the potential for millions of entries to end up stored in Redis. Because full stringified HTML pages add up pretty fast, we were using quite a bit of space.
Thankfully, the community has come up with improvements on this idea since then. Walmart Labs put out a library called electrode-react-ssr-caching that’s easy to use to cache your server-side renders. This library is powerful for a couple of reasons:
In the long run, because of the number of pages we serve and the percentage of them that are served with 100% public-facing content, we ended up moving to an edge-caching strategy. But your use case may benefit from the Walmart Labs approach.
Edge caching is the solution we currently use for our isomorphic React app at work. This is due to some business logic needing to expire content on demand (when things change at other points in the system, as in a CMS tool). Modern CDNs such as Fastly provide this capability out of the box and make it much easier to manage TTLs (time to live) and to force-expire web pages. Figure 11.4 illustrates how this works.
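To give a taste of what the contract with the CDN looks like, here’s a sketch of an Express-style middleware that sets separate TTL headers for the edge and the browser. The Surrogate-Control header is a Fastly-style convention, and cdnCache is a hypothetical helper, not code from the repo or from any specific CDN SDK.

```javascript
// Tell the CDN to cache for maxAgeSeconds while keeping the
// browser's copy short-lived, so force-expiring at the edge
// takes effect quickly for end users.
function cdnCache(maxAgeSeconds) {
  return (req, res, next) => {
    res.setHeader('Surrogate-Control', `max-age=${maxAgeSeconds}`);
    res.setHeader('Cache-Control', 'max-age=0, must-revalidate');
    next();
  };
}

// demo with a minimal stub response object
const headers = {};
const res = { setHeader: (name, value) => { headers[name] = value; } };
cdnCache(3600)({}, res, () => {});
console.log(headers);
```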
Showing you how to implement this is beyond the scope of this book. If you have public-facing content that drives SEO (e-commerce, video sites, blogs, and so forth), you’ll definitely want a CDN in your stack.
One caveat with this approach is that it complicates user session management. The next section explores user sessions and covers the trade-offs with various caching strategies.
Modern web applications use cookies in the browser almost without exception. Even if your main product isn’t directly using cookies, any ads, tracking, or other third-party tools that you use on your site will take advantage of cookies. Cookies let the web app know that the same person has come back over time. Figure 11.5 illustrates how this works.
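To see what the server actually receives, here’s a sketch of parsing a raw Cookie request header into name/value pairs. parseCookieHeader is illustrative only; libraries such as universal-cookie handle encoding quirks and edge cases for you.

```javascript
// Parse a "name=value; name2=value2" Cookie header into an object.
function parseCookieHeader(header) {
  return header.split(';').reduce((acc, pair) => {
    const idx = pair.indexOf('=');
    if (idx === -1) return acc;
    const name = pair.slice(0, idx).trim();
    acc[name] = decodeURIComponent(pair.slice(idx + 1).trim());
    return acc;
  }, {});
}

const parsed = parseCookieHeader('userId=abc123; theme=dark');
```

In the browser the same data lives on document.cookie; on the server it arrives on req.headers.cookie, which is why an isomorphic app needs a wrapper that reads from the right place in each environment.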
Listing 11.9 shows an example module that handles both the browser and server cookie parsing for you. It uses Universal Cookie to help manage the cookies in both environments: www.npmjs.com/package/universal-cookie. You need to install this library for the code to work:
$ npm install --save universal-cookie
Add the code in this listing to a new module src/shared/cookies.es6.
import Cookie from 'universal-cookie';

const initCookie = (reqHeaders) => {
  let cookies;
  if (process.env.BROWSER) {
    cookies = new Cookie();
  } else if (reqHeaders.cookie) {
    cookies = new Cookie(reqHeaders.cookie);
  }
  return cookies;
};

export const get = (name, reqHeaders = {}) => {
  const cookies = initCookie(reqHeaders);
  if (cookies) {
    return cookies.get(name);
  }
};

export const set = (name, value, opts, reqHeaders = {}) => {
  const cookies = initCookie(reqHeaders);
  if (cookies) {
    return cookies.set(name, value, opts);
  }
};

export default { get, set };
Now that you’ve added a way to get and set cookies in both environments, you need to be able to store that information on the app state so you can access it in a consistent way in your application.
By fetching cookies with an action, you can standardize the way the app interacts with cookies. The following listing shows how to add a storeUserId action to fetch and store the user ID. Add this code to the app-action-creators file.
import UAParser from 'ua-parser-js';
import cookies from './cookies.es6';

export const PARSE_USER_AGENT = 'PARSE_USER_AGENT';
export const STORE_USER_ID = 'STORE_USER_ID';

export function parseUserAgent(requestHeaders) {}

export function storeUserId(requestHeaders) {
  const userId = cookies.get('userId', requestHeaders);
  return {
    userId,
    type: STORE_USER_ID
  };
}

export default { parseUserAgent, storeUserId };
Now you have access to the user ID in your application! It’ll be fetched on the server and can be updated later in the browser as needed. You can apply this concept to any and all user session information. Managing user sessions as a whole is beyond the scope of this chapter.
When I first started building isomorphic applications, user management seemed simple. You used cookies to track user sessions in the browser as you would in a single-page application. Adding in the server complicates this, but you can read the cookies on the server. As you add in caching strategies, this becomes less straightforward.
Both the in-memory and persisted storage caching strategies work better with user sessions, as each user request still goes to the server, allowing the user’s information to be gathered. You can add the user’s identifying information into your cache key.
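A sketch of folding user identity into the cache key so per-user renders never collide (buildCacheKey is a hypothetical helper, not code from the repo):

```javascript
// Combine the request URL with the user's identity so each
// user's rendered page gets its own cache entry.
function buildCacheKey(url, userId) {
  return userId ? `${url}::user:${userId}` : `${url}::anonymous`;
}

const keyForUser1 = buildCacheKey('/cart', 'abc123');
const keyForUser2 = buildCacheKey('/cart', 'xyz789');
// different users never share a cached page for the same URL
console.log(keyForUser1 !== keyForUser2);
```

The trade-off is hit rate: keying by user means each user’s first request is always a cache miss, which is why this works for in-memory and persisted caches but not for a shared edge cache.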
But edge caching doesn’t work as well. That’s because for each unique user, you must keep a unique copy of each page that has user-specific data on it. If you don’t, you could end up showing user 1’s information to user 2. That would be bad! Figure 11.6 illustrates this concept.
If you need to use edge caching and you have user data, you can employ one or more of the following strategies (depending on your content type and your traffic patterns):
This chapter covered several topics that will make your production isomorphic app run better, including performance and caching. You also learned about the complexities of adding certain types of caching to an isomorphic app that deals with user sessions.