Chapter 11. Optimizing for production

This chapter covers

  • Optimizing performance for the browser
  • Using streaming on Node.js to improve server performance
  • Using caching to improve performance on the server
  • Handling user sessions via cookies on the server and browser

Rather than doing a deep dive into any specific topic, this chapter covers a range of topics that will make your app perform better and improve your end-user experience. This includes React performance, Node.js performance, and various caching strategies. The last section of this chapter covers handling cookies in an isomorphic application and the trade-offs that this creates with some of the caching strategies.

This chapter continues to use the complete-isomorphic-example repo on GitHub. It can be found at http://mng.bz/8gV8. The first section uses the code on branch chapter-11.1 (git checkout chapter-11.1). You can find the completed code for the chapter on the chapter-11.complete branch (git checkout chapter-11.complete).

To run each branch, make sure to use the following:

$ npm install
$ npm start

11.1. Browser performance optimizations

As I’ve spent more time with React, I’ve discovered that although it’s fast out of the box, in complex apps you’ll run into performance problems. To keep your React web app performant, you need to keep performance in mind as your app grows and adds more-complex feature interactions. Two specific cases start to cause performance issues as your app grows:

  • The size of your JavaScript. The next section covers using webpack chunking to reduce bundle size.
  • Unnecessary renders. In section 11.1.2, we’ll go over the basics of using shouldComponentUpdate to reduce unnecessary renders.

11.1.1. Webpack chunking

I’ve often experienced the following scenario when building an app of any kind: my app starts out small, and the JavaScript assets are small enough that they load quickly. Over time, I add features, get lazy about reviewing the size of included packages, and generally don’t pay attention to bundle size. (Code base size management is especially difficult on larger teams.) Then one day I check the load time of the page and realize my JavaScript file has become too big! It’s affecting the overall load time of the app. Cue freak-out moment!

Thankfully, webpack provides a way to solve this problem, by breaking the code into multiple bundles that can be loaded as they’re needed. The next two diagrams walk you through this concept.

Note

If you’re using React Router 4, check out appendix C for information on code splitting with webpack.

Figure 11.1 demonstrates how the app is currently compiling. All the code is being pulled together into a single output file. This file is being referenced in html.jsx.

Figure 11.1. The default behavior of webpack results in a single file that represents all your code.

Figure 11.2 shows what you’ll implement in this section. The code is still combined in webpack’s compilation step, but it’s then split into multiple JavaScript files. This is configured by you in code (not in webpack configuration).

Figure 11.2. Using code splitting, webpack outputs multiple files that can be dynamically loaded. The specific files will vary by app.

To make this happen in your code, you need to update the way you import the routes. This process has three steps:

1.  Add Babel plugins that will handle dynamic import on the Node.js server and via the Babel loader in your webpack configuration.

2.  Add dynamic import to sharedRoutes.

3.  Enable chunks in webpack with chunkFilename.

Installing and adding the new Babel plugins is the first step. Run the following commands in your terminal:

$ npm install --save-dev babel-plugin-syntax-dynamic-import
$ npm install --save-dev babel-plugin-dynamic-import-node

Then you need to update .babelrc. Listing 11.1 shows the updates that are needed. This is a significant change from the old version because you need different plugins for webpack and Node.js. Later you’ll make sure that webpack points at the webpack env config.

Listing 11.1. Add plugins to .babelrc
{
  "presets": ["es2015", "react"],
  "env": {                                      1
    "webpack-env": {                            2
      "plugins": [
        "syntax-dynamic-import"                 3
      ]
    },
    "development": {                            2
      "plugins": [
        "syntax-dynamic-import",                4
        "dynamic-import-node"                   4
      ]
    }
  },
  "plugins": [                                  5
    "transform-es2015-destructuring",
    "transform-es2015-parameters",
    "transform-object-rest-spread"
  ]
}

  • 1 Add an env config option.
  • 2 Add two environments: development (the default) and webpack-env for webpack builds.
  • 3 For webpack, add only the plugin that allows dynamic import syntax.
  • 4 For node, both the syntax and implementation plugins are required.
  • 5 The original plugins array is left intact. Env options are merged with any default options.

The main goal of the Babel config change is that it splits your Babel config into two versions, one for the server (development) and one for webpack (webpack-env).

Next, you have to add a dynamic import. Listing 11.2 shows how to create statements that tell webpack to create a code chunk. This replaces the import statements for components in sharedRoutes. For now, you’ll apply this pattern to a single route: cart. But in production apps, I recommend you apply this based on your own traffic patterns (chunk your highest traffic routes separately from your low-traffic routes, or chunk admin or other authenticated pages separately from public ones). Additionally, this code could be abstracted and made reusable for a production use case—but for this example, it illustrates the changes in a clear, concise way.

Listing 11.2. Configure code chunking—src/shared/sharedRoutes.jsx
// remove import Cart from '../components/cart';              1
<Route path="/" component={App} onChange={onChange}>
  <IndexRoute component={Products} />
  <Route
    path="cart"
    getComponent={(location, cb) => {                         2
      import(                                                 3
        /* webpackChunkName: "cart" */                        4
        /* webpackMode: "lazy" */                             4
        './../components/cart')                               3
        .then((module) => {                                   5
          cb(null, module.default);                           5
          onChange(null, {                                    6
            routes: [
              { component: module.default }
            ]
          });
        })
        .catch(error =>                                       7
          console.log('An error occurred while loading the component', error)
        );
    }}
  />

  • 1 Remove the old import statement for the cart component. It’s shown here as a comment to demonstrate what gets removed; you can delete the line entirely.
  • 2 Use the getComponent prop.
  • 3 Use async import. You pass the path to the cart component into it.
  • 4 Webpack reads these comments and uses them to determine how to handle the code chunk.
  • 5 Async import behaves like a Promise. Handle a success: take the loaded module and pass it to the React Router callback.
  • 6 React Router calls onChange before the chunk has loaded, so the component is missing from the initial route info. Calling onChange manually here ensures the component’s data still gets loaded.
  • 7 Add error handling.

You’ll notice that you both replaced the default import with a dynamic load and moved that dynamic load into React Router’s getComponent property. The code will be lazy loaded (webpackMode: lazy); it won’t be loaded until the user navigates to this route. This is advantageous because it prevents unnecessary loading of code for features that the user hasn’t yet accessed.
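As noted earlier, the import logic in listing 11.2 could be abstracted for reuse across routes. A possible sketch (lazyLoadComponent is a hypothetical helper name, not part of the repo):

```javascript
// Hypothetical helper: wraps a dynamic import so every chunked route
// in sharedRoutes.jsx can share the same success/error handling.
const lazyLoadComponent = (importComponent, onChange) =>
  (location, cb) =>
    importComponent()
      .then((module) => {
        cb(null, module.default);
        // Re-fire onChange so data loading still runs for the lazy component.
        if (onChange) {
          onChange(null, { routes: [{ component: module.default }] });
        }
        return module.default;
      })
      .catch((error) => {
        console.log('An error occurred while loading the component', error);
        cb(error);
      });
```

A route would then use it as `getComponent={lazyLoadComponent(() => import(/* webpackChunkName: "cart" */ '../components/cart'), onChange)}`, keeping the webpack comments at the import call site so each route still gets its own chunk.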

Finally, it’s useful to name your webpack chunks. This is configurable in the webpack configuration file. The following listing shows you how to add this property to your webpack config file.

Listing 11.3. Named webpack chunks—webpack.config.js
module.exports = {
  // ... other config options
  output: {
    path: __dirname + '/src/',
    filename: "browser.js",
    chunkFilename: "browser-[name].js"         1
  },
  module: {}
};

  • 1 Add the chunkFilename option. Use [name] to indicate a dynamic naming of the compiled js file for each chunk.

In the next section, we’ll look at one way to improve React performance in situations where React’s base performance isn’t enough.

11.1.2. Should component render

As your app grows, you’ll run into situations where your components are running through their render cycles unnecessarily (I’ve seen situations where re-renders get into the tens of seconds). If this is happening and causing a measurable impact on your application, use shouldComponentUpdate to limit the number of renders.

Performance measurement tools

Before making performance improvements, you should always profile your application. Record the performance metric you’re measuring in the current version of your app. Then make any performance updates. Finally, measure the same performance metric to confirm that your changes had a positive impact.

To get started profiling web apps, you should become an expert on the Chrome DevTools performance panels: http://mng.bz/a9wf.

The best way to implement shouldComponentUpdate without causing yourself later pain and headaches is to make sure your properties are being created with immutable patterns. In this application, this is already being handled in the Redux reducers. Because immutable updates produce new object references, a shallow comparison becomes enough to check whether two objects differ from one another. Listing 11.4 shows you how to do this in the context of the Detail page component. Add the code from the listing to detail.jsx.
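Here’s a quick illustration of why immutable updates make a shallow check sufficient (the values are made up for the example):

```javascript
// Reducers copy state into a new object instead of mutating it in place.
const state = { name: 'Sock', price: 10 };
const next = Object.assign({}, state, { price: 12 });

// The top-level reference changed, so "something updated" is detectable
// with a single === comparison per property:
console.log(state === next);             // false
console.log(state.name === next.name);   // true: untouched value, same reference
console.log(state.price === next.price); // false: this is the change
```

If the reducer had mutated `state` directly, both references would be identical and the shallow check would wrongly report "nothing changed."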

Warning

Using shouldComponentUpdate can send you down a rabbit hole of despair. Use it sparingly and use it wisely. (You can end up with complex, large functions that are calculating whether to render. That’s bad and should be avoided.)

Listing 11.4. Block renders, shouldComponentUpdate—src/components/detail.jsx
componentDidMount() {}

shouldComponentUpdate(nextProps) {
  if (this.props.name === nextProps.name &&                     1
      this.props.description === nextProps.description &&
      this.props.details === nextProps.details &&
      this.props.price === nextProps.price &&
      this.props.thumbnail === nextProps.thumbnail) {
    return false;                                               2
  }
  return true;                                                  3
}

componentDidUpdate() {}

  • 1 Check each property to make sure it hasn’t changed.
  • 2 Return false if nothing has changed; this prevents the component from rendering.
  • 3 Return true if there have been changes; this allows the normal render to execute.

This will work for many situations but it’s not without gotchas. One problem is this implementation requires you to write many checks based on the implementation details of the properties. You could abstract the concept of this code—comparing each property on this.props/nextProps—into a function that can be reused for many components. The online article “Performance Optimisations for React Applications” by Alex Reardon (http://mng.bz/QJk3) covers using shouldComponentUpdate with additional detail. It includes a sample implementation of an abstracted deep equals function that does only reference checks. Check out the code on GitHub (http://mng.bz/q7yU).
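One possible abstraction is a generic shallow comparison (shallowDiffers is a hypothetical name; like the approach described above, it does only reference checks):

```javascript
// Returns true when a re-render is needed. Relies on the immutable
// updates in the reducers, so reference equality is a safe comparison.
const shallowDiffers = (prev, next) => {
  const prevKeys = Object.keys(prev);
  const nextKeys = Object.keys(next);
  if (prevKeys.length !== nextKeys.length) {
    return true;
  }
  return prevKeys.some((key) => prev[key] !== next[key]);
};

// Any component can then reuse the same check:
// shouldComponentUpdate(nextProps) {
//   return shallowDiffers(this.props, nextProps);
// }
```

This removes the per-property checks from listing 11.4, at the cost of comparing every prop, including ones that don’t affect the render.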

Finally, if you need shouldComponentUpdate to do only a shallow comparison on both the props object and the state object, you can extend React.PureComponent, a base class provided by React. You can find the docs at https://reactjs.org/docs/react-api.html#react.purecomponent.

Unfortunately, a deeper dive into React performance optimizations is outside the scope of this book. Fortunately, many other great resources are available that will take you deeper into this topic.

In the next section, we’ll look at server-side performance improvements that can be used to have a positive impact on your Node.js application.

11.2. Server performance optimizations

In an isomorphic app, your server’s performance is just as important as the browser performance. When we first started working with React at work, it greatly simplified building our pages for search bots. But we quickly realized that React’s server render was slower than we’d have liked. Creating a fully rendered string output for many components takes time and is a process-blocking task on the server. This limits the number of requests per second that your server is able to handle.

In the rest of this section, and in the caching section, I’ll discuss strategies you can use to improve your server performance times:

  • Using streaming concepts to respond to requests sooner
  • Adding connection pooling to manage multiple HTTP requests from the server

The first thing you’ll add is the ability for your page response to be streamed to the browser. If you’re following along and want to switch to this section’s GitHub branch, check out chapter-11.2 (git checkout chapter-11.2).

11.2.1. Streaming React

If your main goal is to improve time to first byte and allow the DOM to start processing as soon as possible, streaming your rendered page response can be a good solution. Node.js streams are a way to represent large amounts of data and deliver it over time. Rather than wait for an entire HTML page to download, the page can be delivered in chunks over time (more info on streams at http://mng.bz/s91m).

By turning the server response into a stream, you improve the speed at which the browser can begin downloading and displaying the HTML. Listing 11.5 shows how to use the react-dom-stream library to render to streams instead of strings. Add the code to renderView.jsx. You also need to run the following command before this code will work:

$ npm i --save react-dom-stream

You can find more information about this package at https://github.com/aickin/react-dom-stream. Note that it hasn’t been fully upgraded to work with the latest React, so you may not want to use it in production, but it illustrates the streaming concept well.

Listing 11.5. Set up streaming library—src/middleware/renderView.jsx
import React from 'react';
import {
  renderToString
} from 'react-dom-stream/server';                         1
import { Provider } from 'react-redux';

const streamApp = renderToString(                         2
  <Provider store={store}>
    <RouterContext routes={routes} {...renderProps} />
  </Provider>
);

const streamHTML = renderToString(
  <HTML
    html={streamApp}                                      3
    serverState={stringifiedServerState}
    metatags={seoTags}
    title={title}
  />
);

streamHTML.pipe(res, { end: false });                     4
streamHTML.on('end', () => {                              5
  res.end();                                              6
});

  • 1 Instead of importing the React version of renderToString, use the streaming library’s version.
  • 2 The initial render of the app components gets converted to the creation of a stream. Rename the variable for better context.
  • 3 React DOM Stream supports nested streams in JSX. Now we pass the stream into HTML.jsx.
  • 4 Instead of responding to the request directly, the stream library pipes the render into the response.
  • 5 Add a listener for the end of the stream.
  • 6 Close the response.
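If you’re on React 16 or later, ReactDOMServer ships a streaming renderer of its own, so the unmaintained react-dom-stream package isn’t required. The sketch below shows the same middleware using the built-in renderToNodeStream. Unlike react-dom-stream, it can’t nest a stream inside the HTML component as a prop, so the static shell has to be written around the app stream; the mount-point id and the `window.__STATE__` global are assumptions about what html.jsx renders, not code from the repo:

```jsx
import { renderToNodeStream } from 'react-dom/server';

// Write the static head of the document immediately.
res.write(`<!DOCTYPE html><html><head>${seoTags}<title>${title}</title></head>` +
  '<body><div id="react-content">');

// Stream the app markup as React produces it.
const appStream = renderToNodeStream(
  <Provider store={store}>
    <RouterContext routes={routes} {...renderProps} />
  </Provider>
);

appStream.pipe(res, { end: false });
appStream.on('end', () => {
  // Close the shell and embed the serialized store for client hydration.
  res.write(`</div><script>window.__STATE__=${stringifiedServerState}` +
    '</script></body></html>');
  res.end();
});
```

The variables (store, routes, renderProps, seoTags, title, stringifiedServerState) are the same ones used in listing 11.5.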

11.2.2. Connection pooling

In addition to using React, we also use GraphQL at work. This enables us to gather data from many microservices. More importantly, it also allows us to request the data we need for our views rather than use REST APIs with predetermined responses. Think of it as a front end for your back-end REST services. You can learn more about GraphQL at http://graphql.org.

This is a powerful setup, but GraphQL makes a lot of network calls. We ran into an issue with network calls timing out, even though the services we were calling reported fast response times and no time-outs on their end. After much investigation, the team figured out that we were making so many requests that they were piling up in a queue, and some timed out before they ever had a chance to receive a response.

This can also happen in your React isomorphic app. If a page of your app makes a lot of network calls for a specific view, you might run into this slow network request problem. One strategy to fix this is to enable connection pooling on your Node.js server.

The solution to this problem in Node.js is to create a permanent connection pool to reduce the cost of opening connections. A connection pool guarantees that there are always available socket connections in your Node.js app. This saves time when making a request, because opening a socket takes time (for additional info, see the blog post at www.madhur.co.in/blog/2016/09/05/nodejs-connection-pooling.html). The following listing shows how to add this option to the server.

Listing 11.6. Enable connection pooling—src/app.es6
import http from 'http';                                         1
import bodyParser from 'body-parser';
import renderViewMiddleware from './middleware/renderView';

http.globalAgent = new http.Agent({                              2
  keepAlive: true,
  keepAliveMsecs: 1500,
  maxFreeSockets: 1024
});

  • 1 Import the http module.
  • 2 Before the rest of the server code runs, replace the default global agent with one that has keepAlive: true, which tells Node.js to reuse connections for outgoing http requests. (Constructing an Agent without assigning it anywhere has no effect; requests made over https need the same treatment via https.globalAgent.) The other options can be adjusted to fit your use case.

You could also use GraphQL, which will greatly reduce the number of network calls you make. But that’s a topic for another book.

Node performance

This isn’t a book about Node.js implementation and performance, but lots of good resources are available if you want to learn more about this topic.

11.3. Caching

Another powerful server performance tool is caching. I’ve employed caching in different forms, including edge caching, in-memory caching, and saving views in a Redis (a NoSQL database) persisted cache. Each of these strategies has trade-offs, so it’s important to understand what these are and then pick the right strategy for your use case. Table 11.1 lists caching options.

Table 11.1. Comparing caching options

Table 11.1 compares the three caching strategies (in-memory, persisted storage, and edge caching) along two dimensions: SEO and user sessions. Of note, persisted storage can support user sessions, though at higher overhead.

11.3.1. Caching on the server: in-memory caching

The easiest (and most naïve) solution for caching involves saving components directly in memory. For simple apps, you can achieve this by using a basic LRU cache (size-limited) and stringifying your components after they’re rendered. Figure 11.3 shows a timeline of using an in-memory cache. The first user to load a page gets a fully rendered (and slower) version of the page. This is also saved in the in-memory cache. All subsequent users get the cached version, until that page gets pushed out of the cache because the cache filled up.

Figure 11.3. In-memory caching allows some requests to benefit from faster response times.

The following listing shows how to add a simple caching module (abstracting this code will make it easier to update caching strategies to match your future needs). You should add this code to the new cache.es6 file in the shared directory.

Listing 11.7. Add an LRU in memory cache—src/shared/cache.es6
import lru from 'lru-cache';                             1

// maxAge is in ms
const cache = lru({                                      2
  maxAge: 300000,                                        3
  max: 500000000000,                                     4
  length: (n) => {                                       5
    // n = item passed in to be saved (value)
    return n.length * 100;
  }
});

export const set = (key, value) => {                     6
  cache.set(key, value);
};

export const get = (key) => {                            7
  return cache.get(key);
};

export default {
  get,
  set
};

  • 1 Import the lru cache.
  • 2 Create the lru cache.
  • 3 maxAge sets a time-based expiration for values stored in the cache.
  • 4 max is the maximum total length of all items allowed in the cache.
  • 5 length is a function that computes the length of each value added, so the cache can tell when it’s full.
  • 6 This is a public set method that sets the key/value pair on the cache.
  • 7 This is a public get method that retrieves a value based on a key from the cache.

Listing 11.8 shows how to take advantage of the caching module in renderView.jsx. Add its code to the module. Note that I recommend using either the caching logic or the streaming logic, but not both at the same time. If you want to cache and stream, you need a different streaming implementation than the one shown in this chapter.

Listing 11.8. Save and fetch cached pages—src/middleware/renderView.jsx
import cache from '../shared/cache.es6';                1

//..other code

const cachedPage = cache.get(req.url);                  1
if (cachedPage) {                                       2
  return res.send(cachedPage);                          2
}

const store = initRedux();
//...more code
Promise.all(promises).then(() => {
  //...more code
  cache.set(req.url, `<!DOCTYPE html>${html}`);        3
  return res.send(`<!DOCTYPE html>${html}`);
})

  • 1 Try to retrieve the value from the cache by using the cache module from listing 11.7.
  • 2 If the value exists, use it to respond to the request.
  • 3 If a full page render is required, save the rendered page before responding to the request.

This strategy will work, but it has some problems:

  • This solution is simple, but what happens when the use cases get more complex? What happens as you start to add users? Or multiple languages? Or you have tens of thousands of pages? This methodology doesn’t scale well to these use cases.
  • Writing to memory is a blocking task in Node.js, which means that if you’re trying to optimize for performance by using a cache, you’re trading one problem for another.
  • Finally, if you’re using a distributed scaling strategy to run your servers (which is common these days), the cache applies to only a single box or container (if using Docker). In this case, your server instances can’t share a common cache.

Next, we’ll look at another strategy, caching with Redis, which allows the caching to be done asynchronously, without blocking the server. We’ll also look at using a smarter caching implementation to cache individual components, which scales better for more-complex applications.

11.3.2. Caching on the server: persisted storage

The first isomorphic React app I worked on was written before Redux and React Router were stable community best-choice libraries, so we home-rolled a lot of the code. Combine this decision with React being slow on the server, and we needed a solution that would speed up server renders.

What we implemented was string storage of full pages in Redis. But storing full pages in Redis has significant trade-offs for larger sites. We had the potential for millions of entries to end up stored in Redis. Because full stringified HTML pages add up pretty fast, we were using quite a bit of space.
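If you do reach for Redis, the cache’s public surface from listing 11.7 can stay the same while the storage becomes asynchronous. Here’s a minimal sketch, assuming a promise-based client such as node-redis v4; makeCache, the TTL value, and the wiring comment are illustrative, not code from the repo:

```javascript
// Wraps any promise-based key/value client (e.g. a connected node-redis
// v4 client) in the same get/set surface used by renderView.jsx.
const makeCache = (client, ttlSeconds = 300) => ({
  // Resolves to the cached page, or null on a miss.
  get: (key) => client.get(key),
  // EX sets a per-key expiration, mirroring maxAge in listing 11.7.
  set: (key, value) => client.set(key, value, { EX: ttlSeconds })
});

// Server wiring might look like this (node-redis v4 API):
// import { createClient } from 'redis';
// const client = createClient();
// await client.connect();
// const cache = makeCache(client);
```

Because get now returns a promise, the renderView.jsx middleware would await the lookup before deciding whether to render, which is exactly what makes this approach nonblocking.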

Thankfully, the community has come up with improvements on this idea since then. Walmart Labs put out a library called electrode-react-ssr-caching that’s easy to use to cache your server-side renders. This library is powerful for a couple of reasons:

  • It comes with a profiler that will tell you which components are most expensive on the server. That allows you to cache only the components you need to.
  • It provides a way to template components so you can cache the rendered components and insert the properties later.

In the long run, because of the number of pages we serve and the percentage of them that are served with 100% public-facing content, we ended up moving to an edge-caching strategy. But your use case may benefit from the Walmart Labs approach.

11.3.3. CDN/edge strategies

Edge caching is the solution we currently use for our isomorphic React app at work. This is due to some business logic needing to expire content on demand (when things change at other points in the system, as in a CMS tool). Modern CDNs such as Fastly provide this capability out of the box and make it much easier to manage TTLs (time to live) and to force-expire web pages. Figure 11.4 illustrates how this works.

Figure 11.4. Adding an edge server moves the caching in front of the server.

Showing you how to implement this is beyond the scope of this book. If you have public-facing content that drives SEO (e-commerce, video sites, blogs, and so forth), you’ll definitely want a CDN in your stack.

One caveat with this approach is that it complicates user session management. The next section explores user sessions and covers the trade-offs with various caching strategies.

11.4. User session management

Modern web applications use cookies in the browser almost without exception. Even if your main product isn’t directly using cookies, any ads, tracking, or other third-party tools that you use on your site will take advantage of cookies. Cookies let the web app know that the same person has come back over time. Figure 11.5 illustrates how this works.

Figure 11.5. Repeat visits by the same user on the server. Saving cookies lets you store information about the user that can be retrieved during future sessions.

Listing 11.9 shows an example module that handles both the browser and server cookie parsing for you. It uses Universal Cookie to help manage the cookies in both environments: www.npmjs.com/package/universal-cookie. You need to install this library for the code to work:

$ npm install --save universal-cookie

Add the code in this listing to a new module src/shared/cookies.es6.

Listing 11.9. Using isomorphic cookie module—src/shared/cookies.es6
import Cookie from 'universal-cookie';                            1

const initCookie = (reqHeaders) => {
  let cookies;
  if (process.env.BROWSER) {                                      2
    cookies = new Cookie();
  } else if (reqHeaders.cookie) {
    cookies = new Cookie(reqHeaders.cookie);                      3
  }
  return cookies;
};

export const get = (name, reqHeaders = {}) => {
  const cookies = initCookie(reqHeaders);                         4
  if (cookies) {
    return cookies.get(name);                                     5
  }
};

export const set = (name, value, opts, reqHeaders = {}) => {
  const cookies = initCookie(reqHeaders);                         4
  if (cookies) {
    return cookies.set(name, value, opts);                        6
  }
};

export default {
  get,
  set
};

  • 1 Import the universal cookie library, which handles the differences between accessing browser and server cookies for you.
  • 2 Check the environment to determine whether reqHeaders are needed.
  • 3 If the headers have cookies, pass this into the cookie constructor.
  • 4 In the getter and setter functions, initialize the cookie object, passing in reqHeaders so it works on the server.
  • 5 Return the result of the cookie lookup.
  • 6 Return the result of setting the cookie. In addition to a name and value, you can pass in all standard cookie options. In most cases you’ll call set from the browser.

Now that you’ve added a way to get and set cookies in both environments, you need to be able to store that information on the app state so you can access it in a consistent way in your application.

11.4.1. Accessing cookies universally

By fetching cookies with an action, you can standardize the way the app interacts with cookies. The following listing shows how to add a storeUserId action to fetch and store the user ID. Add this code to the app-action-creators file.

Listing 11.10. Accessing cookies on the server—src/shared/app-action-creators.es6
import UAParser from 'ua-parser-js';
import cookies from './cookies.es6';                        1

export const PARSE_USER_AGENT = 'PARSE_USER_AGENT';
export const STORE_USER_ID = 'STORE_USER_ID';               2

export function parseUserAgent(requestHeaders) {}

export function storeUserId(requestHeaders) {               3
  const userId = cookies.get('userId', requestHeaders);     4
  return {
    userId,                                                 5
    type: STORE_USER_ID                                     2
  };
}

export default {
  parseUserAgent,
  storeUserId
};

  • 1 Import the cookie module.
  • 2 Add a type for the new action.
  • 3 Add the action, which takes in requestHeaders so that it works on the server.
  • 4 Pass the cookie name and requestHeaders to the cookie module.
  • 5 Put the userId value on the action.

Now you have access to the user ID in your application! It’ll be fetched on the server and can be updated later in the browser as needed. You can apply this concept to any and all user session information. Managing user sessions as a whole is beyond the scope of this chapter.
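For completeness, the reducer that stores the action’s payload might look like the following sketch. The reducer name and state shape are assumptions; the important part is returning a new object so the shallow checks from section 11.1.2 keep working:

```javascript
const STORE_USER_ID = 'STORE_USER_ID'; // matches the constant in listing 11.10

// Hypothetical app reducer handling the storeUserId action.
const appReducer = (state = { userId: null }, action = {}) => {
  switch (action.type) {
    case STORE_USER_ID:
      // Copy state into a new object (immutable update) with the new userId.
      return Object.assign({}, state, { userId: action.userId });
    default:
      return state;
  }
};
```

Dispatching `storeUserId(req.headers)` on the server then puts the cookie-derived ID on the store before the initial render.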

11.4.2. Edge caching and users

When I first started building isomorphic applications, user management seemed simple. You used cookies to track user sessions in the browser as you would in a single-page application. Adding in the server complicates this, but you can read the cookies on the server. As you add in caching strategies, this becomes less straightforward.

Both the in-memory and persisted storage caching strategies work better with user sessions, as each user request still goes to the server, allowing the user’s information to be gathered. You can add the user’s identifying information into your cache key.

But edge caching doesn’t work as well. That’s because for each unique user, you must keep a unique copy of each page that has user-specific data on it. If you don’t, you could end up showing user 1’s information to user 2. That would be bad! Figure 11.6 illustrates this concept.

Figure 11.6. When the edge has to cache pages per user, the benefit of overlapping requests is lost.

If you need to use edge caching and you have user data, you can employ one or more of the following strategies (depending on your content type and your traffic patterns):

  • Create pages that have either user content or general consumption content (public). Then cache only the pages that are public on your edge servers.
  • Save a cookie that tells the edge server whether the user is in an active user session. Use this information to determine whether to serve a cached page or send the request to the server (pass through).
  • Serve pages with placeholder content (solid shapes that show where content will load) and then decide what content to load in the browser.
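The first two strategies can be approximated with response headers. Here’s a sketch of Express-style middleware; the path list, TTL, and exact header values are assumptions, and real CDN configuration varies by vendor:

```javascript
// CDNs like Fastly respect s-maxage for shared caches, while
// 'private, no-store' keeps user-specific pages out of the edge cache.
const PUBLIC_TTL_SECONDS = 300;

const edgeCacheHeaders = (publicPaths) => (req, res, next) => {
  const isPublic = publicPaths.some((path) => req.url.startsWith(path));
  if (isPublic && !req.headers.cookie) {
    // Public page, no active session: safe to cache at the edge.
    res.setHeader('Cache-Control', `public, s-maxage=${PUBLIC_TTL_SECONDS}`);
  } else {
    // User-specific page (or session cookie present): pass through.
    res.setHeader('Cache-Control', 'private, no-store');
  }
  next();
};

// app.use(edgeCacheHeaders(['/products', '/detail']));
```

The cookie check implements the second bullet above: any request carrying a session cookie bypasses the shared cache.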

Summary

This chapter covered several topics that will make your production isomorphic app run better, including performance and caching. You also learned about the complexities of adding certain types of caching to an isomorphic app that deals with user sessions.

  • Use webpack chunking to improve browser performance.
  • Optimize render cycles with shouldComponentUpdate.
  • Improve the server’s performance with streaming and connection pooling.
  • Apply one of three caching strategies (in-memory, persisted, or edge) to improve render times on the server.
  • Manage user sessions via cookies on the browser and the server.
  • Understand the effects of caching strategies on user session management.