11 Performance is key

This chapter covers:

  • Measuring performance when multiple micro frontends exist on one page

  • Finding regressions and bottlenecks and attributing them to the right team

  • Typical performance drawbacks that come with the micro frontends architecture

  • Reducing the amount of required JavaScript by sharing larger vendor libraries across teams

  • Implementing library sharing without compromising team independence

In 2014 my colleague Jens handed me an article 1 written by a company that implemented a vertical style architecture. Back then, the term micro frontends didn’t exist. Being a frontend developer who takes pride in delivering fast user experiences, my first gut reaction to this idea was rejection--strong rejection. “Five teams that all roll their own frontend? This sounds like a lot of overhead. The result will surely be inefficient and slow.”

Today, when I introduce micro frontends to developers, I often get a similar reaction. They understand the concept and its benefits, but sacrificing performance for increased development speed can be hard to swallow. Having worked on micro frontends projects over the last few years, I can say that my initial worries faded quickly. This does not mean that my concerns were unfounded or magically resolved themselves. Autonomy inherently comes with the cost of accepting redundancy. But I learned to focus on the bottlenecks that have a real impact on our users instead of reflexively fighting code duplication.

The micro frontend projects we built all outperformed the monolith they replaced. This resulted in faster responses, less code shipped to the browser, and better overall load times. One factor these projects had in common was that architecting for excellent performance was a top priority from the start and not an afterthought. Another significant benefit I experienced while working with micro frontends was that the architecture made it easier to optimize the user experience in the places where it made the most significant difference. But more on this later.

In this chapter, you’ll learn how to address performance in your micro frontends project. We’ll start with the “definition of fast.” What does “performant” mean for the different parts of your project? Measuring performance and acting upon the results is tricky when the frontend includes code from different teams. I’ll show you some strategies that have proven valuable when architecting for excellent performance. At the end of the chapter, you’ll learn how to keep your JavaScript overhead to a healthy minimum, avoiding large redundant framework downloads while still being able to deploy autonomously.

11.1 Architecting for performance

Early on in the project, Finn, Tractor Models, Inc.'s lead architect, arranged a meeting with developers from all three teams. Together they defined some performance guidelines that act as the default for all pages of the shop. They decided that the total weight of a page should never exceed 1MB of data. The viewport of a page must render in one second under good conditions and three seconds under 3G network conditions.

11.1.1 Different teams, different metrics

They arrived at these values by looking at their competitors' websites. The team knows that excellent performance is vital for e-commerce. Users enjoy sites that feel fast. They spend more time browsing and have a higher chance of actually buying a tractor. But what does feel fast actually mean? Figure 11.1 illustrates this. Different parts of the site have different performance requirements:

  • A user that opens the home page for the first time mainly cares about seeing the content without waiting.

  • On the product page, the main image (hero image) is most important and should be one of the first items to load.

  • When the user enters the checkout process, it’s all about interaction--trusting the system while entering personal data. For that, the software must react reliably and swiftly.

 

Figure 11.1 The metrics a team should optimize for depend on the use case. The performance expectations of the homepage are not the same as the performance goals for the checkout process.

Having some overarching rules that act as a performance baseline is good. You can see them as basic hygiene requirements. But if you want to optimize further, the metrics a team should focus on will vary depending on the context the user is in. Each team must understand the performance requirements of its subdomain and pick its own metrics.

11.1.2 Multi-team performance budgets

Picking a metric and defining a concrete limit is also called a performance budget. 2 A performance budget is a perfect tool to establish a performance-oriented culture inside a team. The mechanism is simple:

  • Your team defines a concrete budget for a specific metric. Say, your site should never be bigger than 1MB.

  • You continuously measure this metric to ensure your site stays in budget. Lighthouse CI, sitespeed.io, Speedcurve, Calibre, Google Analytics, and so on are useful tools for that.

  • If a new feature breaks your site’s budget, the development stops. Developers investigate the cause of the degradation. Then the complete team, including product managers, discusses options to get back into budget: rolling back the change, implementing an optimization, or even removing another feature from the page.

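These checks can run automatically in CI. With Lighthouse's performance budget feature, for example, rules like the tractor shop's guidelines could be expressed in a budget.json file. This is only a sketch; the paths, metrics, and values are placeholders you'd adapt to your own project:

```json
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "total", "budget": 1000 }
    ],
    "timings": [
      { "metric": "interactive", "budget": 3000 }
    ]
  }
]
```

Lighthouse then flags every audited page whose total weight exceeds 1,000 KB or whose time to interactive exceeds three seconds.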
Budgets are a powerful mechanism for fostering performance discussions on a regular basis. But how can budgets work for a site with micro frontends from multiple teams? Should Team A stop their development because Team B’s micro frontend slows down the page? Maybe!

You can address this in different ways:

  • Dividing the budget among all micro frontends. This is the analytical approach. A page containing five micro frontends could, for example, use 500KB for the content of the page itself and grant each included micro frontend a budget of 100KB. Adding everything up gets us to our 1MB (500KB + 5 * 100KB) size budget. In theory, this works, and for metrics like bytes and server response times, it’s possible to measure and sum up the pieces like that. But for metrics like load time, Lighthouse score, or time to interactive, it’s not that linear.

  • Page owners are responsible. This is the social approach. Here budgets are always on a page level. A page owner is in charge of staying in that budget. In our example, Team Decide would be responsible for the product page. The team’s goal is to provide the user with the best experience possible. Should an included micro frontend use an unreasonable amount of resources, Team Decide contacts the owner, explains, and discusses options. You can view an includable micro frontend as a guest that tries to be as well-behaved as possible.
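For byte-based metrics, the arithmetic of the first approach is straightforward. Here is a minimal sketch using the numbers from the example above; the function names are illustrative:

```javascript
// Budgets in KB: the page's own content plus one slice per micro frontend.
const PAGE_BUDGET = 500;
const MICRO_FRONTEND_BUDGET = 100;

// Total byte budget for a page that includes n micro frontends.
function pageBudget(microFrontendCount) {
  return PAGE_BUDGET + microFrontendCount * MICRO_FRONTEND_BUDGET;
}

// Check measured sizes (in KB) against the combined budget.
function isInBudget(pageSize, microFrontendSizes) {
  const total = pageSize + microFrontendSizes.reduce((a, b) => a + b, 0);
  return total <= pageBudget(microFrontendSizes.length);
}

console.log(pageBudget(5)); // → 1000 KB, our 1MB budget
console.log(isInBudget(480, [90, 95, 100, 80, 110])); // → true (955 KB in total)
```

As the bullet points out, this only works for metrics that sum up linearly; you can't split a Lighthouse score into five team-sized slices.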

We’ve had good experiences with the latter approach. It avoids getting into the weeds with too fine-grained budgets: Should a recommendation strip consume 100KB, or is 150KB more reasonable? The responsibility is very clear. When the product page is slow, it’s Team Decide that needs to act. Yes, the team might not have caused the issue, but it’s their task to get back to a performant state by finding the cause of the problem and informing the right team.

For the page-owning team, this might feel cumbersome at first. But in practice, this has worked well for us. No team that provides an includable micro frontend wants to be the “slow kid” that’s holding everyone back. Teams started to measure the performance of their micro frontends in isolation to detect regressions before they go to production.

11.1.3 Attributing slowdowns

Team Decide installed a large dashboard screen in their office area. It shows the performance of their system with live updating charts and big green numbers. One day the team came back from lunch and noticed that the average load time of the product page’s main image had tripled. Before, the product images rendered in around 300 ms; now they take nearly a second. The team checked their last commits but didn’t find any suspicious change that could have caused such an issue. They checked the site in the browser. It didn’t look broken.

They figured that an included micro frontend from another team might be responsible. Since they use server-side integration, the slowdown could be due to a service that has issues producing its micro frontend’s markup (see chapter 4). They opened up the centralized metrics system where they could see the response times of every endpoint in the platform. None of the endpoints that are used to assemble the product page’s markup showed anomalies.

Then they checked their web performance monitoring tool. This tool opens the product page on a regular basis with a real web browser, recording a video of the process and also storing the browser’s network graph. With this tool, the team was able to compare the product page from before lunch with the current slower version. This before-and-after comparison highlighted the real issue. Before, the user loaded four images--Team Decide’s big hero image and three images from the recommendation strip. Now the network graph shows 13 images. With this information and a bit of digging, it became apparent that the recommendation strip was the cause of the slowdown.

It turned out that Team Inspire implemented a carousel feature for their recommendations. Now users can tap on a small arrow to see more matching products. But this simple carousel implementation did not feature any lazy loading. Even though the user only sees three recommendation images at a time, all images from the carousel load up-front. Jeremy, Team Decide’s product owner, walked over to Team Inspire’s office space and explained the issue. Team Inspire rolled back their carousel feature. They implemented lazy loading for the images and reintroduced the optimized version the next day.

Observability

Debugging a distributed system is a challenging task. The root of a problem is not always visible. Investing in proper monitoring will make finding issues much more manageable. If you integrate markup on the server, it’s crucial to know how long the different parts of the page take to produce. Having a central view with the deployments of all teams can help to correlate a measured effect to a specific change in the system.

Monitoring the code that runs in the browser can be tricky. The software from all teams has to share bandwidth, memory, and CPU resources. Having video recordings, network graphs, and metrics you can compare over time is a vital first step. Implementing unique team prefixes for all resources the browser loads also helps (see chapter 3). This way, the ownership of a file that looks suspicious is always evident.
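With such a prefix convention in place, attributing a resource to its owner can even be automated in your monitoring. A small sketch, assuming URL paths follow the /[team]/static/... convention used in this book's examples:

```javascript
// Maps a resource URL to the owning team, assuming paths
// like /decide/static/bundle.js where the team prefix comes first.
const KNOWN_TEAMS = ["decide", "inspire", "checkout"];

function ownerOf(resourceUrl) {
  // The base URL only matters for relative paths; any origin works.
  const path = new URL(resourceUrl, "http://localhost").pathname;
  const prefix = path.split("/")[1];
  return KNOWN_TEAMS.includes(prefix) ? prefix : "unknown";
}

console.log(ownerOf("https://tractor.example/inspire/static/recos.js")); // → "inspire"
console.log(ownerOf("/shared/react.16.11.0.min.js")); // → "unknown"
```

A monitoring dashboard could use a helper like this to group bytes, request counts, or errors by team automatically.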

Isolation

A popular debugging technique is isolating the issue. Imagine you’ve written a piece of software and notice a mysterious bug that you can’t explain. A good strategy for finding the root cause is to comment out parts of your code and check if the bug is still present.

You can apply the same approach to a micro frontends website. The “Block URL” feature in the Network tab of your favorite browser is your friend. Block the scripts or styles from a specific team. This way, you can test your site without the code of Team B or Team C and measure if the performance degradation or error still exists.
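You can also automate this experiment. The sketch below uses Puppeteer's request interception to load a page without one team's assets. The page URL and team names are illustrative, and the prefix-matching helper assumes the team-prefix convention from chapter 3:

```javascript
// Decide whether a request belongs to the team under suspicion,
// based on the team-prefix path convention (e.g., /inspire/...).
function shouldBlock(url, suspectTeam) {
  return new URL(url, "http://localhost").pathname.startsWith(`/${suspectTeam}/`);
}

/* Hypothetical usage with Puppeteer: measure the page without Team Inspire's code.

const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setRequestInterception(true);
  page.on("request", req =>
    shouldBlock(req.url(), "inspire") ? req.abort() : req.continue()
  );
  await page.goto("https://the-tractor.store/product/fendt");
  // ...collect performance metrics here and compare with an unblocked run...
  await browser.close();
})();
*/
```

Comparing a blocked run with an unblocked one tells you how much a single team's code contributes to the slowdown.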

11.1.4 Performance benefits

I often talk about the performance challenges the micro frontends concept introduces. But this approach also comes with a couple of positive properties.

Built-in code splitting

With the move to HTTP/2, it has become a best practice to split an application’s JavaScript and CSS code into smaller pieces. We talked about this in chapter 10. Delivering the code in smaller chunks (per team, per micro frontend) rather than in one monolithic blob has benefits:

  • Cacheability--Browsers only need to redownload the parts of the code that changed. Not everything. Micro frontends are often used in conjunction with continuous delivery, where teams deploy to production several times a day.

  • Fewer long-running tasks--The browser’s main thread becomes unresponsive as it processes a JavaScript file. Loading multiple smaller files gives it more room to breathe and accept user input in-between processing the JavaScript resources.

  • On-demand loading--Since the assets are often grouped by team or micro frontend, it’s easy to include only the code a page needs or implement route-based loading as we saw in the single-spa example. The user doesn’t have to download the code for the cart page when they visit the homepage.

These benefits are by no means exclusive to micro frontends. You can also achieve these optimizations in a well-architected monolithic frontend. But the way you think about and develop features in a micro frontends project naturally guides you toward this structure.

Optimizing for the use case

Developers working in a micro frontends team have a much narrower scope. A team focuses on one specific set of use cases to help the customer. It’s in the interest of the team to optimize this use case as much as possible.

Let me give you an example. Imagine Team Inspire is responsible for displaying promotional teasers in different areas of the shop. These images or videos are often large in size and have a considerable performance impact. Since the team controls the complete process from teaser creation, uploading, and delivery, it’s easy for them to experiment with new file formats like WebP, AV1, or H.265 to speed up teaser loading.
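Such an experiment can be as simple as serving both formats and letting the browser pick. Here is a sketch using the standard picture element; the file names and teaser content are made up:

```html
<!-- Browsers that support WebP use it; everyone else
     falls back to the JPEG version of the teaser. -->
<picture>
  <source srcset="/inspire/teasers/summer-sale.webp" type="image/webp">
  <img src="/inspire/teasers/summer-sale.jpg" alt="Summer sale: all tractor models">
</picture>
```

Because Team Inspire owns the full pipeline, they can roll this out for teasers alone and measure the effect in isolation.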

They don’t have to think about what a format switch would mean for product images or user-uploaded review videos. This focus on teasers allows them to move quicker. No big meetings with everyone who has an opinion about image or video formats. No grand rollout plans. No big business case calculations. No compromises. The team has everything it needs to improve its teasers.

After the experiment, Team Inspire shares their learnings with the other teams. They’re helping the other teams to not fall into the same traps when they try something similar.

Being able to have this kind of focus and control is the biggest strength of a micro frontends architecture. Not only can it lead to better web performance of the individual pieces, but it improves quality and increases user focus.

Easier changes

The narrower scope makes it possible for a developer to know every aspect of the software--something that is not possible in sizeable monolithic frontend projects. Have you ever deleted an old dependency that isn’t in use anymore, only to realize two days later that you broke an obscure marketing page you didn’t even know existed? I definitely have.

Having clearly isolated micro frontends reduces the risk of cleaning up dramatically. This makes it easier to lose old cruft and evolve the software.

11.2 Reduce, reuse... vendor libraries

The most discussed performance optimization topic for micro frontends is how to deal with libraries that are the same across teams. Downloading the same code twice triggers a Pavlovian reflex in every frontend developer: “This is inefficient, and we must avoid it!” But let’s take a step back and question this reflex. We’ll take a more analytical look at the topic of redundant code.

11.2.1 Cost of autonomy

Tractor Models, Inc.’s three development teams all chose to go with the same JavaScript framework to build their frontend. Before they started the project, lead architect Finn and the teams discussed three different options:

  1. Alignment --Everyone uses the same framework.

  2. No constraints --Every team can choose the framework they want.

  3. Some constraints --Free choice, as long as the framework has specific properties, like having a runtime that’s smaller than 10 KB.

Each option has its benefits and drawbacks. They went for the everyone uses the same framework option for two reasons. First, teams can help each other because they are all familiar with the same stack. Second, recruiting is easier: developers switching teams get up to speed quickly, and human resources can use the same job profile for all teams.

Although all teams start with the same tech stack, lead architect Finn emphasized that this decision is not set in stone. It should be possible for a new team to pick another stack if there are good reasons. The same goes for version upgrades or migration to a newer, better framework in the future. Teams must maintain their autonomy. All integration techniques and architecture-level artifacts have to be technology agnostic.

They use the JavaScript framework to generate server-side markup. However, for interactive features, the framework also needs to run in the browser. Each team has its own git repository and a dedicated deployment pipeline. The team’s JavaScript bundler builds an optimized asset file that’s self-contained. It includes everything that the team uses. If teams use the same dependency, the client will download it multiple times. We can optimize this by providing the large framework code as a separate download from a central place. See the example in figure 11.2.

Figure 11.2 A team’s JavaScript should be self-contained. It should be able to function on its own. That’s why bundling all dependencies and vendor libraries is the easiest option (left side). When all teams use the same framework, it could be a worthwhile optimization to host the framework code in a central place (right side). This reduces the amount of network traffic and lowers memory footprint and CPU usage on the user’s device.

The figure shows three teams using the same framework. In our case, the framework code makes up 50% of each team’s bundle size. Removing the framework from the team bundles and providing it from a central place decreases the JavaScript size by 33%. The user saves two framework downloads. This sounds like a good optimization, but before we go ahead and build this, we should look at real numbers and the project’s demands.
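The numbers behind figure 11.2 can be sketched like this; the sizes are illustrative:

```javascript
// Three teams, each shipping a 100 KB bundle of which
// 50 KB is the shared framework (50%, as in figure 11.2).
const TEAMS = 3;
const BUNDLE_KB = 100;
const FRAMEWORK_KB = 50;

// Self-contained bundles: the framework is downloaded once per team.
const selfContained = TEAMS * BUNDLE_KB; // 300 KB

// Centralized: application code per team plus one framework download.
const centralized = TEAMS * (BUNDLE_KB - FRAMEWORK_KB) + FRAMEWORK_KB; // 200 KB

const saving = 1 - centralized / selfContained;
console.log(`${Math.round(saving * 100)}% less JavaScript`); // → "33% less JavaScript"
```

The saving grows with the number of teams on a page and with the vendor share of each bundle, which is exactly what the next two sections examine.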

11.2.2 Pick small

The amount of overhead obviously depends on the framework and other libraries you choose. Going with a large framework like Angular will increase the need to centralize vendor code. 3 Even though the big frameworks have gained a lot of popularity, you can see a trend toward adopting smaller libraries and frameworks.

Picking a stack like Preact, hyperapp, lit-html, or Stencil will reduce framework overhead. Tools like Svelte go a step further. They don’t have vendor runtime code at all. The source code gets compiled to native DOM operations. This way, your JavaScript bundle grows proportionally with the features you build. No fixed costs are introduced by the framework.

No worries, we won’t go into the “What’s the best framework?” discussion. Comparing a batteries-included framework like Angular to a small templating library like lit-html is an apples-to-oranges comparison. However, since the individual micro frontends are smaller in scope, it can be a viable option not to pick an almighty framework that has you covered for everything the future can bring. It might be worth going with a leaner option that’s better tailored to your use case. If your bundle includes little vendor code, the overhead of loading it multiple times diminishes.

The other factor you should take into consideration is your team boundaries. How much composition does a typical page require? If you don’t use composition at all and every team manages its own set of pages, there is no overhead when loading a page. The only disadvantage is that vendor libraries aren’t cached between the pages of different teams. But the importance of minimizing redundancy increases with the number of teams that run a micro frontend on one page. Figure 11.3 shows a rough calculation that gives you an idea of how much code we are talking about.

Figure 11.3 The potential savings depend on the portion of vendor code the teams include and the number of teams that are active on one page. Using small dependencies reduces the overhead noticeably. If multiple teams include larger libraries, you can save a lot of JavaScript by centralizing vendor code.

Now we have rough numbers to quantify the overhead in bytes. It’s still essential to measure the real performance implications for your use case and target audience. Intelligent on-demand loading and good code splitting can make a more significant difference than shaving an extra 25 KB off your JavaScript bundle.

11.2.3 One global version

The teams at Tractor Models, Inc. decided that centralizing their framework code is an optimization worth pursuing. They wanted to start with the most straightforward implementation possible. When we can assume that all teams are on the same version of one framework, we can use a low-tech solution:

  1. Include the framework via a global script tag.

  2. Exclude the framework from the team bundles and reference the global variable instead.

The associated HTML code can look like this.

Listing 11.1 team-decide/index.html

<body>
  ...
  <script src="/shared/react.16.11.0.min.js"></script>
  <script src="/shared/react-dom.16.11.0.min.js"></script>
 
  <script src="/decide/static/bundle.js" async></script>
  <script src="/inspire/static/bundle.js" async></script>
  <script src="/checkout/static/bundle.js" async></script>
</body>

The React script tags attach their code to the window object. Teams can call it via window.React or window.ReactDOM. All bundlers provide an option to mark a library as “globally available.” Webpack calls this concept externals. This removes the code from the bundle and replaces it with a reference to a given variable. The configuration for Webpack can look like this.

Listing 11.2 team-decide/webpack.config.js

const webpack = require("webpack");
 
module.exports = {
  externals: {
    react: 'React',
    'react-dom': 'ReactDOM'
  }
  ...
};

Voilà! That’s it. We’ve eliminated the redundant framework code.

But we’ve created a new central artifact (/shared/...) which someone must maintain. Tractor Models, Inc. decided not to establish a dedicated platform team. Instead, one of the feature teams should take over responsibility. Team Checkout volunteered to do the job. This team now ensures that the files get deployed to the correct location and coordinates version upgrades with their neighbor teams.

11.2.4 Versioned vendor bundles

The centralization worked well and improved performance measurably. Keeping the React version up-to-date was also not a big issue for Team Checkout. Every time a new React version came out, they informed all teams they needed to test their software against it, deployed the new files to the shared folder two days later, and ensured that the markup references the new script.

But when React 17, the next major version, was announced, it became complicated. It included breaking changes requiring the teams to restructure parts of their existing software.

Team Checkout and Team Decide made the required changes to their codebase in the first week after the announcement. However, they were not able to deploy their migration because Team Inspire wasn’t ready. This team was in the middle of a major rewrite of their recommendation algorithms to bring personalized product suggestions to the next level. Moving to the next version of React at the same time was not an option for them. This task had to wait until the algorithm update shipped. So the other teams had no choice but to park their changes in a git branch and wait for Team Inspire.

Three weeks later, Team Inspire managed to get their React migration done--paving the way to finally ship the new framework. The teams agreed on a day and time for deploying all software systems and the updated React library together. Otherwise, the site might have broken because the central framework did not match the application code.

This is what’s often referred to as a lock-step deployment. If you’re operating on a smaller scale, it might be fine to do such a manually orchestrated deployment from time to time. It gets extra fun when one team discovers that they need to roll back to the previous version because they found a severe bug. Then we have a lock-step rollback. These kinds of activities are exhausting and unsatisfying. Furthermore, they contradict the autonomous deployment paradigm of micro frontends.

A solution to this problem is to move away from one central framework to a versioned approach. Figure 11.4 shows a deployment process where two teams upgrade from Vue.js 2 to Vue.js 3:

  1. Before the migration, both teams reference version 2.

  2. Vue.js 3 gets published as a shared library.

  3. Team B migrates first and deploys their software, which now references the new framework.

  4. Team A migrates as well. Now both teams are on version 3. The old version 2 is not referenced any more.

 

Figure 11.4 A framework upgrade process. Team A and Team B both reference Vue.js v2 from a central location, so the framework is loaded once. When Team B migrates to Vue.js v3, the user has to download two Vue.js libraries (v2 and v3). In the last step, Team A also migrates to the latest version. At this point, both teams use Vue.js v3, and the user again downloads only one version.

With this approach, both teams can upgrade at their own pace. They control which version of the library their code references. Even a rollback is possible without having to coordinate with another team. The only drawback is that the total download size increases during the migration phase.

There are a lot of different ways of achieving this. Let’s explore some possible solutions.

Webpack DllPlugin

TIP You can find the sample code for this in the 19_shared_vendor_webpack_dll folder.

The Webpack bundler enjoys great popularity. It includes a tool called DllPlugin. 4 Somewhat confusingly, it takes its name from the dynamic link library concept Windows users are familiar with. The plugin works in two steps:

  1. You create a versioned bundle containing the shared dependencies. The plugin generates the JavaScript, which you can host statically, and a manifest file. Think of the manifest as the table of contents for the vendor bundle.

  2. You provide this manifest to the teams (e.g., via an NPM package). The team’s Webpack configuration reads that manifest, omits all listed vendor libraries from its own build, and adds references to the versioned libraries of the central vendor bundle.

Let’s look at the structure of the sample project in figure 11.5.

Figure 11.5 Folder structure of the Webpack DllPlugin example project. We’ve introduced a shared-vendor/ folder that sits beside the teams. It’s the project that generates the shared vendor bundles (static/) using the DllPlugin. It also includes the manifest_[x].json files for each version. The teams use Webpack for packaging their application code. You can find the configuration in webpack.config.js.

The shared-vendor/ folder contains the JavaScript and manifest code for versions 15 and 16.

Creating the versioned bundle

We’ll go through the essential pieces required to make this happen. Here is an excerpt from the vendor bundles package.json.

Listing 11.3 shared-vendor/package.json

{
  "name": "shared-vendor",
  "version": "16.12.0",
  "dependencies": {            
    "react": "^16.12.0",       
    "react-dom": "^16.12.0"    
  },                           
  ...
}

Specifying the dependencies and their version

Here is the Webpack code for generating JavaScript and manifest.

Listing 11.4 shared-vendor/webpack.config.js

const path = require("path");
const webpack = require("webpack");
 
module.exports = {
  ...
  entry: { react: ["react", "react-dom"] },               
  output: {                                               
    filename: "[name]_16.js",                             
    path: path.resolve(__dirname, "./static"),            
    library: "[name]_[hash]"                              
  },                                                      
  plugins: [
    new webpack.DllPlugin({                               
      context: __dirname,                                 
      name: "[name]_[hash]",                              
      path: path.resolve(__dirname, "manifest_16.json")   
    })                                                    
  ]
};

List of dependencies to include in the vendor bundle. Here one bundle called react gets created. It contains the code of react and react-dom.

Configuring location and name for the JavaScript code

Adding the DllPlugin and specifying where to write the manifest file

Using the versioned bundle

The teams must have access to the desired manifest at build time. Publishing the shared-vendor project as an NPM module is one option to do this. Here is the package.json for a team.

Listing 11.5 team-decide/package.json

{
  "name": "team-decide",
  "dependencies": {
    ...
    "react": "^16.12.0",                        
    "react-dom": "^16.12.0",                    
    "shared-vendor": "file:../shared-vendor"    
  },
  ...
}

Specifying the framework dependencies

Referencing the shared-vendor package. We use the file: syntax to make it happen locally. In a real project we would publish it as a properly named and versioned package like this: @the-tractor-store/shared-vendor@16.12.0.

The Webpack configuration of the team looks like this.

Listing 11.6 team-decide/webpack.config.js

const webpack = require("webpack");
const path = require("path");
 
module.exports = {
  entry: "./src/page.jsx",                                  
  output: {                                                 
    ...                                                     
    publicPath: "/static/",                                 
    filename: "decide.js"                                   
  },                                                        
  plugins: [
    new webpack.DllReferencePlugin({                        
      context: path.join(__dirname),                        
      manifest: require("shared-vendor/manifest_16.json"),  
      sourceType: "var"                                     
    })                                                      
  ]
  ...
};

Entry point of Team Decide’s application

Configuring where the generated files should go

Adding the DllReferencePlugin and pointing it to the manifest_[x].json of the shared-vendor package

This is a pretty standard Webpack configuration. Adding the DllReferencePlugin is the special part. It performs the magic of omitting the code of all vendor libraries specified in the manifest.json and replacing it with references to the central bundle.

Are you curious about the manifest’s content? Let’s have a look inside.

Listing 11.7 shared-vendor/manifest_16.json

{
  "name": "react_a00e3596104ad95690e8",           
  "content": {
    "./node_modules/react/index.js": {            
      "id": 0,                                    
      "buildMeta": { "providedExports": true }    
    },                                            
    "./node_modules/object-assign/index.js": {    
      "id": 1,                                    
      "buildMeta": { "providedExports": true }    
    },                                            
    ...
  }
}

Unique internal name. Ensures that different DLLs can exist on one page.

List of node modules that the bundle contains

The bundle also contains the dependencies of the dependencies.

The last step in our process is adjusting the script tags in the HTML to ensure that the bundles load in the correct order.

Listing 11.8 team-decide/index.html

<html>
  ...
  <body>
    <decide-product-page></decide-product-page>
    <script src="http://localhost:3000/static/react_15.js"></script>     
    <script src="http://localhost:3000/static/react_16.js"></script>     
    <script src="http://localhost:3001/static/decide.js" async></script>
    <script src="http://localhost:3002/static/inspire.js" async></script>
    <script src="http://localhost:3003/static/checkout.js" async></script>
  </body>
</html>

Including the bundles for both React versions

It’s crucial that the vendor bundles execute before the teams’ code. Run the example locally (npm run 19_shared_vendor_webpack_dll) and look at the output at http://localhost:3001/product/fendt. Figure 11.6 shows the result.

 

You can see that the micro frontends run on different React versions.


Figure 11.6 Team Decide’s and Team Checkout’s micro frontends run on React 16. Team Inspire still uses version 15. The different versions can coexist on the same page.

The DllPlugin has some benefits compared to the “one global version” approach:

  • Safe way to globally provide different versions of the same library.

  • A vendor bundle can contain more than one library.

  • The manifest.json is machine-readable, distributable documentation of the vendor bundle.

  • Works in all browsers.

But there are some drawbacks:

  • No on-demand or dynamic loading of vendor assets. The vendor bundle has to be loaded before the application code that relies on it. The application code does not automatically pull in the vendor bundle it needs.

  • All teams must use Webpack. The vendor bundle uses Webpack’s internal module loading and referencing code.

NOTE At the time of writing this book, there’s a lot of work going on to improve Webpack’s code sharing abilities across projects. Webpack 5 will introduce a technique called module federation 5 that addresses many micro frontend requirements.

Let’s explore a third option that’s based on JavaScript’s new ES Modules standard.

Central ES modules (rollup.js)

Nowadays, all relevant browsers (except Internet Explorer 11) 6 support JavaScript’s native module system with the import/export syntax. This opens up new possibilities for sharing dependencies without requiring a specific bundler.

TIP You can find the sample code for this in the 20_shared_vendor_rollup_absolute_imports folder.

Let’s take a quick look at the capabilities of the import mechanism. The spec calls the dependency string a module specifier. Here is a list of the different specifier types:

  • Relative path (starts with a dot)

    import Button from "./Button.js"

  • Absolute path (starts with a slash)

    import Button from "/my/project/Button.js"

  • Bare specifier (simple string)

    import React from "react"

  • URL (starts with a protocol)

    import React from "https://my.cdn/react.js"

TIP If you want to learn more about ES modules, I recommend this resource 7 as a starting point.

In this example, we’ll use the last option: the absolute URL. The concept of this example is the same as in the previous Webpack case:

  • We have a shared-vendor project that creates versioned bundles containing react and react-dom. But the files are now standard ES modules.

  • We adjust the team projects to reference the vendor bundle by using an absolute URL.

In production, the code that runs in the browser works like this.

Listing 11.9 shared-vendor/static/react_16.js

export default [...react implementation...];

Listing 11.10 team-decide/static/decide.js

import React from "http://localhost:3000/static/react_16.js";

The central React JavaScript file is in ES module format, and the teams point to it via a URL.

You could ship this code without using a bundler at all. For our example, we use rollup.js 8 to ship react and react-dom in one bundle file and build and optimize our team’s code for production. Rollup.js recognizes absolute URL dependencies (http://..) and leaves them untouched. This is something that isn’t yet possible with Webpack.
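If you want to be explicit about this behavior — or your bundler version doesn’t detect it automatically — Rollup’s external option can mark any URL import as external. A sketch, not taken from the sample code:

```js
// rollup.config.js (sketch): keep http(s) imports out of the bundle
export default {
  input: "src/page.jsx",
  output: { file: "static/decide.js", format: "esm" },
  // any import that starts with an absolute URL is left untouched in the output
  external: id => id.startsWith("http://") || id.startsWith("https://")
};
```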

We won’t go through the full code but will highlight the significant parts. Figure 11.7 shows the folder structure of the sample code.

Figure 11.7 The shared-vendor project creates versioned bundles in ES module format using rollup.js. The other teams also use rollup.js.

Creating the versioned bundle

Rollup’s configuration is straightforward. We define input and advise it to write the bundle as an ES module (esm) to the static/ folder.

Listing 11.11 shared-vendor/rollup.config.js

...
export default {
  input: "src/index.js",         
  output: {                      
    file: "static/react_16.js",  
    format: "esm"                
  },                             
  plugins: [...]
};

Input file specifies what should go into the vendor bundle

Output defines the target location of the bundle and sets its format

The src/index.js imports react and react-dom and re-exports React as the default export and ReactDOM as a named export. This way, Rollup creates one bundle that contains both libraries.

Listing 11.12 shared-vendor/src/index.js

export { default } from "react";
export { default as ReactDOM } from "react-dom";

That’s everything we need to do to create the vendor bundle. As with the earlier example, the generated file will be available at http://localhost:3000/static/react_16.js. Let’s look at how we configure the team’s React applications to use this bundle.

Using the versioned bundle

The team’s rollup configuration is basically the same as the one we saw before: configuring input, output, and setting the format. It includes a few plugins to deal with JSX, Babel, and CSS, but these are all straight from the official documentation.

Listing 11.13 team-decide/rollup.config.js

export default {
  input: "src/page.jsx",
  output: {
    file: "static/decide.js",
    format: "esm"
  },
  plugins: [...]
};

Let’s have a look inside the input file src/page.jsx. To use the globally provided vendor bundle, we need to set our imports accordingly. In a traditional React application, you would use a bare specifier like this:

import React from "react"

The bundler then searches for react in your node_modules and includes it. In our case, we can specify the absolute URL:

import React from "http://localhost:3000/static/react_16.js";

Rollup.js will treat this as an external resource. Since all components in a React application need to import react, it’s a little cumbersome to always write the absolute URL to the versioned file. In the example, I’ve used Rollup’s alias feature 9 to configure this in a central place. This way, the application code can stay as is, and Rollup replaces all instances of react with the absolute URL on build.
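Such an alias setup could look roughly like the following sketch. It uses @rollup/plugin-alias; the plugin choice and the URL here are assumptions based on this example’s setup, not the book’s exact configuration:

```js
// team-decide/rollup.config.js (sketch)
import alias from "@rollup/plugin-alias";

export default {
  input: "src/page.jsx",
  output: { file: "static/decide.js", format: "esm" },
  plugins: [
    alias({
      entries: [
        // every bare "react" import becomes a reference to the shared bundle
        { find: "react", replacement: "http://localhost:3000/static/react_16.js" }
      ]
    })
  ]
};
```

Upgrading to a new React version then becomes a one-line change in the config instead of a search-and-replace across the codebase.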

The absolute URL approach has two significant benefits:

  1. It’s standards-based. Asset sharing is an architecture decision that affects all teams. Changing it later on in the project will produce a non-trivial amount of work. Relying on standards makes future changes in tooling or libraries much more manageable. Want to switch your bundler? No problem, as long as it supports ES modules.

  2. Dynamic loading of required vendor bundles. The DllPlugin requires you to load the vendor files synchronously, before the application code. With ES modules, the application code requests the vendor bundle(s) it needs. If a bundle is already downloaded because another micro frontend requested the same module, the browser reuses it.

The dynamic loading makes the integration code a lot simpler. Here is Team Decide’s HTML file.

Listing 11.14 team-decide/index.html

<html>
  ...
  <body>
    <decide-product-page></decide-product-page>
    <script src="http://localhost:3001/static/decide.js" type="module" async></script>
    <script src="http://localhost:3002/static/inspire.js" type="module" async></script>
    <script src="http://localhost:3003/static/checkout.js" type="module" async></script>
  </body>
</html>

The HTML only needs to reference the JavaScript files from the teams. These download the central bundles when needed.

 

Start the example locally by running npm run 20_shared_vendor_rollup_absolute_imports. In the first view, it looks exactly like the previous example. Two teams use React 16. One team is still on React 15. Opening up the developer tools shows a difference. In the Network tab, you see that the three application bundles load first (small parallel downloads). Then they request their associated vendor bundle (large parallel downloads). You can see this in figure 11.8. The Initiator column shows the team that first requested the bundle.

Figure 11.8 Different framework versions on one page by using ES modules. The Network tab shows which team initiated the download of a specific vendor bundle. Team Decide and Team Checkout both reference react_16.js. Team Checkout was the first to request it. Team Inspire references the react_15.js bundle.

Import-maps

In the previous example, we used Rollup’s alias plugin to make our life easier. It saved us the hassle of using an absolute URL in all files that require react. Let’s look at import-maps, a proposed web standard 10 that can simplify our loading process even further. It provides a declarative way to map bare specifiers to absolute URLs. An import-map looks like this:

<script type="importmap">                                  
  {
    "imports": {
      "vue": "https://my.cdn/vue@2.6.10/vue.js",
      "vue@next": "https://my.cdn/vue@3.0.0-beta/vue.js"
    }
  }
</script>

Introduces the new script-type importmap

Maps the bare specifier vue to the current version of the framework

Maps the bare specifier vue@next to the upcoming version of the framework

The definitions from the import-map apply globally. Teams can reference the current version of Vue.js by importing vue. They don’t need to know the URL of the shared bundle. The following example illustrates that:

<!-- Team A -->
<script type="module">
  import Vue from "vue";
  console.log(Vue.version);
  // -> 2.6.10
</script>
 
<!-- Team B -->
<script type="module">
  import Vue from "vue@next";
  console.log(Vue.version);
  // -> 3.0.0-beta
</script>

More about import-maps

Import-maps are a promising solution but not an official standard yet. Right now, the preceding code only works in Chrome when you’ve activated a feature flag.

If you want to use them today, I recommend having a look at SystemJS. 11 SystemJS maintainer and single-spa developer Joel Denning has published a video series 12 on using import-maps and SystemJS with micro frontends.

Podium developer Trygve Lie has written an introduction to using import-maps in a micro frontend context. 13 He has also authored a rollup plugin 14 that works similarly to our alias approach but takes an import-map as its input.

11.2.5 Don’t share business code

Extracting large pieces of vendor code is a powerful technique. You’ve learned a couple of ways to achieve it. But you should be careful of what you extract.

It’s tempting to share snippets of code that every team uses, like currency formatting, debugging functions, or API clients. But since this is business code that tends to change over time, you should avoid that.

Having a similar piece of code in the codebase of multiple teams feels wasteful. However, sharing code creates coupling that you shouldn’t underestimate. Someone has to be responsible for maintaining it. Changes to shared code have to be well thought-out and appropriately documented. Don’t be afraid of copying and pasting snippets of code from other teams. It can save you a lot of hassle.

If you’re confident that it’s a good idea to share a specific piece of code with other teams, you should instead do it as an NPM package that teams include at build time. Try to avoid runtime dependencies. They increase complexity and make your application harder to test.
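To make the trade-off concrete: a helper like the currency formatter mentioned earlier is often just a few lines. Copying it into each team’s codebase avoids a runtime dependency altogether. This snippet is illustrative only and not part of the book’s sample code:

```javascript
// A formatter small enough that copying it between teams is usually
// cheaper than maintaining it as a shared, coupled dependency.
function formatPrice(cents, currency = "EUR", locale = "de-DE") {
  return new Intl.NumberFormat(locale, {
    style: "currency",
    currency
  }).format(cents / 100);
}

console.log(formatPrice(129900)); // e.g. "1.299,00 €"
```

If a team later needs a different rounding rule or locale handling, it can change its own copy without coordinating a release with anyone else.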

In the next chapter, we’ll talk about code that’s often shared in micro frontends projects: the design system.

Summary

  • Performance budgets are an excellent tool to foster performance discussions on a regular basis. They also form a shared baseline that all team members can agree upon.

  • Having some project-wide performance targets is valuable. If teams want to optimize further, they might pick different metrics because they work on different use cases. The performance requirements for the homepage are not the same as the requirements for the checkout process.

  • Measuring performance is tricky when micro frontends from multiple teams exist on one site. Having clear responsibilities helps. The owner of a page can also be responsible for the overall page performance. If another team’s micro frontend slows down the page, the page owner informs that team to fix the issue.

  • It makes sense to measure the performance characteristics of a micro frontend in isolation to detect regressions and anomalies.

  • Micro frontend teams have a narrower scope that they are responsible for. This makes it easier for them to optimize performance in the places where it has the most significant effect on the user.

  • The size of your JavaScript framework and the number of teams on a page have an impact on performance. Because teams have a smaller scope, it might be a viable solution to pick a lighter framework. This eliminates the need for vendor code centralization.

  • You can improve performance by extracting large libraries from the team’s application bundles and serving them from a central place.

  • Sharing assets introduces extra complexity and requires maintenance.

  • You should measure the real impact of redundant JavaScript code for your use case and target audience.

  • Forcing all teams to run the same version of a framework can become complicated for major version upgrades. Teams have to deploy in lock-step to avoid breaking the page.

  • Allowing teams to upgrade dependencies at their own pace is an important feature and can save a lot of discussion. You can achieve this by implementing versioned asset files that can work side-by-side. Use Webpack’s DllPlugin or native ES modules to implement this.

  • Only centralize generic vendor code. Sharing business code introduces coupling, reduces autonomy, and can lead to problems in the future.


1.S. Kraus, G. Steinacker, O. Wegner. “Teile und Herrsche: Kleine Systeme für große Architekturen,” OBJEKTspektrum 05/2013 (German), http://mng.bz/xWDg.

2.See Tim Kadlec, “Setting a performance budget,” Tim Kadlec, http://mng.bz/VgAW.

3.If you are building an Angular project, you should check out Manfred Steyer’s work on Angular, micro frontends, and reducing Angular bundle size at https://www.softwarearchitekt.at/blog/.

4.See https://webpack.js.org/plugins/dll-plugin/.

5.See Zack Jackson, “Webpack 5 Module Federation: A game-changer in JavaScript architecture,” inDepth.dev, http://mng.bz/Z285.

6.See https://caniuse.com/#feat=es6-module.

7.JavaScript for impatient programmers by Dr. Axel Rauschmayer, https://exploringjs.com/impatient-js/ch_modules.html.

8.See https://rollupjs.org/.

9.See http://mng.bz/RAYD.

10.See https://github.com/WICG/import-maps.

11.See http://mng.bz/2X89.

12.See “What are Microfrontends?” http://mng.bz/1zWy.

13.See http://mng.bz/PApg.

14.See http://mng.bz/JyXP.
