Chapter 3. Micro-Frontend Architectures and Challenges

A micro-frontend represents a business domain that is autonomous, independently deliverable, and owned by a team. The key takeaways in this description, which will be discussed later, are closely linked to the principles behind micro-frontends:

  • Business domain representation

  • Autonomous codebase

  • Independent deployment

  • Single-team ownership

Micro-frontends can be implemented in many ways. Choosing the right approach depends on the project requirements, the organization's structure, and the developers' experience.

In these architectures, we face some specific challenges bound to recurring questions, such as how micro-frontends should communicate with each other, how we want to route the user from one view to another, and, most importantly, how we identify the right size of a micro-frontend.

In this chapter, we will cover the key decisions to make when we initiate a project with a micro-frontends architecture. We’ll then discuss some of the companies using micro-frontends in production and their approaches.

Micro-frontends Decisions Framework

There are different approaches for architecting a micro-frontends application. To choose the best approach for our project, we need to understand the context we’ll be operating in.

Some architectural decisions will need to be made upfront because they will direct future decisions, like how to define a micro-frontend, how to orchestrate the different views, how to compose the final view for the user, and how micro-frontends will communicate and share data.

These types of decisions are called the micro-frontends decisions framework. It is composed of four key areas:

  • defining what a micro-frontend is in your architecture

  • composing micro-frontends

  • routing micro-frontends

  • communicating between micro-frontends

Define Micro-frontends

Let’s start with the first key decision, which will have a heavy impact on the rest. We need to identify how we consider a micro-frontend from a technical point of view.

We can decide to have multiple micro-frontends in the same view or to have only one micro-frontend per view (Figure 3-1).

Horizontal vs. vertical split
Figure 3-1. Horizontal vs. vertical split

With the horizontal split, multiple micro-frontends will be on the same view. Multiple teams will be responsible for parts of the view and will need to coordinate their efforts. This approach provides greater flexibility, considering we can even reuse some micro-frontends in different views, although it also requires more discipline and governance to avoid ending up with hundreds of micro-frontends in the same project.

In the vertical split scenario, each team is responsible for a business domain, like the authentication or the catalog experience. In this case, domain-driven design (DDD) comes to the rescue. It’s not often that we apply DDD principles on frontend architectures, but in this case, we have a good reason to explore it.

DDD is an approach to software development that centers the development on programming a domain model that has a rich understanding of the processes and rules of a domain.

DDD starts from the assumption that each software should reflect what the organization does and architectures should be designed based on domains and subdomains leveraging ubiquitous languages shared across the business.

Applying DDD to the frontend is slightly different from applying it to the backend: certain concepts are definitely not applicable, although others are fundamental for designing a successful micro-frontends architecture.

For instance, Netflix’s core domain is video streaming; the subdomains within that core domain are the catalogue, the sign-up functionality, and the video player.

There are three subdomain types:

  • Core subdomains: These are the main reasons an application should exist. Core subdomains should be treated as first-class citizens in our organizations because they are the ones that deliver value above everything else. The video catalog would be a core subdomain for Netflix.

  • Supporting subdomains: These subdomains are related to the core ones but are not key differentiators. They could support the core subdomains but aren’t essential for delivering real value to users. One example would be the voting system on Netflix’s videos.

  • Generic subdomains: These subdomains are used for completing the platform. Often companies decide to go with off-the-shelf software for them because they're not strictly related to their domain. With Netflix, for instance, the payments management is not related to the core subdomain (the catalog), but it is a key part of the platform because without it users cannot access the authenticated section.

Let’s break down Netflix with these categories (Table 3-1).

Table 3-1. Subdomains examples

Subdomain type         Example

Core subdomain         Catalog

Supporting subdomain   Voting system

Generic subdomain      Sign in or sign up

Domain-Driven Design with Micro-Frontends

Another important term in DDD is the bounded context: a logical boundary that hides the implementation details, exposing an application programming interface (API) contract to consume data from the model present in it.

Usually, the bounded context translates the business areas defined by domains and subdomains into logical areas where we define the model, our code structure, and potentially, our teams. Bounded context defines the way different contexts are communicating with each other by creating a contract between them, often represented by APIs. This allows teams to work simultaneously on different subdomains while respecting the contract defined upfront.

Often in a new project, subdomains overlap with bounded contexts because we have the freedom to design our system in the best way possible. Therefore, we can assign a specific subdomain to a team for delivering a certain business value, defining the contract upfront. However, in legacy software, a bounded context can accommodate multiple subdomains, because those systems often were not designed with DDD in mind.

The micro-frontends ecosystem offers many technical approaches. Some implementations are done with iframes, while others are done with components library or web components. Too often we spend our time identifying a technical solution without taking the business side into consideration.

Think about this scenario: three teams, distributed in three different locations, working on the same codebase.

These teams may go for a horizontal split, using iframes or web components for their micro-frontends. After a while, they realize that micro-frontends in the same view need to communicate somehow. One of those teams then becomes responsible for aggregating the different parts inside the view, and that team will spend more time assembling the different micro-frontends in the same view and debugging to make sure everything works properly.

Obviously, this is an oversimplification. It could be worse when taking into consideration the different time zones, cross-dependencies between teams, knowledge sharing, or a distributed team structure.

All those challenges could escalate very easily to low morale and frustration on top of delivery delays. Therefore we need to be sure the path we are taking won’t let our teams down.

Approaching the project from a business point of view, however, allows you to create an independent micro-frontend with less need to communicate across multiple subdomains.

Let’s re-imagine our scenario. Instead of working with components and iframes, we are working with single page applications (SPAs) and single pages.

This approach allows a full team to design all the APIs needed to compose a view and to create the infrastructure needed to scale the services according to the traffic. The combination of micro-architectures, microservices, and micro-frontends provides independent delivery without high risks for compromising the entire system for release in production.

The bounded context helps design our systems, but we need to have a good understanding of how the business works to identify the right boundaries inside our project.

As architects or tech leads, our role is to invest enough time with the product team or the customers so we can identify the different domains and subdomains, working collaboratively with them.

After defining all the bounded contexts, we will have a map representing the different areas our system is composed of. In Figure 3-2 we can see a representation of a bounded context. In this example, the bounded context contains the catalog micro-frontend, which consumes APIs from a microservices architecture via a unique entry point, a backend for frontend. We will investigate APIs integration further in Chapter 9.

In DDD, the frontend is not taken into consideration but when we work with micro-frontends with a vertical split we can easily map the frontend and the backend together inside the same bounded context.

This is a representation of bounded context
Figure 3-2. This is a representation of bounded context

I’ve often seen companies design systems based on their team’s structure. (Conway’s Law states that “organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.”) Instead, they needed their team structure to be flexible enough to adapt to the best possible solution for the organization, in order to reduce friction and move faster toward the final goal: having a great product that satisfies customers. (The Inverse Conway Maneuver recommends evolving your team and organizational structure to promote your desired architecture.)

How to define a bounded context

Premature optimization is always around the corner, and it can lead us to decompose our subdomains too early, splitting bounded contexts to accommodate future integrations. Instead, we need to wait until we have enough information to make an educated decision.

Because our business evolves over time, we also need to review our decisions related to bounded contexts.

Sometimes we start with a larger bounded context. Over time the business evolves and eventually, the bounded context becomes unmanageable or too complex. So we decide to split it.

Deciding to split a bounded context could result in a large code refactor but could also simplify the codebase drastically, speeding up new functionalities and development in the future.

To avoid premature decomposition, we should make the decision at the last possible moment. This way we have more information and clarity on which direction we need to follow. We must engage upfront with the product team or the domain experts inside our organization as we define the subdomains, because they can provide us with the context in which the system operates. Always begin with data and metrics.

For instance, we can easily find out how our users are interacting with our application and what the user journey is when a user is authenticated and when they’re not. Data provides powerful clarity when identifying a subdomain and can help create an initial baseline, from where we can see if we are improving the system or not.

If there isn’t much observability inside our system, let’s invest time to create it. Doing so will pay off the moment we start identifying our micro-frontends.

Without dashboards and metrics, we are blind to how our users operate inside our applications.

Let’s assume we see a huge amount of traffic on the landing page, with 70% of those users moving to the authentication journey (sign in, sign up, payment, etc.). From here, only 40% of the traffic subscribes to a service or uses their credentials for accessing the service.

These are good indications about our users’ behaviors in our platform. Following DDD, we would start from our application’s domain model, identifying the subdomains and their related bounded context and using behavioral data to guide us on how to slice the frontend applications.

Allowing users to download only the code related to the landing page will give them a faster experience because they won’t have to download the entire application immediately, and the 40% of users who won’t move forward to the authentication area will have just enough code downloaded for understanding our service.

Obviously, mobile devices on slow connections particularly benefit from this approach, for multiple reasons: less data is downloaded, less memory is used, and less JavaScript is parsed and executed, resulting in a faster first interaction with the page.

It’s important to remember that not all user sessions contain all the URLs exposed by our platform. Therefore a bit of research upfront will help us provide a better user experience.

Usually, the decision to pick horizontal instead of vertical depends on the type of project we have to build.

In fact, a horizontal split is better suited to mostly static pages, like catalogs or e-commerce listings, while a more interactive project would usually call for a vertical split.

Another thing to consider is our teams' skill set. Usually, a vertical split suits teams with a more traditional client-side development experience, while a horizontal split requires an upfront investment in creating a solid and fast development experience, so teams can test their parts in isolation as well as inside the overall view.

Micro-frontends composition

There are different approaches for composing a micro-frontends application (Figure 3-3).

Micro frontends composition diagram
Figure 3-3. Micro-frontends composition diagram

In this diagram we can see three different ways to compose a micro-frontends architecture:

  • Client-side composition

  • Edge-side composition

  • Server-side composition

Starting from the left of our diagram, we have a client-side composition, where an application shell loads multiple micro-frontends directly from a content delivery network (CDN), or from the origin if the micro-frontend is not yet cached at the CDN level. In the middle of the diagram, we compose the final view at the CDN level, retrieving our micro-frontends from the origin and delivering the final result to the client. The right side of the diagram shows a micro-frontends composition at the origin level where our micro-frontends are composed inside a view, cached at the CDN level, and finally served to the client.

Let’s now see how we can technically implement this architecture.

Client-Side Composition

In the client-side composition case, where an application shell loads micro-frontends inside itself, each micro-frontend should have a JavaScript or HTML file as an entry point, so the application shell can dynamically append the Document Object Model (DOM) nodes in the case of an HTML file, or initialize the application in the case of a JavaScript file.

We can also use a combination of iframes to load different micro-frontends, or we can use a transclusion mechanism on the client side via a technique called client-side include. Client-side includes lazy-load components, substituting empty placeholder tags with complex components. For example, a library called h-include uses placeholder tags that create an AJAX request to a URL and replace the inner HTML of the element with the response of the request.
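As a minimal sketch of this idea, the function below fetches an HTML fragment for every placeholder element and swaps it in. The data-include attribute name and the injectable fetchFn parameter are illustrative assumptions, not h-include's actual API.

```javascript
// Replace every placeholder element carrying a data-include attribute
// with the HTML fragment fetched from the URL it points at.
// `fetchFn` is injectable so the logic can also run outside a browser.
async function includeFragments(root, fetchFn = fetch) {
  const placeholders = root.querySelectorAll('[data-include]');
  for (const el of placeholders) {
    const response = await fetchFn(el.getAttribute('data-include'));
    el.innerHTML = await response.text();
  }
}

// In the application shell, once the DOM is ready:
// includeFragments(document);
```

The placeholders stay empty until their fragments arrive, which is what makes this a lazy-loading mechanism.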

This approach gives us many options, but using client-side includes has a different effect than using iframes. In the next chapters we will explore this in detail.

Note

According to Wikipedia, in computer science, transclusion is the inclusion of part or all of an electronic document into one or more other documents by hypertext reference. Transclusion is usually performed when the referencing document is displayed and is normally automatic and transparent to the end user. The result of transclusion is a single integrated document made of parts assembled dynamically from separate sources, possibly stored on different computers in disparate places.

An example of transclusion is the placement of images in HTML. The server asks the client to load a resource at a particular location and insert it into a particular part of the DOM.

Edge-Side Composition

With edge-side composition, we assemble the view at the CDN level. Many CDN providers give us the option of using an XML-based markup language called Edge Side Include (ESI). ESI is not a new language; it was proposed as a standard by Akamai and Oracle, among others, in 2001. ESI allows a web infrastructure to be scaled in order to exploit the large number of points of presence around the world provided by a CDN network, compared to the limited amount of data center capacity on which most software is normally hosted. One drawback to ESI is that it's not implemented in the same way by each CDN provider; therefore, a multi-CDN strategy, as well as porting our code from one provider to another, could result in a lot of refactoring and potentially new logic to implement.
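To give a flavor of what transclusion at the edge looks like, here is a sketch of a page template using ESI include tags. The URLs are placeholders, and the exact set of supported attributes varies by CDN provider.

```html
<html>
  <body>
    <!-- The CDN replaces each esi:include with the fragment fetched
         from the origin; alt provides a fallback URL on failure. -->
    <esi:include src="https://origin.example.com/header" />
    <esi:include src="https://origin.example.com/catalog"
                 alt="https://origin.example.com/catalog-fallback" />
  </body>
</html>
```

The origin serves only the fragments; the edge assembles the final document before it reaches the client.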

Server-Side Composition

The last possibility we have is the server-side composition, which could happen at runtime or at compile time. In this case, the origin server is composing the view by retrieving all the different micro-frontends and assembling the final page. If the page is highly cacheable, the CDN will then serve it with a long time-to-live policy. However, if the page is personalized per user, serious consideration will be required regarding the scalability of the eventual solution, when there are many requests coming from different clients. When we decide to use server-side composition we must deeply analyze the use cases we have in our application. If we decide to have a runtime composition, we must have a clear scalability strategy for our servers in order to avoid downtime for our users.

From these possibilities, we need to choose the technique that is most suitable for our project and team structure. As we will learn later on in this journey, we also have the opportunity to deploy an architecture that exploits both client-side and edge-side composition; that's absolutely fine as long as we understand how to structure our project.

Routing micro-frontends

The next important choice we have is how to route the application views.

This decision is strictly linked to the micro-frontends composition mechanism we intend to use for the project.

We can decide to route the page requests at the origin, on the edge, or at the client side (Figure 3-4).

Micro-frontends routing diagram
Figure 3-4. Micro-frontends routing diagram

When we decide to compose micro-frontends at the origin (a server-side composition, shown on the right of Figure 3-4), we are forced to route the requests at the origin as well, considering the entire application logic lives in the application servers.

However, we need to consider that scaling an infrastructure can be nontrivial, especially when we have to manage burst traffic with many requests per second (RPS). Our servers need to be able to keep up with all the requests and scale horizontally very rapidly. Each application server must then be able to retrieve the micro-frontends composing the page to be served.

We can mitigate this problem with the help of a CDN. The main downside is that when we have dynamic or personalized data, we won’t be able to rely extensively on the CDN serving our pages because the data would be outdated or not personalized.

When we decide to use edge-side composition in our architecture, the routing is based on the page URL and the CDN serves the page requested by assembling the micro-frontends via transclusion at edge level.

In this case, we won’t have much room for creating smart routing—something to remember when we pick this architecture.

The final option is to use client-side routing. In this instance, we will load our micro-frontends according to the user state, such as loading the authenticated area of the application when the user is already authenticated or loading just a landing page if the user is accessing our application for the first time.

If we use an application shell that loads a micro-frontend as an SPA, the application shell is responsible for owning the routing logic, which means the application shell retrieves the routing configuration first and then decides which micro-frontend to load.
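A sketch of that logic follows. The routing table, bundle file names, and authentication rule are all hypothetical; they only illustrate how a shell might map a URL plus user state to a micro-frontend.

```javascript
// Hypothetical routing configuration the application shell fetches at startup.
const routes = [
  { path: '/account', microFrontend: 'account.js', requiresAuth: true },
  { path: '/',        microFrontend: 'landing.js', requiresAuth: false },
];

// Decide which micro-frontend bundle to load for a given URL and user state.
// Unauthenticated users requesting a protected view are sent to sign-in.
function resolveMicroFrontend(path, isAuthenticated, table = routes) {
  const route = table.find(r => path.startsWith(r.path)) ?? table[table.length - 1];
  return route.requiresAuth && !isAuthenticated ? 'signin.js' : route.microFrontend;
}

// The shell would then load the resolved bundle, for example:
// const script = document.createElement('script');
// script.src = resolveMicroFrontend(location.pathname, userIsAuthenticated);
// document.body.appendChild(script);
```

More sophisticated logic (geo-localization, A/B cohorts) would simply add fields to the routing configuration and checks to the resolver.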

This is a perfect approach when we have complex routing, such as when our micro-frontends are based on authentication, geo-localization, or any other sophisticated logic. When we are using a multipage website, micro-frontends may be loaded via client-side transclusion. There is almost no routing logic in this kind of architecture, because the client relies completely on the URL typed by the user in the browser or the hyperlink chosen in another page, similar to what we have with the edge-side include approach.

We won't have any scalability issues in either case. Client-side routing is highly recommended when your teams have stronger frontend skills, making it more natural to own the routing on the client side than through a backend configuration.

Those routing approaches are not mutually exclusive, either. As we will see later in this book, we can combine those approaches using CDN and origin or client-side and CDN together.

The important thing is determining how we want to route our application. This fundamental decision will affect how we develop our micro-frontends application.

Micro-frontends communication

In an ideal world, micro-frontends wouldn’t need to communicate with each other because all of them would be self-sufficient. In reality, it’s not always possible to notify other micro-frontends about a user interaction, especially when we work with multiple micro-frontends on the same page.

When we have multiple micro-frontends on the same page, the complexity of managing a consistent, coherent user interface for our users may not be trivial. This is also true when we want communication between micro-frontends owned by different teams. Bear in mind that each micro-frontend should be unaware of the others on the same page, otherwise we are breaking the principle of independent deployment.

In this case, we have a few options for notifying other micro-frontends that an event occurred. We can inject an eventbus, a mechanism that allows decoupled components to communicate with each other via events sent over a bus, into each micro-frontend and broadcast the event to every micro-frontend. If some of them are interested in the event dispatched, they can listen and react to it (Figure 3-5).

Event emitter and custom events diagram
Figure 3-5. Event emitter and custom events diagram

To inject the eventbus, we need the micro-frontends container to instantiate the eventbus and inject it inside all of the page’s micro-frontends.
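A minimal eventbus sketch, with illustrative event and method names, could look like this:

```javascript
// A tiny publish/subscribe event bus. The container instantiates one
// instance and injects it into every micro-frontend on the page, so the
// micro-frontends stay unaware of one another.
class EventBus {
  constructor() {
    this.listeners = new Map();
  }
  on(eventName, handler) {
    if (!this.listeners.has(eventName)) this.listeners.set(eventName, []);
    this.listeners.get(eventName).push(handler);
  }
  emit(eventName, payload) {
    (this.listeners.get(eventName) ?? []).forEach(handler => handler(payload));
  }
}

// The container injects the same instance into each micro-frontend...
const bus = new EventBus();
// ...micro-frontend A reacts if it's interested in the event...
bus.on('itemAddedToBasket', item => console.log('basket now contains', item.id));
// ...and micro-frontend B notifies the page that something happened.
bus.emit('itemAddedToBasket', { id: 123 });
```

Because producers and consumers only share event names, either side can be deployed independently.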

Another solution is to use Custom Events. These are normal events but with a custom body: we can define the string that identifies the event and an optional custom payload, which must be placed under the detail property of the options object. Here is an example:

new CustomEvent('myCustomEvent', { detail: { someObj: 'customData' } });

The custom events should be dispatched via an object available to all the micro-frontends, such as the window object, the representation of a window in the browser. If you decide to implement your micro-frontends with iframes, using an eventbus allows you to avoid challenges like deciding which window object to use from inside an iframe, because each iframe has its own window object.

No matter whether we have a horizontal or a vertical split of our micro-frontends, we need to decide how to pass data between views.

Imagine we have one micro-frontend for signing in a user and another for authenticating the user on our platform. After being successfully authenticated, the sign-in micro-frontend has to pass a token to the authenticated area of our platform. How can we pass the token from one micro-frontend to another? We have several options.

We can use web storage, such as session storage, local storage, or cookies (Figure 3-6). In this situation, we might use local storage for storing and retrieving the token: each micro-frontend can access it independently of how it is loaded, because web storage is always available, as long as the micro-frontends live in the same subdomain.

Sharing data between micro frontends in different views
Figure 3-6. Sharing data between micro-frontends in different views
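A sketch of that token hand-off through web storage follows. The key name is arbitrary, and the storage backend is injectable so the logic also runs (and can be tested) outside a browser, where it would default to localStorage.

```javascript
// Save and read an auth token through web storage so the next
// micro-frontend loaded can pick it up. The storage backend defaults
// to localStorage in the browser and is injectable for testing.
const TOKEN_KEY = 'authToken';

function saveToken(token, storage = globalThis.localStorage) {
  storage.setItem(TOKEN_KEY, token);
}

function readToken(storage = globalThis.localStorage) {
  return storage.getItem(TOKEN_KEY);
}

// Sign-in micro-frontend, after authentication succeeds:
// saveToken(tokenFromApi);
// Authenticated-area micro-frontend, on bootstrap:
// const token = readToken();
```

Both micro-frontends only agree on the key name; neither needs to know when or how the other is loaded.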

Another option could be to pass some data via query strings. For example, in www.acme.com/products/details?id=123, the text after the question mark represents the query string, in this case the ID 123 of a specific product selected by the user, and the application retrieves the full details to display via an API (Figure 3-7). Using query strings is not the most secure way to pass sensitive data, such as passwords and user IDs, however, even when the request travels over the HTTPS protocol; there are better ways to share that kind of information. Embrace this solution carefully.

Micro frontends communication via query strings or URL
Figure 3-7. Micro-frontends communication via query strings or URL
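Reading such a value on the receiving side is straightforward with the standard URLSearchParams API; the parameter name matches the example URL above.

```javascript
// Extract the product ID from a query string such as "?id=123",
// as produced by a URL like www.acme.com/products/details?id=123.
function getProductId(search) {
  return new URLSearchParams(search).get('id');
}

// In the receiving micro-frontend:
// const productId = getProductId(window.location.search);
// ...then fetch the product details from an API using productId.
```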

To summarize, the micro-frontends decisions framework is composed of four key decisions: identifying, composing, routing, and communicating.

In Table 3-2 you can find all the combinations available based on how you identify a micro-frontend.

Table 3-2. Micro-frontends decisions framework summary

Micro-frontends definition   Composition                          Routing                              Communication

Horizontal                   Client-side, Server-side, Edge-side  Client-side, Server-side, Edge-side  Event emitter, Custom events, Web storage, Query strings

Vertical                     Client-side, Server-side             Client-side, Server-side, Edge-side  Web storage, Query strings

Micro-Frontends in Practice

Although micro-frontends are a fairly new approach in the frontend architecture ecosystem, they have been used for a few years at medium and large organizations, and many well-known companies have made micro-frontends their main system for scaling their business to the next level.

Zalando

The first one worth mentioning is Zalando, the European fashion and e-commerce company. I attended a conference presentation made by their technical leads, and I have to admit I was very impressed by what they have created and released open source.

More recently, Zalando has replaced the well-known OSS project called Tailor.js with Interface Framework. Interface Framework is based on similar concepts to Tailor.js but is more focused on components and GraphQL instead of Fragments.

HelloFresh

HelloFresh, a digital service that provides ready-to-cook food boxes with a variety of recipes from all over the world, is another good example.

Inspired by Zalando’s work, HelloFresh is now serving a multitude of SPAs orchestrated by URL.

In an interesting approach, the SPAs are assembled and rendered on the servers and then cached at the CDN level, providing great flexibility in how each SPA is generated.

This approach also allows the development teams to be responsible for their own technology stacks; each SPA could have a different one, and each team is fully independent from the others.

AllegroTech

In 2016, Polish e-tailer and auction site AllegroTech came up with OpBox, a project that allows nontechnical people to merge UI representations (a.k.a., components) with data sources inside the same page.

At first, AllegroTech tried to work with multiple components assembled at runtime with ESI lang, but the system didn’t provide the desired level of consistency. Furthermore, they had a few problems with managing specific library versions. For instance, one component could have been developed with React v13 and another one with v15, both rendered on the same page.

In the OpBox project, Allegro’s teams had the opportunity to decouple the rendering part of a component (the view) from the data in order to render. As long as the contract between the component and the data source matched, they were able to assemble data and different components together, which enhanced their ability to do A/B testing and gather data from there.

But it's the additional abstraction between how the page is composed and the components to display that really stands out in this implementation. In fact, a JSON file describes the page and the components needed, and the renderer then composes the page as configured inside the JSON file.
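As an illustration only (this is not OpBox's actual schema), such a page descriptor could look like the following, pairing each view with the data source that feeds it:

```json
{
  "page": "product-listing",
  "components": [
    { "view": "header", "dataSource": null },
    { "view": "product-grid", "dataSource": "/api/listings?category=shoes" },
    { "view": "footer", "dataSource": null }
  ]
}
```

Swapping a view or a data source in the descriptor is what makes A/B testing cheap: as long as the contract between them matches, no component code changes.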

Obviously two or more components on the same page could also react to a specific user interaction or to a change in a set of data, thanks to an eventbus implementation that signals the change to all the components that are listening to it.

Spotify

In this list of case histories, I can’t neglect to mention Spotify.

For its desktop application, Spotify has assembled multiple components living in separate iframes that communicate via a "bridge" to the low-level implementation written in C++.

If we inspect the Spotify desktop application, we can easily find the multiple parts composing it. Each single .spa file is composed of an HTML file, multiple CSS files, a manifest.json, and a minimized and optimized JavaScript bundle file (Figure 3-8).

Spotify micro-frontend artifact
Figure 3-8. Spotify micro-frontend artifact

Those files will be loaded inside an iframe to compose the final application UI.

This approach was used at the beginning for the web instance of the Spotify player, but it was abandoned due to its poor performance, and Spotify has since moved back to an SPA architecture similar to what they have for the TV application. This doesn’t mean the approach can’t work, but the way it was designed caused more issues for the final users than benefits.

SAP

Another company that is using iframes for its applications is SAP. SAP released the Luigi framework, a micro-frontends framework used for creating enterprise applications that interact with SAP. Luigi works with Angular, React, Vue, and SAPUI5, covering the most modern and well-adopted frontend frameworks plus a well-known one for delivering applications that interact with SAP. Since enterprise applications are B2B solutions, where SEO and bandwidth are not a problem, having the ability to choose the hardware and software specifications where an application runs makes iframe adoption easy. If we think of the memory management provided by iframes out of the box, the decision to use them makes a lot of sense for that specific context.

OpenTable

Another interesting approach is OpenTable’s Open Components project, embraced by Skyscanner and other large organizations and released open source.

Open Components takes a really interesting approach to micro-frontends: a registry, similar to the Docker registry, gathers all the available components, encapsulating the data and UI and exposing an HTML fragment that can then be embedded in any HTML template.

A project using this technique receives many benefits, such as the team’s independence, the rapid composition of multiple pages by reusing components built by other teams, and the option of rendering a component on the server or on the client.

When I have spoken with people who work at OpenTable, they told me that this project allowed them to scale their teams around the world without creating a large communication overhead. For instance, using micro-frontends allowed them to smooth the process by repurposing parts developed in the United States for use in Australia—definitely a huge competitive advantage.

DAZN

Last but not least is DAZN, a live and video-on-demand sports platform that uses a combination of SPAs and components orchestrated by a client-side agent called bootstrap.

DAZN’s approach focuses on targeting not only the web but also multiple smart TVs, set-top boxes, and consoles.

Its approach is fully client side, with an orchestrator always available during the navigation of the video platform to load different SPAs at runtime when there is a change of business domain.

These are just some of the possibilities micro-frontends offer for scaling up our co-located and/or distributed teams. More and more companies are embracing this paradigm, including New Relic, Starbucks, and Microsoft.

Summary

In this chapter we discovered the different high-level architectures for designing micro-frontends applications. We dived deep into the key decisions to make: defining, composing, routing, and communicating.

Finally, we discovered that many organizations are already embracing this architecture in production, with successful software not merely available inside the browsers but also in other end uses, like desktop applications, consoles, and smart TVs.

It’s fascinating how quickly this architecture has spread across the globe. In the next chapter I will discuss how to technically develop micro-frontends, providing real examples you can use within your own projects.
