Chapter 12. Building and supporting older browsers

This chapter covers

  • Module bundling with Rollup
  • Transpiling with Babel to allow Web Components in IE11
  • Running and combining scripts with npm and package.json
  • Using dev dependencies in package.json
  • Ponyfilling CSS vars for IE11

In the last chapter, we finished building a reusable color picker component, itself composed of a few smaller custom components. It works pretty well, but the question now is whether it works for all your target users. It certainly could, and we might stop here. The component we’ve built supports Chrome, Firefox, and Safari, which leaves just one modern browser unaccounted for: Microsoft Edge.

So far in this book, we’ve covered nearly every Web Component concept possible, from creating Web Components with just Custom Elements to capping everything off with the amazing Shadow DOM.

There is a good reason that we’ve tackled things in this order, and that reason is because there will be situations where you just can’t or don’t want to use the Shadow DOM. I’m excited to say that these situations are becoming increasingly rare! The end of 2018 brought us some great news on that front. Web Components landed in Firefox, which makes Edge the only major browser we’re waiting patiently for. We knew that the Microsoft Edge team was busy working on Web Component support, but then in April 2019, the team released a developer preview of a Chromium-based Edge. Browser diversity worries aside, this looks like great news for Web Components because this new version of Edge supports Web Components in the same way Chrome does (no worrying about weird things that Microsoft implemented in a slightly different way).

The big picture here is that there are currently two major browsers that don’t support Web Components natively: pre-Chromium Edge and IE11. For some lucky web developers, these browsers just don’t matter. For IE11, this is because on non-Windows 10 machines, it’s already reached its end of life. On newer Windows 10 machines, Microsoft recommends that folks use Edge, despite having IE11 available. For Edge, it’s now easy to assume that it’s only a matter of waiting a few months before most normal consumers have a browser version available that has identical capabilities to Chrome.

Not all of us are that lucky, however. IE11 continues to be a thorn in the side of many web developers. Pre-Chromium Edge could also continue to exist for a while as users slowly upgrade.

Whatever the reason, when creating components, it’s good to have a plan of action to take these issues on. So, in this chapter, we’re going to take the real-world UI component from the last chapter and back up a bit to get it working on Edge with a polyfill and some small changes. Finally, we’ll talk about specific build tools to get our component working in IE11.

12.1. Backward compatibility

So, do you wait for support? While a Chromium-based Edge developer preview is available today, how long before it’s released to everyone using Windows? How long will it take for current users to upgrade to the latest? Right now, these questions don’t have good answers, so it’s worth talking strategy to make the color picker from the last chapter work for the current version of Edge. This strategy will also take us most of the way to supporting IE11 if you absolutely need to support that browser. For IE11, there is a build/transpile step, but, for now, let’s focus on a hypothetical modern browser that doesn’t support Web Components.

One great resource to help with this effort is the various polyfills provided at www.webcomponents.org/polyfills. To be honest, though, I’m not so much a fan of polyfilling the Shadow DOM. It’s a bit too much like magic: it does a good number of things behind the scenes without making you aware of them, like copying and rewriting your component’s DOM elements with different unique classes. This would be fine if the polyfill handled everything seamlessly and had no limitations. The reality is that even when using the Shadow DOM polyfills, you really have to be aware of the limitations that come when the Shadow DOM isn’t natively available and work around them. With this in mind, we can make a few changes to toggle the Shadow DOM on and off for our component, which will make it compatible with Edge without polyfilling this specific feature.

Despite avoiding the Shadow DOM polyfill, the first step is to polyfill for another aspect of Web Components: Custom Elements. This polyfill is, in fact, drop-in. When we add the polyfill to our component, we don’t have to worry about caveats or unsupported features. Custom Elements will just work in those browsers that don’t support them yet.

The polyfill can be found at https://github.com/webcomponents/custom-elements. As per the documentation, you can build it yourself, install from NPM, or, as we will do now, just use it from a content delivery network (CDN). To be thorough, we should add the polyfill to all three of our demo.html files so that they all work. Simply add the script link to each—in the index.html demo, for example:

<title>Color Picker Component</title>
<link rel="stylesheet" type="text/css" href="vars.css">
<script type="module" src="components/colorpicker/colorpicker.js"></script>
<script src="https://unpkg.com/@webcomponents/custom-elements"></script>

12.1.1. Toggling the Shadow DOM

When the Shadow DOM is toggled off, one of the great things that happens is that the shadow root simply isn’t created; instead, you can fall back to the scope of the component itself (this). This works really well because the shadowRoot property can be interacted with in the same ways as your component. In terms of using JS to interact with either, none of your code needs to change if you use a simple property to represent either scope interchangeably.

The major exception here is something we’ve covered before. This exception is the use of the constructor to do the heavy initialization work. Remember that when using the Shadow DOM, you’re creating a separate, mini DOM inside your component. So, given that you’re creating it right there in the constructor, this mini DOM is instantly available. When not using the Shadow DOM, you’re relying on the DOM provided by the HTML page you’re in. Access to this DOM isn’t available yet in the constructor function, so the connectedCallback function is the best place to put DOM interaction like getting/setting attributes and setting the component’s innerHTML.

Before we get into the workaround, chances are you’re developing with Chrome, Firefox, or Safari. Instead of jumping over to Edge to test things where Web Components aren’t supported, you can do the bulk of the work in your favorite browser by creating a toggle on the class that turns the Shadow DOM on and off. This will simulate Edge pretty well, and you can just do proper testing in that browser when you’re done.

Using the slider component as a starting example, we’ll add a static getter to control whether we opt in to the Shadow DOM:

export default class Slider extends HTMLElement {
   static get USE_SHADOWDOM_WHEN_AVAILABLE() { return false; }

We’ll do this in components/slider/slider.js, as well as in the other two components found in components/coordinatepicker/coordinatepicker.js and components/colorpicker/colorpicker.js.

With this toggle in, we can now turn our attention to the constructor. Remember, we can’t interact with the DOM here if we’re not using the Shadow DOM, so we’ll move some things around. Listing 12.1 shows what we started with, and listing 12.2 shows how it can be changed for toggling the DOM off and on.

Listing 12.1. Slider component before allowing Shadow DOM toggling
constructor() {
   super();
   this.attachShadow({mode: 'open'});                                  1
   this.shadowRoot.innerHTML = Template.render();                      2
   this.dom = Template.mapDOM(this.shadowRoot);

   document.addEventListener('mousemove', e => this.eventHandler(e));  3
   document.addEventListener('mouseup', e => this.eventHandler(e));
   this.addEventListener('mousedown', e => this.eventHandler(e));
}

  • 1 Creates a shadow root
  • 2 Renders the HTML/CSS to innerHTML
  • 3 Event listeners

To change this, we can move some code out of the constructor to the connectedCallback and create (or not create) the Shadow DOM.

Listing 12.2. Enabling a Shadow DOM toggle
constructor() {
   super();

   if (Slider.USE_SHADOWDOM_WHEN_AVAILABLE &&                            1
       this.attachShadow) {
       this.root = this.attachShadow({mode: 'open'});
   } else {
       this.root = this;
   }

   document.addEventListener('mousemove',                                2
       e => this.eventHandler(e));
   document.addEventListener('mouseup', e => this.eventHandler(e));
   this.addEventListener('mousedown', e => this.eventHandler(e));
}

connectedCallback() {
   if (!this.initialized) {                                              3
       this.root.innerHTML = Template.render({                           4
           useShadowDOM: Slider.USE_SHADOWDOM_WHEN_AVAILABLE &&
                         this.attachShadow });
       this.dom = Template.mapDOM(this.root);
       this.initialized = true;
       if (this.backgroundcolor) {                                       5
           this.setColor(this.backgroundcolor);
       }
       if (this.value) {
           this.refreshSlider(this.value);
       }
   }
}

  • 1 If opted into using the Shadow DOM and it’s supported, creates a shadow root; otherwise sets the reference to the component (this)
  • 2 Event listeners don’t need to move because both the component and document are available from the constructor.
  • 3 connectedCallback could happen multiple times (whenever the component is added to the page), so make sure initialization happens only once.
  • 4 Indicates to the HTML/CSS Template module whether the Shadow DOM is being used
  • 5 Updates component based on current attributes

The very first thing we’re doing here is creating a property on the class called this.root. If we use the Shadow DOM, this property is set to the shadow root; if not, it’s simply a reference to our component (this). Now, we can use this.root anywhere we need to manipulate the contents of our component, whether we’re using the Shadow DOM or not.

We don’t actually need to move the event listeners. We would if they were more specific. If, for example, we created an event listener on the thumbnail or some element that’s not in the DOM yet, it wouldn’t work here. In this example, it just so happens that the things we’re listening to—the document and the component itself—are both available from the start.

The initialization code is moved to a new connectedCallback function, but remember, this handler is fired each time the component is added to the page. To make a truly bulletproof component, we should check if it’s already been initialized with a custom this.initialized property, running code only if it hasn’t been run yet. For our immediate needs with the color picker, we really don’t need this check, but, again, if we want to make components that work in a variety of situations, this really should be prioritized.

Working with our Template import module is pretty straightforward. Instead of setting the shadowRoot.innerHTML to the HTML/CSS string returned from the import, we simply set this.root.innerHTML. Whether this.root is the shadow root or the component, it will work regardless. Similarly, when getting cached element references with Template.mapDOM, this.root works regardless of which reference it contains.

Lastly, we have to add one extra bit around our attributes. The reflection (attributes/getters/setters) strategy doesn’t change, but there is a timing issue here. When we were using the Shadow DOM, we could initialize everything, including rendering all our HTML, getting element references, and so on, all in the constructor. By the time the attributeChangedCallback fired with our starting attributes, we’d be set up and ready to go. Now, however, the attributeChangedCallback fires before the connectedCallback handler, so our changes are lost without the ability to respond.

In fact, we do need to error-proof the attributeChangedCallback. Worse than losing these changes, we’ll actually get an error. This callback causes code to run that changes the thumbnail and background, neither of which exists yet, so the following line, for example, will throw an error when the component starts up:

refreshSlider(value) {
    this.dom.thumb.style.left = (value/100 * this.offsetWidth -
     this.dom.thumb.offsetWidth/2) + 'px';
}

To take care of this issue, we can simply check if the component has been set up yet in the attributeChangedCallback and exit out if not:

attributeChangedCallback(name, oldVal, newValue) {
   if (!this.dom) { return; }

But then, of course, our component’s starting attributes have not been used due to this timing issue, so we checked if they were present and acted on them in the last few lines of listing 12.2.

Though we’ve just focused on the slider component, the other two components can be modified in the exact same way. I won’t spell it all out here, but it’s a good exercise to fill these in on your own. If you get stuck, those components in their finished form can be found in this book’s GitHub repo.

That said, there is one tiny consideration to make in the color picker component’s specific implementation. I’m referring to the onMutationChange handler in components/colorpicker/colorpicker.js:

onMutationChange(records) {
  records.forEach( rec => {
      this.data = Handlers.update({

Here, we are handling any attribute changes to our inner DOM elements. Initially, we were watching for attribute changes on the shadowRoot and any elements within. Now, we’re just listening for changes on this.root. When not using the Shadow DOM, we’re observing attribute changes on the component itself! The problem is that we’re already doing this using the attributeChangedCallback. So now we’re double listening and double responding to events. To solve this, we’ll simply ignore attribute changes coming from the component inside the onMutationChange handler:

onMutationChange(records) {
  records.forEach( rec => {
      if (rec.target !== this) {

Here, we’re simply saying that if the target element identified by the change record (each record is a change recorded by the mutation observer) is not the color picker component, do all the normal stuff. If the target element is the color picker component, no action is taken.
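Putting the guard together with the rest of the handler, the complete method would look something like the following sketch. The parameters passed to Handlers.update are abbreviated here; the full version lives in this book’s GitHub repo:

onMutationChange(records) {
  records.forEach( rec => {
      // Ignore records where the mutated element is the component itself;
      // attributeChangedCallback already responds to those changes
      if (rec.target !== this) {
          this.data = Handlers.update( /* ...same params as before... */ );
      }
  });
}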

12.1.2. Comparing to polyfills

While it wasn’t overly complex to allow the components to operate without the Shadow DOM, it wasn’t trivial. We couldn’t just drop in a polyfill and go. In fact, the only thing a Shadow DOM polyfill would have really given us here is the ability to keep using this.shadowRoot in the component. It would also offer some encapsulation to prevent outside JS from manipulating the component’s DOM like a real Shadow DOM would do. If that’s important to you, the ShadyDOM polyfill might be worth looking into (https://github.com/webcomponents/shadydom).

The rest of the work we did, especially around breaking up the constructor to move the initialization to the connectedCallback, is something that would need to be done regardless. This one aspect is likely why the W3C spec recommends not having initialization code like this in the constructor at all (even when everyone else seems to ignore this rule). It’s much easier to set things up right from the beginning and switch off the Shadow DOM if necessary. It’s not a concern if your target browsers support Web Components natively, but when they don’t, it’s good to start your component with these best practices in mind.

12.1.3. Shadow CSS and child elements

Likely the most annoying part of moving back to a world without the Shadow DOM is HTML and CSS. In creating our HTML markup, I was overly enthusiastic and used IDs instead of classes to reference elements. Again, a polyfill won’t save us here. Using the slider component’s template as an example (components/slider/template.js), we simply need to go in and kill all the IDs. The following listing highlights this change.

Listing 12.3. Changing ID references to classes
mapDOM(scope) {
   return {
       // OLD //
       overlay: scope.getElementById('bg-overlay'),    1
       thumb: scope.getElementById('thumb')

       // NEW //
       overlay: scope.querySelector('.bg-overlay'),    2
       thumb: scope.querySelector('.thumb')
   }
},
html() {
   // OLD //
   return `<div id="bg-overlay"></div>             3
           <div id="thumb"></div>`;
   // NEW //
   return `<div class="bg-overlay"></div>          4
           <div class="thumb"></div>`;
},

  • 1 With the Shadow DOM, we could safely query by ID.
  • 2 Without the Shadow DOM, it’s not safe anymore, so we should switch to classes.
  • 3 Elements used ID to reference previously
  • 4 Change to use classes if no Shadow DOM is present

For the limited context of our component on a demo page, we don’t actually need this step. It just so happens that none of the IDs we were using clashed—they were all unique. So, if this step was missed, it’s no big deal; everything would work fine. The problem is that if we kept referencing by ID and forgot about it, we’d have a ticking time bomb on our hands. Using this component in a larger application with other ID references could overrule what element gets returned here if more than one is using the same ID and could have some serious (and mysteriously acting) consequences.

Yet again, we have an instance of a best practice we need to worry about only if we’re planning to use our components in a Shadow DOM-less context. If this is a possibility, it’s just best to avoid IDs altogether. If it’s not a possibility—well, frankly, I really do enjoy the luxury of using IDs as they were intended: to reference unique elements!

The last hurdle to overcome is CSS. The ShadyCSS polyfill does help here, but it comes with lots of baggage, to the point where I just don’t feel like it’s worth it for cases like this. The problem is that the :host selector doesn’t exist. In fact, in Edge, it actually breaks your CSS if you even try to use it! Also, simple standalone selectors like .thumb that worked only on the scope of your Shadow DOM before can now affect your entire application.

The ShadyCSS polyfill works around this in the best way it can. You as a developer are responsible for putting your markup and CSS in a <template> tag. The polyfill then goes in and rewrites your elements and CSS to use unique selectors such that it appears the Shadow DOM still works. I’m inclined to think that the setup required here is the same or even more effort than just handling things ourselves. Yes, the Shadow DOM does provide protection from outside CSS creeping into our component, but the polyfill doesn’t. So, there really doesn’t appear to be much benefit to using it if we can do something more straightforward.

This is where our use of template literals comes in handy. Recall back in the component class, where we call the Template.render method:

this.root.innerHTML = Template.render({ useShadowDOM:
   Slider.USE_SHADOWDOM_WHEN_AVAILABLE && this.attachShadow });

Passing a boolean here indicates to the render function if we are using the Shadow DOM or not, and we can then modify the CSS to use the appropriate selectors. For example, if we were originally using :host as a selector, we should now use the component name for a selector. For the slider component, specifically,

:host { . . . } becomes wcia-slider { . . . }

:host .thumb { . . . }  or .thumb { . . . }  becomes wcia-slider .thumb { . . . }

With this in mind, and focusing on the slider component template module (components/slider/template.js), we can create some code in the next listing to use either one or the other.

Listing 12.4. Switching between Shadow DOM and non-Shadow DOM selectors
render(opts) {
   return `${this.css(opts.useShadowDOM)}            1
           ${this.html()}`;
},

createHostSelector(useshadow, host) {                2
   if (useshadow) {
       return ':host';
   } else {
       return host;
   }
},

css(useShadowDOM) {
   const comp = 'wcia-slider';                       3
   return `<style>
               ${this.createHostSelector(            4
                      useShadowDOM, comp)} {
                  display: inline-block;
                  position: relative;
                  border-radius: var(--border-radius);
               }

               ${this.createHostSelector(useShadowDOM, comp)} .bg-overlay {
                   width: 100%;
                   height: 100%;
                   position: absolute;
                    ...

  • 1 Passes a boolean to the css function to indicate whether the Shadow DOM is used
  • 2 Returns the appropriate selector string for Shadow DOM or non-Shadow DOM usage
  • 3 Declares the component tag to use when generating the selector
  • 4 Dynamically creates the selector based on whether using the Shadow DOM and the name of the component

The exact same thing can be done in the coordinate picker component and the color picker component. There is one selector that’s a bit different in the color picker, however:

:host(.modal)

Remember that this selector simply states that if the color picker component has a class named modal on it, the background gets styled as a modal. To get what we want with no Shadow DOM, we’d want the following selector:

wcia-color-picker.modal

In this case, we’ll add one more function in components/colorpicker/template.js to handle it, as seen in the next listing.

Listing 12.5. Handling a special case of class on component
createHostContextSelector(                                                     1
       useshadow, host, clazz) {
   if (useshadow) {
       return `:host(${clazz})`;
   } else {
       return host + clazz;
   }
},

css(useShadowDOM) {
   const comp = 'wcia-color-picker';
   return `<style>
               ...
               ${this.createHostContextSelector(useShadowDOM, comp, '.modal')} 2
               {
                   ${Modal.rules()}
               }

  • 1 New function that accepts Shadow DOM boolean, component name, and class to use on component
  • 2 Creates the selector :host(.modal) or wcia-color-picker.modal, depending on whether the Shadow DOM is used

A good JS homework challenge for you might be to come up with a single function that handles all manner of :host variations (a rough starting sketch follows) and then build that into a base class that any Web Component’s Template module can extend. Again, as we look to the future of Web Components, these kinds of optimizations are where a lot of exciting work will be done, and we won’t need new browser features to do it!
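To get you started, here’s a minimal sketch of what such a unified helper could look like. The name and signature are hypothetical—my own invention, not something from this book’s repo:

// Hypothetical helper covering :host, :host(.clazz), and descendant selectors
hostSelector(useShadow, host, clazz = '', descendant = '') {
   const base = useShadow ? `:host${clazz ? `(${clazz})` : ''}`
                          : host + clazz;
   return descendant ? `${base} ${descendant}` : base;
},

Calling hostSelector(useShadowDOM, 'wcia-slider') yields :host or wcia-slider, while hostSelector(useShadowDOM, 'wcia-color-picker', '.modal') yields :host(.modal) or wcia-color-picker.modal.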

12.2. Building for the least common denominator

As you can see, there’s a fair bit to consider when building a component that might potentially be used when native Web Components aren’t available. It’s great that the Custom Element API is so easy to polyfill, but the simplicity stops there. It’s probably becoming apparent that components and, in fact, web development in general, play by different rules when using or not using the Shadow DOM.

When developing, whether using a polyfill or not, you’ll need to develop your component for the least common denominator. If you’re not using the Shadow DOM, or aren’t certain you will be, you must plan your component as if you aren’t using it. You’ll also need to accept that polyfills have some major caveats. The most exciting aspect of the Shadow DOM is CSS encapsulation, but polyfills just don’t solve that. CSS rules can still creep in. They can also creep out of your component unless your selectors are set up to prevent this by making them specific to your component. Again, don’t just use .thumb; use my-component .thumb.
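To make that concrete, assuming a hypothetical component tag of my-component, the difference looks like this:

/* Bad: leaks out and styles every .thumb on the page */
.thumb { background-color: blue; }

/* Better: applies only to .thumb elements inside your component */
my-component .thumb { background-color: blue; }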

There have been a lot of similarities and repeated code when preparing your component to go Shadow DOM-less. When considering this code in combination with the repetitious code for attribute/property reflection in your components, it might be tempting to try out a Web Component framework or library.

LitElement (https://lit-element.polymer-project.org) by the Google Polymer team is shaping up to be a strong Web Component base class to provide all of this functionality. It definitely forces you into a few development patterns and expands upon the Web Components API with some more functionality. You might be looking to put some of these concerns and limitations out of your mind, so LitElement can be nice, especially as it promises to support down to IE11. StencilJS (https://stenciljs.com) by Ionic offers a slightly different approach. A developer would create a component with the framework, and it gets compiled down to a vanilla Web Component.

I’m sure we’ll see even more solutions going forward and solid future releases from LitElement and StencilJS. Personally, I’d rather avoid these solutions in my endeavors to avoid framework/library complexity, using only what I need. I also like to develop components without a build/compile step until releasing them, which both these solutions use during the development process.

At the end of the day, you should just use what works for your project. That said, all of the complexity we covered isn’t necessary when developing for modern browsers with native support for Web Components. Edge will hopefully be less of a concern in short order, given that more and more developers are leaving IE11 out of their browser requirements.

What happens when we do need to push forward and support IE11, though? The Custom Elements polyfill still works, so making our own elements as we have been doing isn’t a worry. The major problem left is a lack of support for newer JS features like classes. To move past this, we need to transpile and build! We’ll explore this next.

Of course, there will always be browser inconsistencies that need to be solved. In fact, our color picker component doesn’t quite work perfectly yet in Edge. To finish up here, let’s fix it so the color picker works perfectly in all modern browsers. Refer back to the slider component class in components/slider/slider.js:

setColor(color) {
   this.dom.overlay.style.background = `linear-gradient(to right, ${color}
     0%, ${color}00 100%)`;
}

In this function, we use the hexadecimal color right in the linear gradient to show the transparency fade. All other modern browsers support adding an extra two digits for an eight-digit hex color, where those last two digits specify the alpha channel (00 meaning fully transparent). Unfortunately, Edge does not support this. We’ll need to use RGBA-defined colors instead and get some conversion help from the Color utilities module, which we can import.

Listing 12.6. Fixing a linear gradient style rule for Edge
import Template from './template.js';
import Color from '../colorpicker/color.js';
export default class Slider extends HTMLElement {

...

setColor(color) {
   const rgb = Color.hexToRGB(color);
   this.dom.overlay.style.background = `linear-gradient(to right,
     rgba(${rgb.r}, ${rgb.g}, ${rgb.b}, 1) 0%, rgba(${rgb.r}, ${rgb.g},
     ${rgb.b}, 0) 100%)`;                                               1
}

  • 1 Changes the style rule for IE/Edge
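The hexToRGB function lives in the repo’s Color module, but as a rough idea of what such a conversion involves, a minimal version (assuming a six-digit #RRGGBB input) could look like this:

hexToRGB(hex) {
   // strip the leading '#' and parse each two-character pair as base 16
   const value = hex.replace('#', '');
   return {
       r: parseInt(value.substring(0, 2), 16),
       g: parseInt(value.substring(2, 4), 16),
       b: parseInt(value.substring(4, 6), 16)
   };
}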

After adding these changes, our component can be tested in every modern browser, including Edge!

12.3. Build processes

So far in this book, we’ve been doing things with no framework and no complicated workflows that do a bunch of stuff under the hood you aren’t aware of. It’s just us, a browser, and some HTML, JS, and CSS.

This isn’t the case with many modern web workflows. Many times, you won’t be running the same code in your browser as you write in your editor. There may be a build step in between. From using tools like Sass and LESS to compile your CSS to generating a big HTML file from various snippets you have organized in many different files, there are many reasons for building.

I could go on and on with reasons for using one or several build steps without even talking about JS. Frontend tasks like these, whether for HTML, CSS, or JS, are almost always run with Node.js. But which specific system should you use? The major ones that promise to do it all are Grunt and Gulp, but even systems designed to do one thing well tend to overlap with them. For example, Webpack is designed to bundle assets, but many of its tasks overlap with ones that Grunt and Gulp can do themselves.

With the web developer community releasing new tools every day, and a plethora of really solid build systems that can do it all, it can be confusing which tools to include in your toolbelt and what systems to use to orchestrate everything. Lately, though, there’s been a bit of a trend toward simplicity when possible.

12.3.1. Using NPM scripts

Before delving into why we might want to include a build process in our Web Component workflow, let’s talk about a simple way to run tasks. You’ve probably used Node.js, even if only to install something. To refresh your memory, npm is the piece of the Node.js ecosystem for installing the JS package of your choice.

For example, if you wanted to install the Web Component polyfills, you’d go to the root of your project directory, fire up the terminal, and run

npm install @webcomponents/webcomponentsjs

This package would get installed in your project root in a node_modules folder. Of course, as you add more and more packages, it’s easy to lose track, which is why you’ll want some record that keeps track of your dependencies like this as well as other details of your project. That’s why the package.json file exists. It’s easy to create a new one from scratch. Again in the terminal, at the root of your project, run

npm init

You’ll be guided through some questions to fill in the details of your project, like name, email, package name, and so on. With a package.json in place, if you were to run the previous command to install webcomponentsjs, it would be added to the dependencies list in the JSON.
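For example, after installing, the dependencies entry would look something like the following (the exact version number will vary depending on when you install):

"dependencies": {
  "@webcomponents/webcomponentsjs": "^2.0.0"
}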

Or, if it’s a dependency intended only for your project’s developer workflow and not part of your production release, you’d run

npm install @webcomponents/webcomponentsjs --save-dev

Dependencies aside, the package.json file has another pretty powerful aspect. A scripts object can be added to run whatever you need. We can try running something simple pretty easily.

Listing 12.7. A simple package.json script
{
 "name": "wcia",
 "version": "1.0.0",
 "scripts": {
   "test": "echo 'Hello from package.json'"       1
 }
}

  • 1 Script to run

Basically, anything you might run in the terminal can be added here. The simple test in listing 12.7 uses the Linux echo command, which prints whatever message you give it as a line in your terminal. Windows users don’t need to feel left out, either, thanks to the Windows Subsystem for Linux (WSL; https://docs.microsoft.com/en-us/windows/wsl/install-win10). With this, Windows users can run the same Linux commands as Mac or Linux users. Even prior to WSL, which is definitely not perfect yet, just installing Git for Windows (https://gitforwindows.org) allowed a limited set of Bash commands that might just be enough.
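To run the script from listing 12.7, use npm run followed by the script name (test is special enough that the shorter npm test works too):

npm run test

Running this prints the Hello from package.json message in your terminal.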

The reason to bring this up is that npm scripts are becoming more and more part of a developer workflow instead of big, complex build systems like Grunt or Gulp. There’s absolutely nothing wrong with build systems like these for complicated and numerous tasks as part of a workflow. However, when just running a few simple tasks, there’s no need for all of the complexity. Build systems tend to have a bit of a learning curve. Running many different tasks will require researching the plugins you need and ironing out kinks when they don’t work together, but it also means you don’t need to write every little task, like copying files, running CSS preprocessors, uploading to a server, file concatenation, HTML templating, and so on. But if you need only a few tasks, and they are very easy to code yourself, there’s no reason you can’t go simple.

Over the next two chapters, we’ll be exploring a few basic ways to build and test. While the build and test tools themselves have various levels of complexity, the commands to launch them are incredibly simple. Even if you’re on Windows without the aforementioned WSL and just using the Git Bash emulation, the commands we’re running work, with one caveat when running tests that I’ll mention in the next chapter. Hence, we’ll be avoiding build systems as we explore build processes here, which allows us to focus on the specific tasks we’re running while avoiding lots of setup that’s not directly relevant to what we need to run. Most importantly, the choice of build system is up to you, should you want to use one.

12.4. Building components

Web Components are really no different than anything else in terms of how and why we’d use a build step for our JS. And just like anything else, complexity can grow as our project or component needs grow. What’s not clear quite yet is why we should build at all.

12.4.1. Why we build

There are numerous reasons to run a JS build process. One increasingly common reason is that a developer might prefer another language besides JS to code in. CoffeeScript was a popular language for writing web applications years ago, though these days, Microsoft’s TypeScript is the most popular non-JS language to create web applications with. TypeScript isn’t a completely different language, however—it’s a superset of JS with the addition of typed variables. It also offers the newest proposed JS features that haven’t made it into browsers yet. In fact, the publisher of this book has two really solid books on TypeScript that recently came out.

TypeScript is becoming more and more relevant for Web Component work as well. In addition to its standing as a popular language to work with in general, the LitElement and lit-html projects by Google’s Polymer team are written in TypeScript. Though writing your code with newer language features like decorators isn’t required, it’s strongly encouraged, as most of the examples are written this way.

It’s not just CoffeeScript and TypeScript, either—there are tons of languages that developers use to run code on the web. All of these languages have one thing in common, however. They don’t actually run in the browser. Your code is written in the language of your choice but then transpiled to JS so that it can run.

If transpiling sounds like a foreign concept, it’s very similar to compiling. Both allow you to write code and transform it to something that runs on the platform of your choice, like the web. Compiling means you’re targeting a lower level of abstraction, like bytecode. The output from a compiler is basically unreadable by human eyes.

Compilation can be almost thought of like capturing the audio of a spoken language and saving it as an audio waveform. It’s impossible to make out what someone is saying by looking at an audio waveform in your favorite sound editor, but you can certainly play it back and understand what’s being said perfectly fine.

Transpiling can be thought of more like translation, like from Spanish to English. Both Spanish and English are very readable languages if you know them, but, if not, a translation step helps you read in your native language.

Transpiling isn’t even about writing things in an entirely new language, either. Over the years, JS has added many new and exciting features, especially in 2015, when the ES6/ES2015 standard was released. Developers couldn’t use these features right away, though. Even if their favorite browser supported them, not all browsers did. Even now, while the major modern browsers have great support for ES6/ES2015 features, developers may want to target older browsers like IE. Even if that’s not the case, there are some great, brand-new JS language features or even experimental ones that developers want to use that don’t have any browser support just yet. For these types of use cases, Babel (https://babeljs.io) is likely the most widely used JS transpiler today.

Another big reason for modifying your JS code is to take many different source files and put them in a larger one. When the source code for your application starts growing into hundreds or thousands of lines of code, it’s bad practice to put all of your JS in one big file. For one thing, when working with a team, it’s easier to step on each other’s toes when modifying the same file. Second, your project is way more organized when the JS is properly split up. Pieces of functionality are easier to find when you don’t have to go hunting through one huge file. Additionally, when organized into smaller, well-named files, it’s easier to look at a project’s file structure and get a sense of what it does and how things work.

Despite the developer workflow improvements from smaller files, it’s better for the browser to have everything together in one file or, even better, smartly bundled into files that are loaded when functionality is needed. When things are bundled together, there are fewer network requests for the browser to handle. This is important because browsers have a maximum number of simultaneous requests. Also, scripts may be slow to load due to network conditions. You can start to imagine what weird things might happen to your application when some functionality is present, but another script it depends on isn’t available yet because the network request is taking too long.

Prior to ES6/ES2015 modules, and ignoring similar solutions like RequireJS, JS code in separate files would be simply bundled together through concatenation tools. Essentially, concatenation just means putting the contents of each JS file into a bigger one in the order you specify. We still do something similar, but with modules, things need to be a bit smarter. Automated tools need to go through your code, track down the modules you reference through the import keyword, and bundle that into the final output. Even smarter, the better tools employ a method called tree shaking. If you import a module and don’t happen to use it anywhere in your code, it won’t bundle that particular module. Tree shaking is a smart way to ensure smaller JS bundles that include only the code you need.
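As a tiny illustration of tree shaking, consider these two hypothetical files:

// math.js
export function add(a, b) { return a + b; }
export function multiply(a, b) { return a * b; }

// main.js
import { add } from './math.js';
console.log(add(2, 3));

Because multiply is never used anywhere, a tree-shaking bundler like Rollup can drop it from the final output entirely, shipping only add.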

Tools like Webpack (https://webpack.js.org) differentiate themselves even more by allowing you to create multiple output bundles and bundling more file types than just JS. These bundles are organized by functionality you’d need to run specific areas of your application. Web applications can be huge, and you may think of your application organized into different sections.

For example, if you were working on a banking web app, a user might view their recent transactions in one section but never visit another section to see their account info. There’s no reason in this scenario to force the user to load a bundle containing JS related to account info. Therefore, while the banking application could be one big module, it’s smarter to organize it into several bundles for each section of the application. Figure 12.1 highlights these main differences between simpler tools, like Rollup, and more complex tools, like Webpack.

Figure 12.1. Rollup vs. Webpack

Again, we’re back to a plethora of tools we can use! Either way, both transpiling and bundling are two major motivators behind having a build process for your JS.

12.4.2. Module bundling with Rollup

Although there are a good many tools for module bundling, Webpack has historically been fairly tricky and complex to set up for the easiest of tasks, while Rollup has been the simple but not as configurable alternative. Recent Webpack releases have changed how steep the learning curve is for doing simple things, while newcomer Parcel.js (https://parceljs.org) has gained popularity as well!

We just need to pick one to move forward with, and with these three great options in mind, I’d like to pick Rollup (https://rollupjs.org), as I have the most experience with it and appreciate its simplicity for getting up and running quickly. As with any npm install for a project, be sure to create a package.json file at the root of your project. Then, in the terminal, cd to your project root and run

npm install --save-dev rollup

Note that we used the --save-dev option here. Rollup will be added to your package.json as a dev dependency, meaning you don’t intend to do anything with Rollup besides have it help you with your development and build process. It’s not code that’s intended to be shipped with your component. Once finished, your package.json looks like the following listing (varying, of course, by how you named and versioned your project).

Listing 12.8. A package.json file after installing Rollup
{
 "name": "wcia",
 "version": "1.0.0",
 "dependencies": {},
 "devDependencies": {
   "rollup": "^1.0.2"         1
 }
}

  • 1 Rollup developer dependency

Something interesting to note is that you could have installed Rollup (or any package in general) globally with the -g option, like this:

npm install rollup -g

When doing a global install, Rollup could be run directly from your terminal, anywhere on your computer, simply by issuing the rollup command with some parameters. Here, we installed locally instead, as part of the project. As a local install, Rollup can still be run in your terminal with the shorthand rollup command because the install path is likely added to your environment variables. I still don’t trust this! If you had several different Rollup installs on different projects, you’d be rolling the dice on which one you’re actually using. Instead, I like to be a bit more exact and execute it from within my project at node_modules/.bin/rollup. This seems a bit more complicated, but is more widely accepted than having a global install.

The reason it’s better is that if you wanted to get a team member set up with your project and tooling globally, you’d need to give them a handwritten list of everything they need to install to work with your project, which they’d install one by one. If there are a lot of dependencies, it’s easy to forget certain things, and it becomes a pain to debug why their build process doesn’t work. Instead, with a local install, everything they need is right there in the package.json and can be installed in one go with npm install.

It’s still a bit of a pain to have to type that whole path every time you want to run a build. The command becomes even longer as we add the parameters to indicate where the main JS entry point is, where the output file should be, and what its name is. That’s why we can make an entry in our package.json scripts object and add the command there.

Before we do that, however, we should change our Web Component structure a tiny bit. As an example, let’s start with the slider component from the last chapter, which was a small piece of the color picker component. Figure 12.2 shows its simple file structure along with the other components and design system modules.

Figure 12.2. Slider component files

Again, though the slider component worked perfectly in our local development environment (and is honestly so small it would probably work fine on the web), we’ll want to create a bundle such that the end user will load all the modules (slider.js, template.js, and all of the relevant bits of the custom-made design system). Those files should now be considered source files that aren’t directly consumed by end users. As such, we’ll create a src folder in each component directory and put the slider.js and template.js inside. We’ll do this for the other components as well. Figure 12.3 shows the new folder structure.

Figure 12.3. Slider and other component files with source folder

With this new folder structure, the input file for Rollup is now located at components/slider/src/slider.js. Nothing about the code inside this file changes except for one small detail. The good news is that our import paths are mostly relative to the component, so they shouldn’t need to change. When we import the Template module, it’s still located at ./template.js. The annoying bit is that when we fixed the transparency for Edge, we used the Color module from the color picker component. So now, instead of

import Color from '../../colorpicker/color.js';

we’ll need to change to

import Color from '../../colorpicker/src/color.js';

In the end, the output can be created where the original slider.js used to be. Those two parameters are the main ones Rollup needs to function! The command we’ll be running is

./node_modules/.bin/rollup chapter12and13/components/slider/src/slider.js
   --file chapter12and13/components/slider/slider.js --format umd
   --name slider -m

The complete directory path includes “chapter12and13” to match this book’s GitHub repo. The very first parameter is the location of the slider component’s source file. As the only required parameter, this first parameter is also the only one that doesn’t need a flag.

Second, we’ll need to specify the output file, passed by preceding the parameter with --file. Next is the output format, denoted by --format. There’s no right answer here, but I suggest using Universal Module Definition (UMD). When bundling as UMD, the JS can be loaded in a variety of ways. Two of those ways are CommonJS and Asynchronous Module Definition (AMD), which can be used in a variety of different scenarios, including with RequireJS. The last method that UMD enables is via a simple global definition, where no JS loading mechanisms are assumed. UMD attaches the slider component to the window as a global variable accessible anywhere from your page.
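If you’re curious what that looks like, the UMD wrapper Rollup generates is conceptually similar to this simplified sketch (the real generated code is more thorough):

(function (global, factory) {
   if (typeof exports === 'object' && typeof module !== 'undefined') {
       module.exports = factory();      // CommonJS (e.g., Node.js)
   } else if (typeof define === 'function' && define.amd) {
       define(factory);                 // AMD (e.g., RequireJS)
   } else {
       global.slider = factory();       // simple global definition
   }
}(this, function () {
   /* ...bundled component code... */
}));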

What’s the name of this global variable? That can easily be answered by using the --name parameter. We’ll call ours slider. Now, as a global variable, window.slider exists, but we’ll likely never use it since our component is set up automatically. You may want to be a bit more careful than me and use a name that could never be conflicted with in your application. Your component’s namespace could be a good candidate to include here, like MyNamespaceSlider, or your application name could be used—just something to make it unique.

An obvious question is whether we’re forgoing the ability to use the slider component as a normal ES6/ES2015 module, as we have been. We aren’t! If the larger application that contains the slider wants to import the module, it could easily import src/slider.js and use it, ignoring the generated bundle. This larger application could then bundle the application itself plus all the components within, using Rollup or whatever module bundler it prefers.
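For example, a consuming application could still import the source module directly, ignoring the UMD bundle entirely (the path here assumes the folder structure we just set up):

import Slider from './components/slider/src/slider.js';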

The very last flag, -m, turns on “source map” generation. If you’re not familiar with source maps, they bridge the generated output to the original source files. The “map” piece is a file with a .map extension, which is fairly unreadable by human eyes but contains lookup information to make this bridge possible. This might sound kind of meaningless without seeing it in action. You can try it yourself after we run the build, but figure 12.4 shows source maps in action. I’ve forced an error in my code. Though we’re using the output bundle we’ll generate next, our error shows the exact line where the error occurred in our source.

Figure 12.4. Source maps show where an error occurred in your source files, even when bundling output.
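Concretely, the -m flag makes Rollup write a slider.js.map file next to the bundle and append a comment like the following to the end of the generated slider.js, which tells the browser’s dev tools where to find the map:

//# sourceMappingURL=slider.js.map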

12.4.3. Running builds with npm

Now that we know how to build with Rollup, have planned the component’s file structure a little better, and know what to expect for the bundled output, let’s simplify bundling with Rollup. As discussed previously, we can easily add the Rollup bundle command to our package.json file. Normally, something simple would suffice. We could just call the task build and move on, like in the next listing.

Listing 12.9. Adding a Rollup script in package.json
{
 "name": "wcia",
 "version": "1.0.0",
 "dependencies": {},
 "devDependencies": {
   "rollup": "^1.0.2"
 },
 "scripts": {
   "build": "./node_modules/.bin/rollup
     chapter12and13/components/slider/src/slider.js --file
     chapter12and13/components/slider/slider.js --format umd
     --name slider -m"                                       1
 }
}

  • 1 Rollup build script

So now, instead of typing a long and complicated command to build, we can simply run the new build command in the terminal at the root of the project:

npm run build

Ideally, the entire project would be this one slider component. We could then npm install the slider and use it in whatever project requires it (like the color picker). However, the way I set up the color picker project for this book, all of the components are together in the same project (and in the same chapter 12 folder). So, planning a strategy to accommodate this might be a bit of an odd challenge, but it actually exposes a neat way to run scripts.

We can start by adding two more build scripts into the package.json, as seen in the next listing. Since there are now three in total, we should be a little more specific in how we name them than just “build.”

Listing 12.10. Scripts to run each component build
{
 "name": "wcia",
 "version": "1.0.0",
 "dependencies": {},
 "devDependencies": {
   "rollup": "^1.0.2"
 },
"scripts": {
   "build-slider": "./node_modules/.bin/rollup
     chapter12and13/components/slider/src/slider.js --file
     chapter12and13/components/slider/slider.js --format umd
     --name slider -m",
   "build-coordpicker": "./node_modules/.bin/rollup
     chapter12and13/components/coordpicker/src/coordpicker.js --file
     chapter12and13/components/coordpicker/coordpicker.js --format umd
     --name coordpicker -m",                                             1
   "build-colorpicker": "./node_modules/.bin/rollup
     chapter12and13/components/colorpicker/src/colorpicker.js --file
     chapter12and13/components/colorpicker/colorpicker.js --format umd
     --name colorpicker -m"                                              2
 }
}

  • 1 Rollup task for coordinate picker component
  • 2 Rollup task for color picker component

You may now be thinking that we have three commands to run instead of the one, but we can combine scripts! The ampersand or double ampersand isn’t strictly an npm thing. Instead, it’s just standard Linux, and we can use ampersands to combine commands in the package.json scripts. A single ampersand runs commands in parallel, and a double ampersand runs them one after another. Additionally, we can reference other scripts by name in any new commands. We’re going to add another build task after we finish covering Rollup, so let’s not call this new script build just yet. Instead, we’ll call it build-rollup:

"build-rollup": "npm run build-slider && npm run build-coordpicker && npm
    run build-colorpicker"

With build-rollup part of the npm scripts now, all three components can be built just by running

npm run build-rollup

Please note, however, that if you are on Windows, this ampersand approach won’t work without using WSL, the Git Bash emulator, or something similar.

12.5. Transpiling for IE

I mentioned an additional build step for our components. As of now, the color picker and two child components work in all major browsers, including Edge if we toggle the Shadow DOM off. As mentioned before, Edge will soon be updated to run Chromium behind the scenes and will natively support Web Components.

That leaves us with one problem browser: IE11. It’s troubling because of its age and lack of updates. Modern browsers auto update, and web developers typically only have to worry about the latest few versions of each browser. So, we usually get to use the latest features in fairly short order, assuming all of the browsers keep up with each other. The thorn in our side here is IE. As IE11 is the last version that will ever be released, we’re stuck with the features it currently has. Some of us web developers have been able to ignore it as a requirement because its usage is so low, and Microsoft recommends Edge now for Windows users. But not all web developers are that lucky, and it’s still a requirement.

Not only does IE not support Web Components like the current version of Edge does, but it also does not support ES6/ES2015 language features like classes and fat arrow functions. We discussed transpiling earlier in this chapter as a way to do things like translate a language such as TypeScript or CoffeeScript to JS, but we can use it now to solve the IE issue as well by transpiling newer JS to older JS.

12.5.1. Babel

The most popular tool to solve these issues is Babel (https://babeljs.io). We’ll need to npm install a few packages to make Babel work:

  • Babel Core—The main feature set of Babel.
  • Babel CLI—Tooling to use Babel on the command line.
  • Babel preset-env—A standard setup that takes the complicated configuration work out of Babel.

Let’s go ahead and install these as dev dependencies in the root of the project because, like Rollup, this is all just build tooling and won’t be part of a component release:

npm install --save-dev @babel/core

npm install --save-dev @babel/cli

npm install --save-dev @babel/preset-env

After install, since they were saved, these dependencies get added to the package.json. As of now, the following listing reflects the latest.

Listing 12.11. The latest package.json including Babel dependencies
{
 "name": "wcia",
 "version": "1.0.0",
 "dependencies": { },
 "devDependencies": {
   "@babel/cli": "^7.2.3",             1
   "@babel/core": "^7.2.2",            2
   "@babel/preset-env": "^7.2.3",      3
   "rollup": "^1.0.2",
 },
 "scripts": {
   "build-slider": "./node_modules/.bin/rollup
     chapter12and13/components/slider/src/slider.js --file
     chapter12and13/components/slider/slider.js --format umd
     --name slider -m",
   "build-coordpicker": "./node_modules/.bin/rollup
     chapter12and13/components/coordpicker/src/coordpicker.js --file
     chapter12and13/components/coordpicker/coordpicker.js --format umd
     --name coordpicker -m",
   "build-colorpicker": "./node_modules/.bin/rollup
     chapter12and13/components/colorpicker/src/colorpicker.js --file
     chapter12and13/components/colorpicker/colorpicker.js --format umd
     --name colorpicker -m",
   "build-rollup": "npm run build-slider && npm run build-coordpicker && npm
     run build-colorpicker",
 }
}

  • 1 Babel command line tooling
  • 2 Babel core library
  • 3 Babel preset environment for easy setup

With the requirements installed, Babel is super easy to use. Again, like Rollup, since we installed as a local instead of a global dependency, the Babel executable can be found in node_modules/.bin/babel.

Babel does not, however, solve module bundling. For this, we need an extra step. Plugins exist to take care of this extra step as part of the Rollup process. However, we’re starting to venture into territory where this entire setup is very opinionated and really depends on the needs of your project. For these components, my opinion is that we should make a different build for IE than what we’d deliver to modern browsers. The reason I think we should have different builds is that it’s unnecessary to overburden modern browsers with bulky transpiled code when there’s no reason to. But maybe having multiple builds hurts the simplicity of component delivery for you and your team. Ultimately, the choice is up to you, but for right now, I’m deciding on delivering two versions.

Since the Rollup bundle already exists, we can simply use that as a pre-bundled source that gets fed into Babel, so long as we’re careful to build it first in the build process. If you’re of the different opinion that these components would be better served by a single output file, Rollup can be configured to add this step with some extra configuration. It really depends on your use case and how your component will be consumed. For us, figure 12.5 represents our build pipeline.

Figure 12.5. The color picker build pipeline includes two builds, one for modern browsers and the other for IE11.

A Babel configuration file is needed, however, to use the preset-env settings. It’s really simple, though. At the root of the project, just create a .babelrc file containing the following:

{
 "presets": ["@babel/preset-env"]
}

That last bit is all the setup needed to run a Babel transpile. We’re just telling it to use the preset Babel settings in one line. Next, to run the command with that setting in place, you’ll just need to run the Babel command with an input file and an output file:

./node_modules/.bin/babel chapter12and13/components/slider/slider.js
   --out-file chapter12and13/components/slider/slider.build.js

The first parameter is the input, and, again, it's the bundled output from Rollup. We'll put the output in the same place, just named slightly differently: slider.build.js. Surprisingly, unlike many commands you might run, this one won't produce any output in your terminal. You can easily verify that it worked by checking for the file it creates.

Just like we did with the three Rollup scripts in the package.json file, we can add scripts for transpiling with Babel. The next listing shows the three new build scripts.

Listing 12.12. A Babel transpile step for each component
"build-slider-ie": "./node_modules/.bin/babel
  chapter12and13/components/slider/slider.js --out-file
  chapter12and13/components/slider/slider.build.js",
"build-coordpicker-ie": "./node_modules/.bin/babel
  chapter12and13/components/coordpicker/coordpicker.js --out-file
  chapter12and13/components/coordpicker/coordpicker.build.js",
"build-colorpicker-ie": "./node_modules/.bin/babel
  chapter12and13/components/colorpicker/colorpicker.js --out-file
  chapter12and13/components/colorpicker/colorpicker.build.js",

Again, like we did with Rollup, these commands can be chained into a single transpile step with double ampersands (&&):

"build-ie": "npm run build-slider-ie && npm run build-coordpicker-ie &&
npm run build-colorpicker-ie"

Of course, to transpile all three, we could use the terminal and run

npm run build-ie
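One small simplification worth knowing about: when npm runs a script, it automatically puts node_modules/.bin on your PATH, so inside package.json the ./node_modules/.bin/ prefix is optional. The slider transpile script, for example, could be shortened to:

"build-slider-ie": "babel chapter12and13/components/slider/slider.js
  --out-file chapter12and13/components/slider/slider.build.js",

I'll keep the explicit prefix in the listings to match the commands we've been running directly in the terminal, but either form works.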

Even better, let’s create a single script that bundles and transpiles. The next listing shows the complete package.json with a new “build” script.

Listing 12.13. Current package.json with Rollup bundling and Babel transpilation
{
 "name": "wcia",
 "version": "1.0.0",
 "dependencies": { },
 "devDependencies": {
   "@babel/cli": "^7.2.3",
   "@babel/core": "^7.2.2",
   "@babel/preset-env": "^7.2.3",
   "rollup": "^1.0.2",
 },
 "scripts": {
   . . . previously added scripts . . .
   "build": "npm run build-rollup           1
   && npm run build-ie"
 }
}

  • 1 New build script that bundles and transpiles all the components

And now, we’re back to a sane and easy-to-remember build process. Just use npm run build in your terminal, and all three components will be bundled and transpiled so IE can run perfectly!

Since I made the decision to have two different outputs, it makes sense to have two different HTML files: one for IE, and the other for everything else. Of course, the file structure has changed with the addition of the source folder. Personally, I think it makes sense to point the main demo at the original source files instead of the bundled Rollup output, so we get instant feedback during development. Adding a Rollup "watch" task could do the same job in a more complex setup that constantly runs while you develop, but in the interest of keeping things basic, we'll just change the path slightly in demo.html:

<script type="module" src="src/slider.js"></script>

To get the IE demo to run, the <script> tag needs to be changed even more. As modules aren’t supported, it cannot contain type="module" anymore. We’ll create a different demo file for IE only called demo-ie.html. The <script> tag will be the only thing that changes so far:

<script src="slider.build.js"></script>

Of course, this step will be repeated for the other two components. Figure 12.6 shows the one component’s structure with output files.

Figure 12.6. Project file structure with bundled and transpiled output. Tools like WebStorm, pictured here, make the JS file look like a directory to hide the complexity of generated files like source maps, even though it's actually a flat file structure.

12.5.2. CSS vars ponyfill

On further review of the components when testing in IE11 using the new demo file, things are a little less than perfect. Figure 12.7 shows some visual discrepancies. Otherwise, everything works just fine.

Figure 12.7. Looking a little broken in IE11

This isn’t a Web Components problem at all, but we did use CSS vars to make the components flexible in terms of style. CSS vars enabled us to tweak a global border radius, text color, and so on and affect everything on the page. The downside is that it’s a newer feature. Even with widespread browser support for CSS vars, IE11 just hasn’t added features lately, so it will fail to use them. Does this mean we need to back off CSS vars? Nope—like many features, we can make do. Normally I’d say “polyfill,” but in this case, I’ll be using a “ponyfill.”

To be honest, I hadn’t heard of ponyfills prior to researching how to handle CSS vars in IE11. Polyfills tend to modify the runtime environment of the browser. For example, when polyfilling Web Component Custom Elements, a global is created called customElements to match modern browsers where this is already present. Adding this global means that we’re modifying the browser, specifically adding the features provided to its global space. Ponyfills promise to not modify the browser environment when making unsupported features work.

The CSS vars ponyfill isn’t completely drop-in, meaning we’ll need to call a function to make it run. First, now that we have a package.json file, let’s install the ponyfill with npm. Since it is a client-side dependency, we’ll save it, but not as a dev dependency like the other build tools:

npm install css-vars-ponyfill
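Since npm saves installed packages under dependencies by default, this gives package.json its first runtime dependency. The exact version will be whatever is current when you run the install, but the new section will look something like this:

"dependencies": {
  "css-vars-ponyfill": "^1.0.0"
},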

With this installed, the ponyfill can be added to each demo-ie.html file:

<script src="https://unpkg.com/@webcomponents/custom-elements"></script>
<script src="https://cdn.jsdelivr.net/npm/css-vars-ponyfill@1"></script>
<script src="slider.build.js"></script>

I’ll note that in my <script> tag, I’m using an online version just to give some awareness that it exists, but you could use that one or swap in the one that just installed at node_ modules/css-vars-ponyfill/dist/css-vars-ponyfill.js.

As mentioned, the css-vars-ponyfill isn’t a drop-in solution. We still need to call a function for it to do its job. It works by processing <style> tags on the page and swapping in CSS that IE will be able to understand. Since the component CSS isn’t available until after setting the innerHTML in each one, we’ll run the ponyfill after that. The next listing shows the slider component’s connectedCallback with the CSS vars ponyfill in place.

Listing 12.14. Adding the CSS vars ponyfill to allow existing CSS vars to work in IE
connectedCallback() {
   if (!this.initialized) {
       this.root.innerHTML = Template.render({ useShadowDOM:
         Slider.USE_SHADOWDOM_WHEN_AVAILABLE && this.attachShadow });
       this.dom = Template.mapDOM(this.root);
       if (typeof cssVars !== 'undefined') {            1
           cssVars();                                   2
       }
       this.initialized = true;

       if (this.backgroundcolor) {
           this.setColor(this.backgroundcolor);
       }
       if (this.value) {
           this.refreshSlider(this.value);
       }
   }
}

  • 1 Tests if the ponyfill exists as added through the <script> tag on the demo page
  • 2 Calls the cssVars function to replace CSS vars in the browser

As we’ve just placed the script on the page, our usage just dictates that the cssVars function is attached to the global space (the opposite of how I described a ponyfill). This solution does exist as a module, however, that we could import and run that way. Here, though, we’re giving component consumers an opportunity to use the ponyfill or not based on if they added the script or not. Note that the syntax of how I check is a little weird. If I simply checked !cssVars when it didn’t exist, we’d get an error stating that cssVars is undefined, since it’s not a property of anything and could be just an undefined variable in the scope we’re checking in. So we’re being a little more careful in order to not throw the error by looking at its type.

Summary

In this chapter, you learned

  • A simple way to run scripts using npm and your package.json without having to rely on more complex build systems that require lots of setup
  • Reasons for a build step, whether bundling your code for production or transpiling to let newer JS features work in older browsers
  • How bundling is good for combining your code into one or more files, while intelligently leaving out unused imports