Chapter 1. Introduction

Extraordinary claims require extraordinary evidence.

Dr. Carl Sagan

I believe WebAssembly is an ascendant technology that is in the process of transforming our industry. I do not believe WebAssembly is going to be transformative because I am writing a book on the topic. I am writing a book on it because I believe it will be transformative. So, how did I come to this conclusion and why do I think it is a good use of your time to learn more about these emerging standards from the World Wide Web Consortium (W3C)?

One of the greatest skills an engineer can develop is the ability to assess what a new technology brings to the table. As Dr. Fred Brooks of the University of North Carolina reminds us, there are no “silver bullets”1; everything has tradeoffs. Complexity is often not eliminated by a new technology, but simply moved somewhere else. So when something actually does change what is possible or how we do our work in a positive direction, it deserves our attention and we should try to figure out why.

When trying to understand the implications of something new, I usually start by trying to determine the motivation of those behind it. Another good source of insight is where an alternative has fallen short. What has come before and how does it influence this new technology we are trying to decipher? As in art and music, we are constantly borrowing good ideas from multiple sources, so to truly understand why WebAssembly deserves our attention and what it provides, we must first look at what has preceded it and how it makes a difference.

In the paper that formally introduced the world to WebAssembly2, the authors indicate that the motivation was about rising to meet the needs of modern, web-delivered software in ways that JavaScript alone could not. Ultimately it was a quest to provide software that is:

  • safe

  • fast

  • portable

  • compact

In this vision, WebAssembly is centered at the intersection of software development, the web, its history and how it delivers functionality in a geographically-distributed space. Over time the idea has expanded dramatically beyond this starting point to imagine a ubiquitous, safe, performant computational platform that touches every aspect of our professional lives. WebAssembly will impact the worlds of client-side web development, desktop and enterprise applications, server-side functionality, legacy modernization, games, education, cloud computing, mobile platforms, Internet of Things (IoT) ecosystems, serverless and microservices initiatives and more. I hope to convince you of this over the course of this book.

Our deployment platforms are more varied than ever, so we need portability at both the code and the application levels. A common instruction set or byte code target can make algorithms work across various environments because we only need to map logical steps onto how they can be expressed on a particular machine architecture. Programmers use application programming interfaces (APIs) such as OpenGL3, POSIX4 or Win325 because they provide the functionality to open files, spawn subprocesses or draw things to the screen. They are convenient and reduce the amount of code a developer needs to write, but they create a dependency on the presence of libraries to provide the functionality. If the API is not available in a target environment, the application will not run. This was one of the ways Microsoft was able to use its strength in the operating system marketplace to dominate the application suite space as well. On the other hand, open standards can make it easier to port software into different environments.
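To make this dependency concrete, here is a minimal sketch in C (an illustration of my own, not code we will use later) that leans on the POSIX API. On a platform that does not provide POSIX, or an emulation layer for it, the program will not even compile, let alone run:

#include <stdio.h>
#include <unistd.h>   /* POSIX-specific header */

int main() {
  char name[256];

  /* gethostname() is provided by the POSIX API, not by the C language
     itself, so this program is tied to hosts that implement POSIX. */
  if (gethostname(name, sizeof name) == 0) {
    printf("Running on host: %s\n", name);
  }
  return 0;
}

The logic is trivial, but the program’s fate is bound to whether the host environment supplies the API it was written against.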

Another issue with the runtime side of the software we are building is that different hosts have different hardware capabilities (number of cores, presence of GPUs) or security restrictions (whether files can be opened or network traffic can be sent or received). Software often adapts to what is available by using feature-testing approaches to determine what resources an application can take advantage of, but this often complicates the business functionality. We simply cannot afford the time and money to rewrite software for multiple platforms constantly. We need better strategies for reuse. We also need this flexibility without having to change the code to support each environment in which it will run; doing so increases the complexity of our testing and deployment strategies.

After several decades, the value proposition of open source software is clear. We gravitate toward valuable, reusable components written by other developers as a means of satisficing6 our own needs. However, not all available code is trustworthy and we open ourselves up to software supply chain attacks when we execute untrusted bits we have downloaded from the internet. We become vulnerable to the risks, business impacts and personal costs of insecure software systems through phishing attacks, data breaches, malware and ransomware.

Until now, JavaScript has been the only real story for solving some of these problems. When it is run in a sandboxed environment, it gives us some measure of security. It is ubiquitous and portable. The engines have gotten faster. The ecosystem has exploded into a cavalcade of productivity. Once you leave the confines of browser-based protections, however, we still have security concerns. There is a difference between JavaScript code running as a client and JavaScript running on the server. The single-threaded design complicates long-running or highly-concurrent tasks. And, due to its design as a dynamic language, several classes of optimizations that are available to other programming languages are, and will remain, unavailable to even the fastest and most modern JavaScript runtimes.

Additionally, it is too easy to add JavaScript dependencies without realizing how much baggage and risk is being pulled in transitively. Developers who do not take the time to consider these decisions carefully end up encumbering every aspect of upstream software testing, deployment and use. Each of these scripts has to be transferred over the network and then loaded and validated, which slows down the time to use and makes everything feel sluggish. When a dependent package is modified or removed, it has the potential to disrupt enormous amounts of deployed software7.

There is a perception among casual observers that WebAssembly is an assault on JavaScript, but that simply is not the case. Sure, you will be able to avoid JavaScript if you want to, but it is mostly about giving you options to solve problems in the language of your choice without requiring a separate runtime or having to care what language another piece of software is written in. It is already possible to use a WebAssembly module without knowing how it was built. This is going to increase the lifetime of business value we get out of our software and yet simultaneously allow us to innovate in adopting new languages without impacting everything else.

We have experienced several tools, languages, platforms and frameworks over the course of the past several decades that have attempted to solve these problems, but WebAssembly represents one of the first times we are getting it right. Its designers are not attempting to overspecify anything. They are learning from the past, embracing the web and applying problem space thinking to what is ultimately a hard and multi-dimensional problem. Let’s look at the formative influences on this exciting new technology before we dive into it further.

History of the Web

There is a running joke in the WebAssembly community8 that WebAssembly was “neither web nor assembly”. While this is true on some levels, its potential as a target platform that was vaguely assembly-esque was certainly amplified by our concept of zero-installation delivery of functionality over the web. One of the main distinctions between “conventional software development” and “web development” is the fact that there is effectively no installation required with the latter once you have a browser available. This is an enormous game changer in terms of the cost of delivery and the ability to quickly turn around new releases in the face of bugs and feature requests. Couched in cross-platform technology ecosystems such as the internet and the web, it also makes supporting multiple hardware and software environments much easier.

Sir Tim Berners-Lee, the inventor of the World Wide Web, worked at the European Organization for Nuclear Research (CERN)9, where he submitted a proposal for interlinking documents, images and data in pursuit of CERN’s larger research goals. Even though the impact is clear in hindsight, he had to advertise his ideas internally several times before he was asked to act upon them10. As an organization, CERN was represented by dozens of research facilities around the world that sent scientists with their own computers, applications and data. There was no real capacity to force everyone to use the same operating systems or platforms, so he recognized the need for a technical solution to the problem.

Prior to the web, there were services such as Archie11, Gopher12 and WAIS13, but he imagined a more user-friendly platform that was ultimately engendered as an application-level innovation on top of the internet’s layered architecture. He also took ideas from the Standard Generalized Markup Language (SGML)14 as the basis of the Hypertext Markup Language (HTML).

The results of these designs quickly became the major mechanism for delivering information, documentation and eventually application functionality to the world. It did so without requiring the various stakeholders to agree on specific technologies or platforms by defining standards for the exchanges themselves. This included both how requests were made and what was returned in response. Any piece of software that understood the standards could communicate with any other piece of software that did as well. This gave us freedom of choice and the ability to evolve either side independently of the other.

History of JavaScript

The web’s interaction model is called the HyperText Transfer Protocol (HTTP). It is based upon a constrained set of verbs for exchanging text-based messages. While it was a simple and effective model that was easy to implement, it was quickly seen to be inadequate to the task of modern, interactive applications because of the inherent latency of constantly returning to the server. The idea of being able to send code down to the browser has always been compelling. If it ran on the user’s side of the interaction, not every activity would require a return to the server. This would make web applications dramatically more interactive, responsive and enjoyable to use. How to achieve this was not entirely clear, though. Which programming language would make the most sense? How would we balance expressive power with shallow learning curves so more individuals could participate in the development process? Which languages performed better than others, and how would we protect sensitive resources on the client side from malicious software?

Most of the innovation in the browser space was originally driven by Netscape Communications Corp. Believe it or not, the Netscape browser was originally a paid piece of software,15 but their larger interest was in selling server-side software. By extending what was possible on the client, they could create and sell more powerful and lucrative server functionality.

At the time, Java was emerging from its beginnings as an embedded language for consumer devices, but it did not yet have much of a track record of success. It was a compelling idea as a simplified version of C++ that ran on a virtual platform and was therefore inherently cross-platform. As an environment designed to run software downloaded over the network, it had security built in via language design, sandboxed environments and flexible permission models.

Porting applications between various operating systems was tricky business and the prospect of not needing to do so created a frenzy around what the future of software development would be. Sun Microsystems found itself in the enviable position of having a solution to a perfect storm of problems and opportunities. Given this potential, discussions were underway to bring Java to the browser, but it was not clear what that deal would look like or when it would land.

As an Object-Oriented Programming (OOP) language, Java contained sophisticated language features such as threads and inheritance. There was concern at Netscape that this might prove too difficult for non-professional software developers to master, so they hired Brendan Eich to create a “Scheme for the browser”16, imagining an easier, lightweight scripting language. Brendan had the freedom to make some decisions about what he wanted to include in the language, but was also under pressure to get it done as quickly as possible. A language for interactive applications was seen as a crucial step forward for this emerging platform and everyone wanted it yesterday. As Sebastian Peyrott notes in the blog post just cited, what emerged was “a premature lovechild of Scheme and Self, with Java looks.”

Initially JavaScript in the browser was limited to simple interactions such as dynamic menus, pop-up dialogs and responding to button clicks. These were significant advances over roundtrips to the server for every action, but JavaScript was still a toy compared to what was possible on desktop and workstation machines at the time.

The company I worked for during the early days of the web created the first whole-earth visualization environment involving terabytes of terrain information, hyperspectral imagery and video frames pulled from drone footage17. This of course required Silicon Graphics workstations initially, but it was eventually able to run on PCs with consumer-grade graphics processing units (GPUs) within a couple of years. Nothing like that was remotely possible on the web back then, although, thanks to WebAssembly, that is no longer true18.

There was simply no confusing real software development with web development. As we have noted, though, one of the nice things about the separation of concerns between the client and the server was that the client could evolve independently of the server. While Java and the Java Enterprise model came to dominate the backend, JavaScript evolved in the browser and eventually became the dominant force that it is.

Evolution of the Web Platform

As Java applets and JavaScript became available in the Netscape browser, developers began to experiment with dynamic pages, animations and more sophisticated user interface components. For years these were still just toy applications, but the vision had appeal and it was not difficult to imagine where it could eventually lead.

Microsoft felt the need to keep up, but was not overly interested in directly supporting their competitors’ technologies. They (rightly) felt that web development might eventually upend their operating system dominance. When they released Internet Explorer with scripting support, they called it JScript to avoid legal issues and reverse-engineered Netscape’s interpreter. Their version supported interaction with Windows-based Component Object Model (COM) components and had other twists that made it easy to write scripts that were incompatible between the browsers. Their initial support of the efforts to standardize JavaScript as ECMAScript waned for a while and eventually the Browser Wars19 began. This was a frustrating time for developers, one that ultimately involved anti-competitive lawsuits against Microsoft by the U.S. government.

As Netscape’s fortunes waned, Internet Explorer began to dominate the browser space and cross-platform innovation subsided for a while, even as JavaScript went through the standardization process. Java applets became widely used in some circles, but they ran in a sandboxed environment, so it was trickier to use them as the basis for driving dynamic web page activity. You could certainly use Sun’s graphics and user interface APIs to do productive and fun things, but they ran in a separate memory space from the HTML Document Object Model (DOM). They were incompatible and had different programming and event models. User interfaces did not look the same between the sandboxed elements and the web elements. It was, overall, a wholly unsuitable development experience.

Other non-standard technologies such as ActiveX became popular in the Microsoft web development space. Macromedia’s Flash became Adobe’s Flash and had a short but active period of popularity for about a decade. The problems remained with all of these secondary options, however. The memory spaces were walled off from each other and the security models were less robust than anyone had hoped. The engines were new and under constant development so bugs were common. ActiveX provided code-signing protections, but no sandboxing so rather terrifying attacks became possible if certificates could be forged.

Firefox emerged from Mozilla as a viable competitor from the ashes of Netscape. It and Google’s Chrome eventually became suitable alternatives to Internet Explorer. Each camp had its adherents, but there was a growing interest in solving the incompatibilities between them. The introduction of choice in the browser space forced each of the vendors to work harder and do better to outshine each other as a means of achieving technical dominance and attracting market share.

As a result, JavaScript engines got significantly faster. Even though HTML 4 was still “quirky” and painful to use across browsers and platforms, it was starting to be possible to isolate those differences. The combination of these developments and a desire to work within the structures of standards-based environments encouraged Jesse James Garrett20 to imagine a different approach to web development. He introduced the term Ajax, which stood for the combination of a set of standards: Asynchronous JavaScript and XML. The idea was to let data from backend systems flow into frontend applications that would respond dynamically to the new inputs. By working at the level of manipulating the DOM rather than having a separate, sandboxed user interface space, browsers could become universal application consumers in web-based client-server architectures.

The long-suffering HTML 5 Standardization process had begun during this period as well in an attempt to improve consistency across browsers, introduce new input elements and metadata models and provide hardware-accelerated 2D graphics and video elements among other features. The convergence of the Ajax style, the standardization and maturation of ECMAScript as a language, easier cross-browser support and an increasingly feature-rich web-based environment caused an explosion of activity and innovation. We have seen innumerable JavaScript-based application frameworks come and go, but there was a steady forward momentum in terms of what was possible. As developers pushed the envelope, the browser vendors would improve their engines to allow the envelopes to be pushed further still. It was a virtuous cycle that ushered in new visions of the potential for safe, portable, zero-installation software systems.

As other obstacles and limitations were removed, this strange little language at the heart of it all increasingly became a high-inertia drag on forward motion. The engines were becoming world-class development environments with better tools for debugging and performance analysis. New programming paradigms such as the Promise-based21 style allowed better-modularized and asynchronous-friendly application code to achieve powerful results in JavaScript’s notoriously single-threaded environment. But the language itself was incapable of the kinds of optimizations that were possible in other languages such as C or C++. There were simply limits on what was going to be possible from a language-performance perspective.

The web platform standards continued to advance with the development and adoption of technologies such as WebGL22 and WebRTC23. Unfortunately, JavaScript’s performance limitations made it ill-suited to extend the browsers with features involving low-level networking, multi-threaded code and graphics and streaming video codecs.

The platform’s evolution required the painful slog of the W3C member organizations deciding what was important to design and build and then rolling it out in the various browser implementations. As people became ever more interested in using the web as a platform for heavier-weight, interactive applications, this process was seen as increasingly untenable. Everything either had to be written (or rewritten) in JavaScript or the browsers had to standardize the behavior and interfaces. We collectively had to wait on the process, and it could take years to realize new advancements.

It was for these and other reasons that Google began to consider an alternative approach to safe, fast and portable client-side web development.

Native Client (NaCl)

In 2011, Google released a new open source project called Native Client (NaCl). The idea was to provide near-native speed execution of code in the browser while running in a limited-privilege sandbox for safety reasons. You can think of it a bit like ActiveX with a real security model behind it. The technology was a good fit for some of Google’s larger goals, such as supporting ChromeOS and moving things away from desktop applications into web applications. It was not necessarily meant to extend the capabilities of the open web for everyone, at least not initially.

The use cases were mainly to support browser-based delivery of computationally-intensive software such as:

  • games

  • audio and video editing systems

  • scientific computing and CAD systems

  • simulations

The initial focus was on C and C++ as source languages, but because it was based upon the LLVM24 compiler toolchain, it would be possible to support additional languages that could generate the LLVM Intermediate Representation (IR). This will be a recurring theme in our transition to WebAssembly as you will see.
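If you have clang installed, you can get a feel for what “generating the LLVM Intermediate Representation” looks like. The following command, shown here as a hypothetical session in the style of the examples later in this chapter, asks clang to stop before native code generation and emit the human-readable form of the IR instead:

brian@tweezer ~/s/w/ch01> clang -S -emit-llvm hello.c -o hello.ll

The resulting hello.ll file is the architecture-neutral middle ground that different backends, whether PNaCl, asm.js or, later, WebAssembly, can in principle translate into their own target formats.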

There were two forms of distributable code here. The first was the eponymous NaCl, which resulted in “nexe” modules that would target a specific architecture (e.g. ARM or x86-64) and could only be distributed through the Chrome Web Store. The other was a portable form called PNaCl25 that would be expressed in LLVM’s Bitcode format, making it target-independent. These were called “pexe” modules and would need to be transformed into a native architecture in the client’s host environment.

The technology was successful in the sense that the performance demonstrated in browser was only minimally off of native execution speeds. By using software fault isolation (SFI) techniques, they created the ability to download high performance, secure code from the web and run it in browsers. Several popular games such as Quake and Doom were compiled to this format to show what was ultimately possible. The problem was that the NaCl binaries would need to be generated and maintained for each target platform and would only run in Chrome. They also ran in an out-of-process space so they could not directly interact with other Web APIs or JavaScript code.

While running in limited-privilege sandboxes was achievable, it did require static validation of the binary files to ensure that they did not attempt to invoke operating system services directly. The generated code had to follow certain address-boundary alignment patterns to make sure it did not violate allocated memory spaces.

As indicated above, the PNaCl modules were more portable. The LLVM infrastructure could generate either the NaCl native code or the portable Bitcode without modifying the original source. This was a nice outcome, but there is a difference between code portability and application portability. Applications require the APIs that they rely upon to be available in order to work. Google provided an application binary interface (ABI) called the Pepper APIs26 for low-level services such as 3D graphics libraries, audio playback, file access (emulated over IndexedDB or LocalStorage) and more. While PNaCl modules could run in Chrome on different platforms because of LLVM, they could only run in browsers that provided suitable implementations of the Pepper APIs. While Mozilla had originally expressed interest in doing so, they eventually decided they wanted to try a different approach, which came to be known as asm.js. NaCl deserves a tremendous amount of credit for moving the industry in this direction, but it was ultimately too fiddly and too Chrome-specific to carry the open web forward. Mozilla’s attempt was more successful on that front, even if it did not provide the same level of performance that the native client approach did.

asm.js

The asm.js27 project was at least partially motivated by an attempt to bring a better gaming story to the web. This soon expanded to include a desire to allow arbitrary applications to be delivered securely to browser sandboxes without having to substantively modify the existing code.

As we have previously discussed, the browser ecosystem was already advancing to make 2D and 3D graphics, audio handling, hardware-accelerated video and more available in standards-based, cross-platform ways. The idea was that operating within that environment would allow applications to use any of those features which were defined to be invoked from JavaScript. The JavaScript engines were efficient and had robust sandboxed environments that had undergone significant security audits so no one felt like starting from scratch there. The real issue remained the inability to optimize JavaScript ahead-of-time (AoT) so runtime performance could be improved even further.

Because of its dynamic nature and lack of proper integer support, there were several performance obstacles that could not meaningfully be addressed until the code was loaded into the browser. Once that happened, Just-in-Time (JIT) optimizing compilers were able to speed things up nicely, but there were still inherent issues such as slow bounds-checked array references. While JavaScript in its entirety could not be optimized ahead-of-time, a subset of it could be.

The exact details of what that means are not super relevant to our historical narrative, but the end result is. asm.js also used the LLVM-based clang28 front-end parser via the Emscripten29 toolchain. Compiled C and C++ code is very optimizable ahead-of-time, so the generated instructions could be made very fast through existing optimization passes. LLVM represents a clean, modular architecture, so pieces of it can be replaced, including the backend generation of machine code. In essence, they could reuse the first two stages (parsing and optimization) and then emit this subset of JavaScript via a custom backend. Because the output was all “just JavaScript”, it would be much more portable than the NaCl/PNaCl approach. The tradeoff, unfortunately, was in performance. It represented a significant improvement over straight JavaScript, but was not nearly as performant as Google’s approach. It was still good enough to amaze developers, however. Beyond the modest performance improvements, the mere fact that you could deploy existing C and C++ applications into a browser with reasonable performance and virtually no code changes was compelling. While there were extremely compelling demos involving the Unity engine30, let’s look at a simple example. “Hello, World!” seems like a good place to start.

#include <stdio.h>
int main() {
  printf("Hello, world!
");
  return 0;
}

Notice there is nothing unusual about this version of the classic program. If you stored it in a file called hello.c, the Emscripten toolchain would allow you to emit a file called a.out.js, which can be run directly in Node.js or, via some scaffolding, in a browser.

brian@tweezer ~/s/w/ch01> emcc hello.c
brian@tweezer ~/s/w/ch01> node a.out.js
Hello, world!

Pretty cool, no?

There’s only one problem.

brian@tweezer ~/s/w/ch01> ls -laF a.out.js
-rw-r--r--  1 brian  staff  116450 Aug 17 19:17 a.out.js

That is an awfully large hello world program! A quick look at the native executable might give you a sense of what is going on.

brian@tweezer ~/s/w/ch01> clang hello.c
brian@tweezer ~/s/w/ch01> ls -alF a.out
total 320
drwxr-xr-x  6 brian  staff     192 Aug 17 19:23 ./
drwxr-xr-x  3 brian  staff      96 Aug 17 19:17 ../
-rwxr-xr-x  1 brian  staff   12556 Aug 17 19:23 a.out*

Why is our supposedly optimized JavaScript program ten times bigger than the native version? It is not just that JavaScript, as a text-based format, is more verbose. Look at the program again:

#include <stdio.h> 1
int main() {
  printf("Hello, world!
"); 2
  return 0;
}
1

The header pulls in the declarations for the standard IO-related functions such as printf.

2

The reference to the printf function will be satisfied by a dynamic library loaded at runtime.

If we look at the symbols defined in the compiled executable, we will see that the definition of the printf function is not contained in the binary. It is marked as “undefined”.

brian@tweezer ~/s/w/ch01> nm -a a.out
0000000100002008 d __dyld_private
0000000100000000 T __mh_execute_header
0000000100000f50 T _main
                 U _printf
                 U dyld_stub_binder

When clang generated the executable, it left a placeholder reference to the function that it expects to be provided by the operating system. There is no standard library available in this way for a browser, at least not in the dynamically-loadable sense, so that library function and anything it needs also have to be provided. Additionally, this version cannot talk directly to the console in a browser, so it will need to be given hooks to call into the browser’s console.log functionality or some other available mechanism. In order to work in the browser, then, the functionality has to be shipped with the application, which is why it ends up being so big.

This highlights nicely the difference between portable code and portable applications, which will be a common theme in this book. For now, we can marvel that it works at all, but there is a reason this book is not called “asm.js: The Definitive Guide”. It was a remarkable stepping stone that demonstrated it was possible to generate reasonably performant, sandboxed JavaScript code from various optimizable languages. The JavaScript subset itself could be optimized further in ways that the full language could not. By being able to generate this subset through LLVM-based toolchains and a custom backend, the level of effort was much smaller than it might otherwise have been.

asm.js represents a nice fallback position for browsers that do not support the WebAssembly standards, but it is now time to set the stage for the subject of the book.

Rise of WebAssembly

With NaCl, we found a solution that provided sandboxing and performance. With PNaCl, we also found platform portability, but not browser portability. With asm.js, we found browser portability and sandboxing, but not the same level of performance. We were also limited to dealing with JavaScript, which meant we could not extend the platform with new features (e.g. efficient 64-bit integers) without first changing the JavaScript language. Given that the language was governed by an international standards organization, that was unlikely to be a quick process.

Additionally, JavaScript has certain issues with how browsers load and validate it from the web. The browser has to wait until it finishes downloading all of the referenced files before it starts to validate and optimize them (and further optimizations have to wait until the application is already running). Given what we have already said about how developers encumber their applications with ridiculously large amounts of transitive dependencies, the network-transfer and load-time performance of JavaScript is another bottleneck to overcome beyond the established run-time issues.

After seeing what was possible with these partial solutions, the industry developed a strong appetite for high-performance, sandboxed, portable code. Various stakeholders in the browser, web standards and JavaScript communities felt a need for a solution that worked within the confines of the existing ecosystem. There had been a tremendous amount of work to get the browsers as far as they had gotten. It was entirely possible to create dynamic, attractive and interactive applications across operating system platforms and browser implementations. With just a bit more effort, it seemed possible to merge these visions together into a unifying, standards-based approach.

It was under these circumstances in 2015 that none other than Brendan Eich, the creator of JavaScript, announced that work had begun on WebAssembly31. He highlighted a few specific reasons for the effort and called it a “binary syntax for low-level safe code, initially co-expressive with asm.js, but in the long run able to diverge from JS’s semantics, in order to best serve as common object-level format for multiple source-level programming languages.”

He continued: “Examples of possible longer-term divergence: zero-cost exceptions, dynamic linking, call/cc. Yes, we are aiming to develop the Web’s polyglot-programming-language object-file format.”

As to why these various parties were interested in this, he offered this justification: “asm.js is great, but once engines optimize for it, the parser becomes the hot spot — very hot on mobile devices. Transport compression is required and saves bandwidth, but decompression before parsing hurts.”

And finally, perhaps the most surprising part of the announcement was who was to be involved: “A W3C Community Group, the WebAssembly CG, open to all. As you can see from the github logs, WebAssembly has so far been a joint effort among Google, Microsoft, Mozilla and a few other folks. I’m sorry the work was done via a private github account at first, but that was a temporary measure to help the several big companies reach consensus and buy into the long-term cooperative game that must be played to pull this off.”

In short order, other companies such as Apple, Adobe, AutoCAD, Unity, Figma and more got behind the effort. Inexplicably, this vision that had started decades before and had involved no end of conflict and self-interest was transforming into a unified initiative to finally bring us a safe, fast, portable and web-compatible runtime environment.

There was no end to the potential confounding complexities involved in bringing this platform into existence. It was not entirely clear exactly what should be specified upfront. Not every language supported threads natively. Not every language uses exceptions. C/C++ and Rust were examples of languages that had no equivalent runtime doing garbage collection. The Devil is always in the details, but the will to collaborate was there. And, as they say, where there is a will, there is a way.

Over the next year or so, the CG became a W3C Working Group (WG), which was tasked with defining actual standards. They made a series of decisions to define a Minimum Viable Product (MVP) WebAssembly platform that would be supported by all major browser vendors. Additionally, the Node.js community was excited, as this could provide a solution to the drudgery of managing native libraries for the portions of Node applications that needed to be written in a lower-level language. Rather than having dependencies on Windows, Linux and macOS libraries, a Node application could have a WebAssembly library that could be loaded into the V8 environment and converted to native assembly code on the fly. Suddenly WebAssembly seemed poised to move beyond the goal of deploying code in browsers, but let’s not get ahead of ourselves. We have the rest of this book to tell you that part of the story.

1 https://en.wikipedia.org/wiki/No_Silver_Bullet

2 http://dx.doi.org/10.1145/3062341.3062363

3 https://www.opengl.org

4 https://en.wikipedia.org/wiki/POSIX

5 https://en.wikipedia.org/wiki/Windows_API

6 https://en.wikipedia.org/wiki/Satisficing

7 https://en.wikipedia.org/wiki/Npm_(software)#Notable_breakages

8 Best anyone can tell it was https://twitter.com/jfbastien who first said it, but even he is not sure.

9 https://home.cern

10 On his own time!

11 https://en.wikipedia.org/wiki/Archie_(search_engine)

12 https://en.wikipedia.org/wiki/Gopher_(protocol)

13 https://en.wikipedia.org/wiki/Wide_area_information_server

14 https://en.wikipedia.org/wiki/Standard_Generalized_Markup_Language

15 I bought a license for Netscape 1.0 for Silicon Graphics IRIX at the time. I still have the CD floating around some place for… historical reasons.

16 https://auth0.com/blog/a-brief-history-of-javascript/

17 https://en.wikipedia.org/wiki/Autometric

18 https://www.google.com/earth

19 https://en.wikipedia.org/wiki/Browser_wars

20 https://en.wikipedia.org/wiki/Jesse_James_Garrett

21 https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise

22 https://en.wikipedia.org/wiki/WebGL

23 https://en.wikipedia.org/wiki/WebRTC

24 https://llvm.org

25 Pronounced “pinnacle”.

26 Because, NaCl… get it?

27 https://asmjs.org

28 https://clang.llvm.org

29 https://emscripten.org

30 https://beta.unity3d.com/jonas/AngryBots/

31 https://brendaneich.com/2015/06/from-asm-js-to-webassembly/
