CHAPTER 11


Appendix

It was difficult to narrow down the content for this book; with a title like JavaScript Creativity, it could have gone on forever! So this appendix is fairly unstructured: a few random tidbits that I wanted to mention but couldn’t fit within the main bulk of the book.

The Future

We are at a very good point on the web. We no longer have to cope with many of the limitations that we’ve been used to over the years (ancient browsers, dial-up, and so on), but more importantly, we have access to more than ever before. Browsers don’t just have better CSS; they now have features such as local storage and device APIs. We made heavy use of the Web Audio API and WebRTC in this book, but there are many others. The laptop I am writing this on (a MacBook Pro) has an accelerometer that I can read using JavaScript. It may not be the most useful thing ever, but somebody might find a clever use for it (measuring the G-force of the train that I’m writing this on, perhaps?).

I named this book JavaScript Creativity not because you can make pretty things in canvas (although that is still an incredible feature to have), but because of the creative ways you can use web technologies these days. A webcam-controlled piano that generates notes on the fly? It is by no means perfect, but we’ve done that! A website that uses GPS to give you relevant news about the area you’re in? Absolutely doable. In Chapter 2 we produced line art from a photo and turned that into a coloring book app. These things just were not possible even a few years ago.

There are a lot of features being added to the web (or at least being written about in specifications) that don’t have obvious use cases. Of course, every feature is proposed for a reason, but some of them (such as Device Proximity) could be put to a number of interesting, non-obvious uses. And that is a very good thing; that is “JavaScript Creativity”. In fact, I’ve seen robots that can navigate buildings, avoiding objects, and even some that can fly, all written using JavaScript.

The key to creating future technology is, and always has been, to ask “what if” questions. What if I could control a keyboard with my fingers? What if a news website knew my location? Not everything has an obvious answer, and I’ve even argued in the past that not everything should be made. But it is always worth considering circumstances and ideas, even if they seem outlandish at first.

Further Reading

While I was writing this book, I came across quite a few incredibly interesting articles and books that you may be interested in. They are not all directly related to the chapters in this book, but they may be worth reading anyway. Some are very technical, such as the intricacies of JavaScript or a particular algorithm, whilst others are more conceptual. As I’ve said throughout the book, you should not learn how to make one particular project but rather how to use the techniques in a variety of situations, so the conceptual and high-level articles (although not directly within the scope of this book) are worth reading. I will keep a more up-to-date list on my website at www.shanehudson.net/javascript-creativity/, as well as other resources for the book.

Algorithms

There are a number of algorithms mentioned throughout the book, as well as many others that are worth reading about if you are interested in the more theoretical side of things. More often than not, you can use prewritten implementations, such as the js-objectdetect library we used in Chapter 9, but a lot of you will probably (as I do) have a hunger to learn the ins and outs of what you are working with. As algorithms are mostly published in academic journals, they may not be easily accessible online, but there are usually articles that do explain them (often written for university lectures). I will give a brief overview of a few algorithms that I recommend you learn more about, but I will avoid touching on the mathematical aspects.

In Chapter 2, we used a naïve edge detection algorithm to convert photos to line art. We did this by averaging the neighbors of each pixel and comparing them to a threshold. As I’m sure you can imagine, this is very inefficient. The final outcome was good enough, but far from perfect. In the chapter I mentioned that there is a better way using Canny Edge Detection, which was developed in 1986 by John F. Canny. It is a five-step algorithm. The first step is to apply a Gaussian filter, a form of blur, to smooth the image so that there is less noise; for example, this would smooth the grain of wood on a table so that the edges of the table itself are easier to detect. The second step is to find gradients. These are found using the Sobel operator, which gives you both the magnitude and direction of the gradients. The next step is known as “non-maximum suppression,” which essentially means converting blurred edges into sharp edges; this is done by removing all low values of gradients. The fourth step uses thresholds. We used a threshold in our naïve edge detection algorithm; thresholds are useful for lowering the chance of noise. The difference between our version and Canny’s is that his uses two, known as double thresholding: values above the high threshold count as strong edges, values below the low threshold are discarded, and anything in between is a weak edge that is likely to be noise. This makes it more accurate at finding edges that are not obvious. Lastly, to rule out as much noise as possible, weak edges are only counted if they are attached to strong edges.
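The last two steps (double thresholding and edge tracking by hysteresis) can be sketched in a few lines. This is a minimal illustration, not a full Canny implementation: the gradient magnitudes and threshold values below are made up, and for simplicity it works on a 1D array rather than a 2D image.

```javascript
// Canny-style double thresholding with hysteresis (1D sketch).
// Values above `high` are strong edges, values below `low` are
// discarded, and values in between are kept only if they touch
// a strong edge.
function doubleThreshold(magnitudes, low, high) {
  // First pass: classify each value.
  const labels = magnitudes.map(function (m) {
    if (m >= high) return 'strong';
    if (m >= low) return 'weak';
    return 'none';
  });
  // Second pass: keep weak edges only when a neighbor is strong.
  return labels.map(function (label, i) {
    if (label === 'strong') return true;
    if (label === 'weak') {
      return labels[i - 1] === 'strong' || labels[i + 1] === 'strong';
    }
    return false;
  });
}

// A weak value (60) next to a strong one (200) survives as an edge;
// an isolated weak value (70) is discarded as noise.
console.log(doubleThreshold([10, 200, 60, 5, 70, 5], 50, 150));
// → [false, true, true, false, false, false]
```

A real implementation would do the neighbor check in two dimensions (eight neighbors per pixel), but the principle is the same.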

While I’m talking about algorithms, I would like to quickly remind you about the Fast Fourier Transform (FFT) that we used in Chapter 3 through the Web Audio API. You don’t need to implement it yourself, as FFT is part of the Web Audio API specification, so I won’t go into detail (besides, it is heavily math-intensive), but it is worth remembering, and some of you may be keen to delve further into Fourier transforms. Essentially these transforms, when applied to audio (there are many other applications), retrieve frequency data from a snippet of sound. We used it to visualize audio, but that is barely scratching the surface of what you can do with direct access to such raw data.
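To show what the AnalyserNode is doing for you behind the scenes, here is a naïve discrete Fourier transform that computes a magnitude for each frequency bin of a short sample buffer. The eight-sample sine wave is made up for illustration, and this direct approach is O(n²), which is exactly why the “fast” variant exists; in a real page you would simply read the precomputed bins from an AnalyserNode.

```javascript
// Naive discrete Fourier transform: for each frequency bin k,
// correlate the samples against a cosine (real part) and a sine
// (imaginary part) of that frequency, then take the magnitude.
function dftMagnitudes(samples) {
  const n = samples.length;
  const magnitudes = [];
  for (let k = 0; k < n; k++) {
    let re = 0;
    let im = 0;
    for (let t = 0; t < n; t++) {
      const angle = (-2 * Math.PI * k * t) / n;
      re += samples[t] * Math.cos(angle);
      im += samples[t] * Math.sin(angle);
    }
    magnitudes.push(Math.sqrt(re * re + im * im));
  }
  return magnitudes;
}

// A pure sine wave that completes one cycle over 8 samples puts all
// of its energy into bin 1 (and its mirror image, bin 7).
const samples = [];
for (let t = 0; t < 8; t++) samples.push(Math.sin((2 * Math.PI * t) / 8));
console.log(dftMagnitudes(samples).map(function (m) { return Math.round(m); }));
// → [0, 4, 0, 0, 0, 0, 0, 4]
```

This is why a single sustained note shows up as one tall bar in the visualizers we built: its energy lands almost entirely in one frequency bin.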

The final algorithm mentioned in the chapters was the Viola-Jones object recognition framework (it is classed as a framework because the original paper did not describe an implementation of the algorithm, just the components that are required); in explaining it, I also briefly described Haar-like features. I will not go into further explanation here, but do look further into Viola-Jones and perhaps even try writing your own implementation of it.
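One of those required components is worth a quick sketch: the integral image (summed-area table), which lets Viola-Jones read the sum of any rectangle in four lookups, making it feasible to evaluate thousands of Haar-like features per window. The 4×4 “image” below is made up for illustration; real detectors work on grayscale pixel values.

```javascript
// Build a summed-area table: sat[y][x] holds the sum of all pixels
// above and to the left of (x, y). An extra row and column of zeros
// keeps the lookups simple.
function integralImage(pixels) {
  const h = pixels.length;
  const w = pixels[0].length;
  const sat = [];
  for (let y = 0; y <= h; y++) sat.push(new Array(w + 1).fill(0));
  for (let y = 1; y <= h; y++) {
    for (let x = 1; x <= w; x++) {
      sat[y][x] = pixels[y - 1][x - 1] +
                  sat[y - 1][x] + sat[y][x - 1] - sat[y - 1][x - 1];
    }
  }
  return sat;
}

// Sum of the rectangle with top-left corner (x, y), width w, height h,
// read in just four lookups regardless of the rectangle's size.
function rectSum(sat, x, y, w, h) {
  return sat[y + h][x + w] - sat[y][x + w] - sat[y + h][x] + sat[y][x];
}

// A two-rectangle Haar-like feature: bright left half minus dark
// right half, which responds strongly to a vertical edge.
const pixels = [
  [9, 9, 1, 1],
  [9, 9, 1, 1],
  [9, 9, 1, 1],
  [9, 9, 1, 1],
];
const sat = integralImage(pixels);
console.log(rectSum(sat, 0, 0, 2, 4) - rectSum(sat, 2, 0, 2, 4));
// → 64
```

The framework then feeds thousands of such feature values into a cascade of boosted classifiers, which is where the real work of detection happens.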

Links

Throughout the book I’ve linked to various resources; I’ve collected them here so that they are easy to find.

Chapter 1

  • www.html5please.com—HTML5 Please gives useful advice on whether a feature should be used in production, or if there are available polyfills.
  • www.caniuse.com—Can I Use gives direct access to browser support data, down to browser versions and inconsistencies between implementations and the specification.

Chapter 3

Chapter 4

  • mrdoob.github.io/three.js/—Three.js is a widely used wrapper for WebGL, which is used for 3D on the web.
  • typeface.neocracy.org—Here you can find the Helvetiker font that is used as the default font by Three.js.

Chapter 5

  • www.svgjs.com—This is a lightweight SVG library, which we used in Chapter 5 to create the timeline in the music player.
  • github.com/mattdiamond/Recorderjs—Matt Diamond’s Recorder.js is a script that uses Web Workers to allow recording of Web Audio API nodes.

Chapter 6

Chapter 7

  • www.nodejs.org—This is the official site for Node.js, the runtime used for all server-side code throughout the book.
  • www.github.com/creationix/nvm—Node Version Manager (NVM) is recommended for handling multiple versions of Node.js.
  • www.nodejs.org/api/index.html—This is the Node.js documentation, which can also be found from the front page of the main Node.js site.

Chapter 8

  • www.ietf.org/rfc/rfc5245.txt—RFC 5245 goes into a lot of detail about Interactive Connectivity Establishment (ICE). It is a heavy read, but very useful when dealing with WebRTC.

Chapter 9

  • www.github.com/mtschirs/js-objectdetect—This library, written by Martin Tschirsich, is a JavaScript implementation based on the Viola-Jones algorithm. It makes object detection easy without getting bogged down in implementation details.