Being able to play videos directly inside a browser without having to worry about plugins is quite a joyous experience. Not only that, but since the video element is actually a native part of the DOM, that means we can also deal with it the same way as we do with all other DOM elements. In other words, we can apply CSS styles to a video element and the browser is more than happy to make things work for us. For example, suppose we want to create the effect of the video being played on a shiny surface, where the video reflects vertically and the reflection fades out, blending into the background, as in the following screenshot:
Since the browser is in charge of rendering the video, as well as applying CSS styles and effects to all elements being managed by it, we don't have to worry about the logic involved in rendering a video with special effects added to it. Keep in mind, however, that the more CSS we throw on top of the video, the more work the browser will have to do to make the video look the way we want, which may quickly affect performance. However, if all we're adding to the video is a simple detail here and there, then most modern web browsers will have no problem rendering everything at full speed.
<style>
  video {
    -webkit-box-reflect: below 1px;
    -webkit-transition: all 1.5s;
    -webkit-filter: contrast(250%);
  }

  div {
    position: relative;
  }

  div img {
    position: absolute;
    left: 0;
    top: 221px;
    width: 400px;
    height: 220px;
  }
</style>

<div>
  <video controls width="400" height="220" poster="bunny-poster.png">
    <!-- Video courtesy of http://www.bigbuckbunny.org -->
    <source src="bunny.ogg" type="video/ogg" />
    <source src="bunny.mp4" type="video/mp4" />
    <source src="bunny.webm" type="video/webm" />
  </video>
  <img src="semi-transparent-mask.png" />
</div>
Similar to the new HTML5 audio element, there are more or less two ways we can use the video tag. One way is to simply create the HTML node, specify the same properties as for the audio tag, specify one or more source nodes, and call it a day. Alternatively, we can use the JavaScript API available to us and programmatically manipulate the playback of the video file.
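Since browsers differ in which formats they can decode, the reason for listing multiple source nodes is that the browser walks the list and picks the first type it can play. The same decision can be sketched as a small helper; `pickSource` is a hypothetical name introduced here for illustration, and the support check is passed in as a function (in a browser it would be something like `video.canPlayType.bind(video)`) so the sketch stays self-contained:

```javascript
// Pick the first source whose MIME type the player claims to support.
// "supports" stands in for the element's canPlayType method, which
// returns "", "maybe", or "probably" - any non-empty answer means
// the browser thinks it can decode the format.
function pickSource(sources, supports) {
  for (var i = 0; i < sources.length; i++) {
    if (supports(sources[i].type) !== "") {
      return sources[i];
    }
  }
  return null; // no playable source found
}

// Simulated canPlayType: pretend only WebM is supported
var fakeCanPlayType = function (type) {
  return type === "video/webm" ? "probably" : "";
};

var chosen = pickSource([
  {src: "bunny.ogg", type: "video/ogg"},
  {src: "bunny.mp4", type: "video/mp4"},
  {src: "bunny.webm", type: "video/webm"}
], fakeCanPlayType);
// chosen.src is "bunny.webm"
```

In a real page we would never call this ourselves when using source nodes; the browser performs the equivalent selection automatically.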
// Step 1: Create the video object
var video = document.createElement("video");
video.width = 400;
video.height = 220;
video.controls = true;
video.poster = "bunny-poster.png";

// Step 2: Add one or more sources
var sources = [
  {src: "bunny.ogg", type: "video/ogg"},
  {src: "bunny.mp4", type: "video/mp4"},
  {src: "bunny.webm", type: "video/webm"}
];

for (var i = 0; i < sources.length; i++) {
  var source = document.createElement("source");
  source.src = sources[i].src;
  source.type = sources[i].type;
  video.appendChild(source);
}

// Step 3: Make the video player visible
document.body.appendChild(video);
We can also ignore the default controls and manage the playing, pausing, volume adjusting, and so on, on our own by taking advantage of the attributes available on the JavaScript object that references the video element. The following is a list of attributes we can access on a video object:
- autoplay (Boolean)
- currentTime (float, in seconds)
- paused (Boolean)
- controls (Boolean)
- muted (Boolean)
- width (integer)
- height (integer)
- videoWidth (integer, read only)
- videoHeight (integer, read only)
- poster (string, an image URI)
- duration (float, in seconds, read only)
- loop (Boolean)
- currentSrc (string)
- preload (string: "none", "metadata", or "auto")
- seeking (Boolean)
- playbackRate (float, where 1.0 is normal speed)
- ended (Boolean)
- volume (float, between 0.0 and 1.0 inclusive)

For more information on events, visit the W3C Candidate Recommendation on media events at http://www.w3.org/TR/html5/embedded-content-0.html#mediaevents
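To see how these attributes combine into custom controls, consider the sketch below. Only `paused`, `ended`, `currentTime`, `duration`, and the `play()`/`pause()` methods come from the media API itself; `togglePlayback` and `formatTime` are illustrative helper names, and the element ids in the wiring example are assumptions about the page:

```javascript
// Toggle between playing and paused using the element's own state
function togglePlayback(video) {
  if (video.paused || video.ended) {
    video.play();
  } else {
    video.pause();
  }
}

// Format a currentTime/duration value (in seconds) as m:ss for display
function formatTime(seconds) {
  var m = Math.floor(seconds / 60);
  var s = Math.floor(seconds % 60);
  return m + ":" + (s < 10 ? "0" + s : s);
}

// Wire a button and a time readout to the video element
// (assumes elements with ids "toggle" and "elapsed" exist on the page)
if (typeof document !== "undefined") {
  var video = document.querySelector("video");
  document.getElementById("toggle").addEventListener("click", function () {
    togglePlayback(video);
  });
  video.addEventListener("timeupdate", function () {
    document.getElementById("elapsed").textContent =
      formatTime(video.currentTime) + " / " + formatTime(video.duration);
  });
}
```

The timeupdate event fires periodically as playback progresses, which makes it a natural place to refresh any time display driven by currentTime.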
One final reason that you should be excited about the new HTML5 video element is that each frame of the video can be rendered right into a canvas 2D rendering context, just as if a single frame were a standalone image. This way, we are able to do video processing right in the browser. Unfortunately, there is no video.toDataURL equivalent where we could export the video created by our JavaScript application.
// Two canvas contexts: one visible, one offscreen
var ctx = null;
var ctxOff = null;
var canvas = null;
var canvasOff = null;

// The video element is created but never attached to the DOM
var video = document.createElement("video");

var poster = new Image();
poster.src = "bunny-poster.jpg";
poster.addEventListener("click", initVideo);
document.body.appendChild(poster);

// Step 1: When the video plays, call our custom drawing function
video.autoplay = false;
video.loop = false;

// Step 2: Add one or more sources
var sources = [
  {src: "bunny.ogg", type: "video/ogg"},
  {src: "bunny.mp4", type: "video/mp4"},
  {src: "bunny.webm", type: "video/webm"}
];

for (var i = 0; i < sources.length; i++) {
  var source = document.createElement("source");
  source.src = sources[i].src;
  source.type = sources[i].type;
  video.appendChild(source);
}

// Step 3: Initialize the video
function initVideo() {
  video.addEventListener("play", initCanvas);
  video.play();
}

// Step 4: Only initialize our canvases once
function initCanvas() {
  if (ctx == null) {
    canvas = document.createElement("canvas");
    canvasOff = document.createElement("canvas");
    canvas.width = canvasOff.width = video.videoWidth;
    canvas.height = canvasOff.height = video.videoHeight;
    ctx = canvas.getContext("2d");
    ctxOff = canvasOff.getContext("2d");

    // Make the canvas - not the video player - visible
    poster.parentNode.removeChild(poster);
    document.body.appendChild(canvas);
  }

  renderOnCanvas();
}

function renderOnCanvas() {
  // Draw the frame to the canvas if the video is still playing
  if (!video.paused && !video.ended) {
    // Draw the original frame to the offscreen canvas
    ctxOff.drawImage(video, 0, 0, canvas.width, canvas.height);

    // Manipulate the frame offscreen
    var frame = getVideoFrame();

    // Draw the new frame to the visible canvas
    ctx.putImageData(frame, 0, 0);

    requestAnimationFrame(renderOnCanvas);
  }
}

function getVideoFrame() {
  var img = ctxOff.getImageData(0, 0, canvas.width, canvas.height);

  // Invert the color of every pixel in the canvas context
  for (var i = 0, len = img.data.length; i < len; i += 4) {
    img.data[i] = 255 - img.data[i];
    img.data[i + 1] = 255 - img.data[i + 1];
    img.data[i + 2] = 255 - img.data[i + 2];
  }

  return img;
}
The idea is to play the video offscreen, meaning that the actual video player is never attached to the DOM. The video still plays, but the browser never needs to blit each frame to the screen (the frames are only decoded in memory). As each frame is played, we draw that frame to a canvas context (just like we do with images), take the pixels from the canvas context, manipulate the pixel data, then finally draw it back onto the canvas.
Since a video is nothing more than a sequence of frames played one after the other, giving the illusion of animation, we can extract each frame from an HTML5 video and use it with the canvas API just like any other image. Since there isn't a way to draw to the video element, we simply keep on drawing each frame from the video player into a plain canvas object, achieving the same result—but with carefully crafted pixels. The following screenshot illustrates the result of this technique:
One way to achieve this result is to create two canvas elements. If we only drew to a single canvas (draw the frame from the video, manipulate that frame, draw the next frame, and so on), each customized frame would only be visible for a fraction of a second: it would be overwritten as soon as we drew the next incoming frame, which in turn would only be visible for as long as it took to loop through that frame's pixel data and redraw it. You get the idea: the result would be messy, and not at all what we want.
So instead we use two canvas contexts. One context will be in charge of only displaying the pixels we work on (that is, the manipulated pixels), and the other context will never be visible to the user and will serve the purpose of holding each frame as it comes straight from the video. This way, we're only drawing to our main, visible canvas once per iteration, and all that's ever displayed in this canvas context is the manipulated pixels. The original pixels (that is, the pixels from the original video that's playing in memory) will continue to be streamed to the offscreen canvas context as fast as they can.
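The manipulation step itself is independent of where the frames come from: it is just a loop over an RGBA byte array. The inversion performed inside getVideoFrame can be isolated as a pure function over any such array; `invertPixels` is an illustrative name introduced here, not part of the canvas API:

```javascript
// Invert the RGB channels of an RGBA pixel buffer in place, leaving
// the alpha channel untouched - the same loop that getVideoFrame
// runs on each frame's ImageData.data array.
function invertPixels(data) {
  for (var i = 0, len = data.length; i < len; i += 4) {
    data[i] = 255 - data[i];         // red
    data[i + 1] = 255 - data[i + 1]; // green
    data[i + 2] = 255 - data[i + 2]; // blue
    // data[i + 3] (alpha) is left as-is
  }
  return data;
}

// One opaque white pixel becomes an opaque black pixel
var pixel = invertPixels([255, 255, 255, 255]);
// pixel is [0, 0, 0, 255]
```

Any other per-pixel effect (grayscale, sepia, thresholding) would slot into the same place in the pipeline: grab the ImageData from the offscreen context, transform its data array, and put the result onto the visible canvas.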