Video

Being able to play videos directly inside a browser without having to worry about plugins is quite a joyous experience. Not only that, but since the video element is actually a native part of the DOM, that means we can also deal with it the same way as we do with all other DOM elements. In other words, we can apply CSS styles to a video element and the browser is more than happy to make things work for us. For example, suppose we want to create the effect of the video being played on a shiny surface, where the video reflects vertically and the reflection fades out, blending into the background, as in the following screenshot:

[Screenshot: the video playing above its fading vertical reflection]

Since the browser is in charge of rendering the video, as well as applying CSS styles and effects to all elements being managed by it, we don't have to worry about the logic involved in rendering a video with special effects added to it. Keep in mind, however, that the more CSS we throw on top of the video, the more work the browser will have to do to make the video look the way we want, which may quickly affect performance. However, if all we're adding to the video is a simple detail here and there, then most modern web browsers will have no problem rendering everything at full speed.

<style>
video {
  -webkit-box-reflect: below 1px;
  -webkit-transition: all 1.5s;
}

video {
  -webkit-filter: contrast(250%);
}

div {
  position: relative;
}

div img {
  position: absolute;
  left: 0;
  top: 221px;
  width: 400px;
  height: 220px;
}
</style>

<div>
  <video controls width="400" height="220"
    poster="bunny-poster.png">
    <!-- Video courtesy of http://www.bigbuckbunny.org -->
    <source src="bunny.ogg" type="video/ogg" />
    <source src="bunny.mp4" type="video/mp4" />
    <source src="bunny.webm" type="video/webm" />
  </video>
  <img src="semi-transparent-mask.png" />
</div>

Similar to the new HTML5 audio element, there are essentially two ways we can use the video tag. One way is to simply create the HTML node, specify the same properties as for the audio tag, add one or more source nodes, and call it a day. Alternatively, we can use the JavaScript API available to us and programmatically manipulate the playback of the video file.

// Step 1: Create the video object
var video = document.createElement("video");
video.width = 400;
video.height = 220;
video.controls = true;
video.poster = "bunny-poster.png";

// Step 2: Add one or more sources
var sources = [
  {src: "bunny.ogg", type: "video/ogg"},
  {src: "bunny.mp4", type: "video/mp4"},
  {src: "bunny.webm", type: "video/webm"}
];

for (var i = 0; i < sources.length; i++) {
  var source = document.createElement("source");
  source.src = sources[i].src;
  source.type = sources[i].type;

  video.appendChild(source);
}

// Step 3: Make video player visible
document.body.appendChild(video);

We can also ignore the default controls and manage the playing, pausing, volume adjusting, and so on, on our own by taking advantage of the attributes available on the JavaScript object that references the video element. The following is a list of attributes we can read or set on a video object, followed by the events it fires.

Attributes

  • autoplay (Boolean)
  • currentTime (float—in seconds)
  • paused (Boolean—read only)
  • controls (Boolean)
  • muted (Boolean)
  • width (integer)
  • height (integer)
  • videoWidth (integer—read only)
  • videoHeight (integer—read only)
  • poster (string—an image URI)
  • duration (float—in seconds, read only)
  • loop (Boolean)
  • currentSrc (string—read only)
  • preload (string—"none", "metadata", or "auto")
  • seeking (Boolean—read only)
  • playbackRate (float)
  • ended (Boolean—read only)
  • volume (float—between 0.0 and 1.0 inclusive)
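Putting a few of these attributes to work, we can sketch our own minimal controls. The following is only a sketch: the helper names (togglePlayback, setVolume, formatTime) are our own inventions, not part of any browser API, and they assume a video element created as in the earlier examples:

```javascript
// Toggle between the playing and paused states using the
// paused attribute and the play()/pause() methods
function togglePlayback(video) {
  if (video.paused) {
    video.play();
  } else {
    video.pause();
  }
}

// Set the volume, clamped to the valid 0.0-1.0 range
function setVolume(video, value) {
  video.volume = Math.min(1, Math.max(0, value));
  return video.volume;
}

// Format a time in seconds as "m:ss" for a custom progress display,
// for example from the currentTime or duration attributes
function formatTime(seconds) {
  var m = Math.floor(seconds / 60);
  var s = Math.floor(seconds % 60);
  return m + ":" + (s < 10 ? "0" + s : s);
}
```

Wired to our own buttons and sliders, helpers like these replace the browser's default controls entirely, so the controls attribute can simply be left off.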

Events

loadstart

The user agent begins looking for media data, as part of the resource selection algorithm.

progress

The user agent is fetching media data.

suspend

The user agent is intentionally not currently fetching media data.

abort

The user agent stops fetching the media data before it is completely downloaded, but not due to an error.

error

An error occurs while fetching the media data.

emptied

A media element whose networkState was previously not in the NETWORK_EMPTY state has just switched to that state (either because of a fatal error during load that's about to be reported, or because the load() method was invoked while the resource selection algorithm was already running).

stalled

The user agent is trying to fetch media data, but data is unexpectedly not forthcoming.

loadedmetadata

The user agent has just determined the duration and dimensions of the media resource and the text tracks are ready.

loadeddata

The user agent can render the media data at the current playback position for the first time.

canplay

The user agent can resume playback of the media data, but estimates that if playback were to be started now, the media resource could not be rendered at the current playback rate up to its end without having to stop for further buffering of content.

canplaythrough

The user agent estimates that if playback were to be started now, the media resource could be rendered at the current playback rate all the way to its end without having to stop for further buffering.

playing

Playback is ready to start after having been paused or delayed due to lack of media data.

waiting

Playback has stopped because the next frame is not available, but the user agent expects that frame to become available in due course.

seeking

The seeking IDL attribute changed to true.

seeked

The seeking IDL attribute changed to false.

ended

Playback has stopped because the end of the media resource was reached.

durationchange

The duration attribute has just been updated.

timeupdate

The current playback position changed as part of normal playback or in an especially interesting way, for example, discontinuously.

play

The element is no longer paused. Fired after the play() method has returned, or when the autoplay attribute has caused playback to begin.

pause

The element has been paused. Fired after the pause() method has returned.

ratechange

Either the defaultPlaybackRate attribute or the playbackRate attribute has just been updated.

volumechange

Either the volume attribute or the muted attribute has changed. Fired after the relevant attribute's setter has returned.

Note

For more information on events, visit W3C Candidate Recommendation Media Events at http://www.w3.org/TR/html5/embedded-content-0.html#mediaevents
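To see a few of these events in action, we can attach listeners to a video object. The sketch below assumes a video element created as in the earlier examples; wireMediaEvents and playbackPercent are hypothetical helper names of our own, not part of any API:

```javascript
// Attach logging listeners for a few common media events to a
// video element (or anything exposing addEventListener)
function wireMediaEvents(video) {
  video.addEventListener("loadedmetadata", function () {
    // Fired once duration and dimensions are known
    console.log("Duration: " + video.duration + "s, size: " +
      video.videoWidth + "x" + video.videoHeight);
  });

  video.addEventListener("timeupdate", function () {
    // Fired periodically as the playback position advances
    console.log("Played: " +
      playbackPercent(video.currentTime, video.duration) + "%");
  });

  video.addEventListener("ended", function () {
    console.log("Playback finished");
  });
}

// Hypothetical helper: percentage of the video played so far
function playbackPercent(currentTime, duration) {
  if (!duration) {
    return 0;
  }
  return Math.round((currentTime / duration) * 100);
}
```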

One final reason to be excited about the new HTML5 video element is that each frame of the video can be rendered right into a canvas 2D rendering context, just as if that single frame were a standalone image. This way, we are able to do video processing right in the browser. Unfortunately, there is no video.toDataURL equivalent with which we could export a video created by our JavaScript application.

var ctx = null;
var ctxOff = null;

var poster = new Image();
poster.src = "bunny-poster.png";
poster.addEventListener("click", initVideo);
document.body.appendChild(poster);

// Step 1: Create the video object, but never attach it to the DOM
var video = document.createElement("video");
video.autoplay = false;
video.loop = false;

// Step 2: Add one or more sources
var sources = [
  {src: "bunny.ogg", type: "video/ogg"},
  {src: "bunny.mp4", type: "video/mp4"},
  {src: "bunny.webm", type: "video/webm"}
];

for (var i = 0; i < sources.length; i++) {
  var source = document.createElement("source");
  source.src = sources[i].src;
  source.type = sources[i].type;

  video.appendChild(source);
}

// Step 3: Initialize the video
function initVideo() {
  video.addEventListener("play", initCanvas);
  video.play();
}

// Step 4: Only initialize our canvases once
function initCanvas() {
  // Step 1: Initialize canvas, if needed
  if (ctx == null) {
    var canvas = document.createElement("canvas");
    var canvasOff = document.createElement("canvas");

    canvas.width = canvasOff.width = video.videoWidth;
    canvas.height = canvasOff.height = video.videoHeight;

    ctx = canvas.getContext("2d");
    ctxOff = canvasOff.getContext("2d");

    // Make the canvas, not the video player, visible
    poster.parentNode.removeChild(poster);
    document.body.appendChild(canvas);
  }

  renderOnCanvas();
}

function renderOnCanvas() {
  // Draw frame to canvas if video is still playing
  if (!video.paused && !video.ended) {

    // Draw original frame to offscreen canvas
    ctxOff.drawImage(video, 0, 0,
      ctxOff.canvas.width, ctxOff.canvas.height);

    // Manipulate frames offscreen
    var frame = getVideoFrame();

    // Draw new frame to visible video player
    ctx.putImageData(frame, 0, 0);
    requestAnimationFrame(renderOnCanvas);
  }
}

function getVideoFrame() {
  var img = ctxOff.getImageData(0, 0,
    ctxOff.canvas.width, ctxOff.canvas.height);

  // Invert the color of every pixel in the canvas context
  for (var i = 0, len = img.data.length; i < len; i += 4) {
    img.data[i] = 255 - img.data[i];
    img.data[i + 1] = 255 - img.data[i + 1];
    img.data[i + 2] = 255 - img.data[i + 2];
  }

  return img;
}

The idea is to play the video offscreen, meaning that the actual video player is never attached to the DOM. The video still plays, but the browser never needs to blit each frame to the screen (it only plays in memory). As each frame plays, we draw that frame to a canvas context (just as we do with images), take the pixels from the canvas context, manipulate the pixel data, and finally draw it back onto the canvas.

Since a video is nothing more than a sequence of frames played one after the other, giving the illusion of animation, we can extract each frame from an HTML5 video and use it with the canvas API just like any other image. Since there isn't a way to draw to the video element, we simply keep on drawing each frame from the video player into a plain canvas object, achieving the same result—but with carefully crafted pixels. The following screenshot illustrates the result of this technique:

[Screenshot: the video rendered through a canvas, with every frame's colors inverted]

One way to achieve this result is to create two canvas elements. If we drew everything to a single canvas (draw a frame from the video, manipulate it, draw the next frame, and so on), each customized frame would only be visible for a fraction of a second, until the next incoming frame was drawn over it; that frame, in turn, would only survive for as long as it took us to loop through its pixel data and redraw it. You get the idea: the result would be a flickering mess, and not at all what we want.

So instead we use two canvas contexts. One context is in charge of displaying only the pixels we have worked on (that is, the manipulated pixels), while the other is never visible to the user and serves the purpose of holding each frame as it comes straight from the video. This way, we draw to our main, visible canvas only once per iteration, and all that is ever displayed in it are the manipulated pixels. The original pixels (that is, the pixels from the video playing in memory) continue to be streamed to the offscreen canvas context as fast as they can.
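The inversion loop inside getVideoFrame is just one possible per-pixel effect. As a sketch of an alternative, the grayscaleFrame function below (the name is our own) averages each pixel's color channels; it assumes only an ImageData-like object with a flat data array of RGBA values, so it could be dropped in wherever the inverted frame was produced:

```javascript
// Convert an ImageData-like object ({data: [r, g, b, a, ...]})
// to grayscale by averaging each pixel's red, green, and blue channels
function grayscaleFrame(img) {
  for (var i = 0, len = img.data.length; i < len; i += 4) {
    var avg = Math.round(
      (img.data[i] + img.data[i + 1] + img.data[i + 2]) / 3);
    img.data[i] = avg;      // red
    img.data[i + 1] = avg;  // green
    img.data[i + 2] = avg;  // blue
    // img.data[i + 3] (alpha) is left untouched
  }
  return img;
}
```

Swapping this in for the inversion loop yields black-and-white playback without touching the rest of the two-canvas pipeline.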
