Some time ago I wrote an article about building a video before-and-after comparator for a client. My most recent project with the video graphics company Sequence used a variation of the technique, but with something a little special…
One of the issues with playing back two video streams in a browser at the same time is that eventually they will get out of sync with each other: one might start a little earlier than the other, or play a frame or two faster. The earlier example was only a few seconds in length, making any synchronization issue minor, but in this case the full videos are over two minutes long, meaning that any syncing issue would quickly become very apparent.
There’s no clearly performant way to sync two videos in JavaScript. It’s possible to read the current playback time of one and try to force it onto the other, but that means the second video isn’t really playing at all: it’s jumping from frame to frame, and usually skipping a few in the process.
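For illustration, a naive sync loop might look something like this (a sketch only; videoA and videoB are hypothetical references to two playing video elements):
// naive approach (for illustration only): force one video's playhead
// onto the other whenever they drift apart
function naiveSync() {
    // any drift beyond 50ms triggers a seek on the second video
    if (Math.abs(videoB.currentTime - videoA.currentTime) > 0.05) {
        videoB.currentTime = videoA.currentTime; // seeking causes a visible jump
    }
    requestAnimationFrame(naiveSync);
}
requestAnimationFrame(naiveSync);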
My solution was to use the <canvas> element. By starting both videos at the same time only when they were ready to play, and reading frames from them simultaneously using requestAnimationFrame to paint onto the <canvas> element, I could pretty much guarantee the visual presentation of both videos at the same time.
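In outline, the technique works like this (a minimal sketch, assuming video and canvas are references to a <video> and a <canvas> element):
// minimal sketch of the core idea: copy the current video frame
// onto the canvas on every animation frame
var ctx = canvas.getContext("2d");
function paint() {
    ctx.drawImage(video, 0, 0);
    requestAnimationFrame(paint);
}
video.addEventListener("play", function() {
    requestAnimationFrame(paint);
}, false);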
The Markup
The HTML was very similar to my earlier example, only with additions above and below the video code:
<canvas id="videoMerge" width="1024" height="576">
</canvas>
<div id="video-compare-container">
<video poster="sequence-logo-solid.png" id="rightVideo">
<source src="after.webm">
<source src="after.mp4">
</video>
<video poster="sequence-logo-wireframe.png" id="leftVideo">
<source src="before.webm">
<source src="before.mp4">
</video>
</div>
<div id="videoUI"></div>
The <canvas> is the exact width and height of the final videos; the #videoUI element is prepared to catch the buttons that will be progressively added to the page.
The CSS
The styling is also fairly straightforward; the #videoUI container is set to display: flex in order to evenly distribute the buttons that will be added to it:
#video-compare-container {
    display: block;
    line-height: 0;
}
#videoMerge, #videoUI {
    width: 100%;
    display: block;
    margin: 0 auto;
    background: url("sequence-background.png");
    background-size: cover;
}
#videoUI {
    font-family: Helvetica Neue Regular, Helvetica, Arial, sans-serif;
    display: flex;
    justify-content: space-between;
    background: #f5f5f5;
}
#videoUI button {
    padding: 1rem;
    font-weight: 400;
    font-size: 1.3rem;
    border: none;
    background: none;
    cursor: pointer;
    outline: none;
}
#videoUI button:hover {
    background: #8e8b8b;
    color: white;
}
The <canvas> is provided with a background image representing the first (divided) frame of the video, since the videos themselves don’t appear on the <canvas> element until the Play button is used.
The Script
The JavaScript is placed at the bottom of the page. The first part identifies the various elements, sets the initial position of the divider and the width and height of the videos, adds the Play button, and hides the videos. (Note that videos can still be played and read with JavaScript, even if they are hidden.)
var videoContainer = document.getElementById("video-compare-container"),
    videoUI = document.getElementById("videoUI"),
    videoMerge = document.getElementById("videoMerge"),
    leftVideo = document.getElementById("leftVideo"),
    rightVideo = document.getElementById("rightVideo"),
    videoControl = document.createElement("button"),
    position = 0.5,
    vidHeight = 576,
    vidWidth = 1024,
    mergeContext = videoMerge.getContext("2d");

videoContainer.style.display = "none";
videoControl.innerHTML = "Play";
videoUI.appendChild(videoControl);
A function controls the pause and play of the videos:
videoControl.addEventListener("click", playPause, false);

function playPause() {
    if (leftVideo.paused) {
        videoControl.innerHTML = "Pause";
        playVids();
    } else {
        leftVideo.pause();
        rightVideo.pause();
        videoControl.innerHTML = "Play";
    }
}
The rest of the script is initiated when both videos are ready to play:
function playVids() {
    if (leftVideo.readyState > 3 && rightVideo.readyState > 3) {
        leftVideo.play();
        rightVideo.play();

        function trackLocation(e) {
            // touch events don't carry pageX directly; read it from the first touch point
            var pageX = e.changedTouches ? e.changedTouches[0].pageX : e.pageX;
            position = (pageX - videoMerge.offsetLeft) / videoMerge.offsetWidth;
            if (position <= 1 && position >= 0) {
                leftVideo.volume = position;
                rightVideo.volume = 1 - position;
            }
        }
        videoMerge.addEventListener("mousemove", trackLocation, false);
        videoMerge.addEventListener("touchstart", trackLocation, false);
        videoMerge.addEventListener("touchmove", trackLocation, false);

        function drawLoop() {
            mergeContext.drawImage(leftVideo, 0, 0, vidWidth, vidHeight,
                0, 0, vidWidth, vidHeight);
            mergeContext.drawImage(rightVideo,
                (vidWidth * position).clamp(0.01, vidWidth), 0,
                (vidWidth - (vidWidth * position)).clamp(0.01, vidWidth), vidHeight,
                (vidWidth * position).clamp(0.01, vidWidth), 0,
                (vidWidth - (vidWidth * position)).clamp(0.01, vidWidth), vidHeight);
            requestAnimationFrame(drawLoop);
        }
        requestAnimationFrame(drawLoop);
    }
}
The script starts by playing both videos and routing any interaction on the <canvas> to the trackLocation() function. As in the previous example, trackLocation() determines the relative position of the interaction inside the video area, from 0 (extreme left) to 1 (extreme right), and sets the volume of leftVideo to that amount, so the volume of each soundtrack is proportional to how much of that video is visible: a pointer a quarter of the way across, for example, sets the left video’s volume to 0.25 and the right video’s to 0.75.
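One detail worth noting: if either video isn’t ready when the button is first pressed, playVids() does nothing, even though the button label has already flipped to “Pause”. A possible refinement (an assumption on my part, not part of the original script) is to poll until both videos are ready:
// possible refinement (not in the original script):
// keep checking until both videos are ready, then start playback
function tryPlay() {
    // readyState 4 ("HAVE_ENOUGH_DATA") means a video can play through
    if (leftVideo.readyState > 3 && rightVideo.readyState > 3) {
        playVids();
    } else {
        requestAnimationFrame(tryPlay); // check again on the next frame
    }
}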
drawLoop
The drawLoop function is the most complex aspect of the script: it must draw a portion of each video. You’ll notice that each drawImage() call on mergeContext takes eight numerical values after the video source (see the annotated sketch after this list):
- the first two values are the top left coordinates of the source (i.e. the video): an x value, followed by a y.
- the second pair of values are the width and height of the region to copy from the source.
- the third pair of numbers represent the top left corner of the target (the <canvas> element), in x and y order. This is where the portion of the video frame (as determined by the first four values) will start to be “painted”.
- not surprisingly, the last pair of numbers (width followed by height) set the size of the painted area on the target, where the painting of the copied video frame ends.
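Laid out with placeholder names, the call looks like this:
// annotated form of the drawImage() call (placeholder names for illustration)
mergeContext.drawImage(video,
    sx, sy,           // top left corner of the source region in the video frame
    sWidth, sHeight,  // width and height of that source region
    dx, dy,           // top left corner of the destination on the canvas
    dWidth, dHeight   // width and height of the painted area
);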
You can think of each drawImage() call as “lifting” a portion of a frame from each (hidden) video and “pasting” it down onto the canvas. To make things easier, the first call draws the complete frame of the left video onto the <canvas>: starting from the top left corner of the element, all the way to the bottom right.
The second call is trickier: it must copy the video from the position of the cursor or touch, relative to the video itself, all the way down to the bottom right corner of the video, and paste it into the corresponding area of the <canvas>. The clamp prototype is used to ensure that these computed coordinates do not exceed the bounds of the <canvas> itself:
Number.prototype.clamp = function(min, max) {
    return Math.min(Math.max(this, min), max);
};
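For example (values chosen for illustration):
// sample results, assuming the 1024-pixel-wide canvas used here
(1100).clamp(0.01, 1024); // 1024 - can't paint past the right edge
(-5).clamp(0.01, 1024);   // 0.01 - keeps the copied region at least a sliver wide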
This copied portion is pasted on top of the previously drawn left video, completing the effect.
Conclusion
It’s still possible for low-powered computers to skip frames using this solution - we’re still loading two 1024×576 videos and trying to play them back simultaneously - but it appears to work well in most cases.
Enjoy this piece? I invite you to follow me at twitter.com/dudleystorey to learn more.