This content originally appeared on DEV Community and was authored by KUMAR HARSH
Unreal Webcam Fun
So what are we going to be building today?
Today we are going to make a photo booth with JavaScript.
First of all, we've got our video being piped in from our webcam, and then piped into a canvas element. Once it's in a canvas, we can start to do all kinds of cool things with it: we can take photos and download them to the computer as real image files. Then, in the script, we can add effects, like a red filter, or a really cool RGB split.
Before we get started today, there is one thing we need up and running, and that is a server. So far in this course we've just been working off of the file directly. However, because of security restrictions on accessing a user's webcam, the page must be served from what's called a "secure origin". A secure origin is a website served over HTTPS; in our case, localhost also counts as a secure origin. That means index.html needs to be fed through some sort of server.
Wes included a package.json file. If we open that up, you'll see one dependency called "browser-sync". It opens your website on a little local server, and it also gives you live reloading and a whole bunch of other stuff.
First we run npm install, and when that has finished, we run npm start.
This is the HTML we start with (including the audio element our capture sound will use):
<div class="photobooth">
<div class="controls">
<button onClick="takePhoto()">Take Photo</button>
</div>
<canvas class="photo"></canvas>
<video class="player"></video>
<div class="strip"></div>
</div>
<audio class="snap" src="./snap.mp3" hidden></audio>
We quickly make a couple of selectors:
const video = document.querySelector('.player');
const canvas = document.querySelector('.photo');
const ctx = canvas.getContext('2d');
const strip = document.querySelector('.strip');
const snap = document.querySelector('.snap');
The first thing we want to do is get the video
being piped into that video element.
navigator.mediaDevices.getUserMedia({ video: true, audio: false })
getUserMedia returns a promise, so we call a .then on it.
navigator.mediaDevices.getUserMedia({ video: true, audio: false })
.then(localMediaStream => {
console.log(localMediaStream);
Now what we need to do is take our video element and set its source to be that localMediaStream. Simply assigning it to video.src won't work, because localMediaStream is an object, not a URL; instead we assign it to the srcObject property, which accepts a stream directly.
video.srcObject = localMediaStream;
video.play();
})
.catch(err => {
console.error(`OH NO!!!`, err);
});
In older tutorials you'll see URL.createObjectURL here; it converted the media stream into a blob URL the video player could understand. That approach is deprecated for media streams, and srcObject is the modern replacement.
At this point we would only see one or two frames, not a continuous video stream. Why is that? Because we set the video's source to the media stream, but it won't update unless we actually play it. Therefore, underneath that we call video.play(), which starts it playing.
We also need a catch, just in case someone doesn't allow access to their webcam. We need to handle that error.
Here is the complete getVideo
function:
function getVideo() {
navigator.mediaDevices
.getUserMedia({ video: true, audio: false })
.then((localMediaStream) => {
console.log(localMediaStream);
video.srcObject = localMediaStream;
video.play();
})
.catch((err) => {
console.error(`OH NO!!!`, err);
});
}
getVideo();
The next thing that we need to do is to take a frame from this video, and to paint it onto the actual canvas on the screen.
We'll first resize our canvas to match the width and height of the actual video. It's really important that the canvas is the exact same size as the video frame before we paint to it; if your webcam delivers a different width and height, the frame won't fit the canvas.
const width = video.videoWidth;
const height = video.videoHeight;
canvas.width = width;
canvas.height = height;
Now, every 16 milliseconds (roughly 60 frames per second), we take an image from the webcam and put it into the canvas.
return setInterval(() => {
ctx.drawImage(video, 0, 0, width, height);
}, 16);
Here is the complete paintToCanvas
function:
function paintToCanvas() {
const width = video.videoWidth;
const height = video.videoHeight;
canvas.width = width;
canvas.height = height;
return setInterval(() => {
ctx.drawImage(video, 0, 0, width, height);
}, 16);
}
The way that drawImage works is that you pass it an image or a video element, and it paints it straight onto the canvas. We start at (0, 0), the top left-hand corner of the canvas, and paint the full width and height. That's exactly why we resized our canvas.
We return that interval, because if you ever need to stop the painting, you can keep a reference to it and call clearInterval on it.
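The same stop-the-loop pattern can be sketched in plain Node, with a counter standing in for the drawImage call (the names frames and id are made up for this example):

```javascript
// The returned interval id lets a caller stop the repaint loop later.
// Here a counter stands in for drawImage.
let frames = 0;
const id = setInterval(() => { frames += 1; }, 16);

// ...later, when the booth should stop painting:
clearInterval(id);
console.log(frames); // 0 — cleared before the first 16 ms tick could fire
```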
It's kind of a pain to have to run paintToCanvas manually. So we listen on the video element for an event called canplay — an event the video emits once it has enough data to start playing.
video.addEventListener("canplay", paintToCanvas);
Now let's work on the takePhoto function. First of all, we play a capture sound for effect (snap is our audio element):
snap.currentTime = 0;
snap.play();
What we now need to do is take the data out of the canvas. We can do this with canvas.toDataURL, passing it "image/jpeg". The image we get back is a text-based representation, so we need to turn it into a link.
const link = document.createElement("a");
link.href = data;
link.setAttribute("download", "handsome");
We can now not only take photos but download them as well.
Now we want the photos to be visible on the screen as well:
link.innerHTML = `<img src="${data}" alt="Handsome Man" />`;
strip.insertBefore(link, strip.firstChild);
Here is the complete take photo function:
function takePhoto() {
// play the sound
snap.currentTime = 0;
snap.play();
// take the data out of the canvas
const data = canvas.toDataURL("image/jpeg");
const link = document.createElement("a");
link.href = data;
link.setAttribute("download", "handsome");
link.innerHTML = `<img src="${data}" alt="Handsome Man" />`;
strip.insertBefore(link, strip.firstChild);
}
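toDataURL hands back the image as a base64 data URL: a MIME type plus the base64-encoded bytes in one string. A hand-rolled sketch of the same format (the bytes here are just made-up JPEG marker bytes for illustration):

```javascript
// A data URL is just "data:<mime>;base64,<encoded bytes>".
const fakeJpegBytes = Buffer.from([0xff, 0xd8, 0xff, 0xd9]); // made-up marker bytes
const dataUrl = `data:image/jpeg;base64,${fakeJpegBytes.toString('base64')}`;
console.log(dataUrl); // data:image/jpeg;base64,/9j/2Q==
```

One string like this works as both an href (for the download) and an img src (for the strip).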
The last thing we want to do is apply some filters. The way a filter works is that you get the pixels out of the canvas, mess with them by changing the RGB values, and put them back in.
So let's go back up to our paintToCanvas
:
Here are the changes we make:
// take the pixels out
let pixels = ctx.getImageData(0, 0, width, height);
// mess with them
pixels = redEffect(pixels); //red filter
// pixels = greenScreen(pixels); //green screen effect
// pixels = rgbSplit(pixels); //rgb split effect
// ctx.globalAlpha = 0.8; //for ghosting effect
// put them back
ctx.putImageData(pixels, 0, 0);
}, 16);
Here is the completed function:
function paintToCanvas() {
const width = video.videoWidth;
const height = video.videoHeight;
canvas.width = width;
canvas.height = height;
return setInterval(() => {
ctx.drawImage(video, 0, 0, width, height);
// take the pixels out
let pixels = ctx.getImageData(0, 0, width, height);
// mess with them
pixels = redEffect(pixels); //red filter
// pixels = greenScreen(pixels); //green screen effect
// pixels = rgbSplit(pixels); //rgb split effect
// ctx.globalAlpha = 0.8; //for ghosting effect
// put them back
ctx.putImageData(pixels, 0, 0);
}, 16);
}
Now we create the functions for the effects:
function redEffect(pixels) {
for (let i = 0; i < pixels.data.length; i += 4) {
pixels.data[i + 0] = pixels.data[i + 0] + 200; // RED
pixels.data[i + 1] = pixels.data[i + 1] - 50; // GREEN
pixels.data[i + 2] = pixels.data[i + 2] * 0.5; // Blue
}
return pixels;
}
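Note that pixels.data is a Uint8ClampedArray, so values written outside 0–255 clamp instead of wrapping; that's why adding 200 simply saturates the red channel. A one-pixel sketch (the sample values are invented for the example):

```javascript
// One RGBA pixel; writes outside 0..255 clamp instead of wrapping around.
const data = new Uint8ClampedArray([100, 120, 200, 255]); // R, G, B, A
data[0] = data[0] + 200; // 300 clamps to 255
data[1] = data[1] - 50;  // 70
data[2] = data[2] * 0.5; // 100
console.log(Array.from(data)); // [255, 70, 100, 255]
```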
function rgbSplit(pixels) {
for (let i = 0; i < pixels.data.length; i += 4) {
pixels.data[i - 150] = pixels.data[i + 0]; // RED
pixels.data[i + 500] = pixels.data[i + 1]; // GREEN
pixels.data[i - 550] = pixels.data[i + 2]; // Blue
}
return pixels;
}
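rgbSplit looks like it should break for small i (e.g. pixels.data[i - 150]), but typed arrays silently ignore out-of-bounds writes, so those assignments are no-ops near the edges of the buffer; only the in-bounds offsets shift the channels and produce the split. A tiny demonstration:

```javascript
const data = new Uint8ClampedArray(8);
data[-4] = 99;   // out-of-bounds write on a typed array: silently ignored
data[100] = 99;  // likewise past the end
data[4] = 99;    // in bounds: stored normally
console.log(data[-4], data[100], data[4], data.length); // undefined undefined 99 8
```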
function greenScreen(pixels) {
const levels = {};
document.querySelectorAll(".rgb input").forEach((input) => {
levels[input.name] = input.value;
});
for (let i = 0; i < pixels.data.length; i = i + 4) {
const red = pixels.data[i + 0];
const green = pixels.data[i + 1];
const blue = pixels.data[i + 2];
const alpha = pixels.data[i + 3];
if (
red >= levels.rmin &&
green >= levels.gmin &&
blue >= levels.bmin &&
red <= levels.rmax &&
green <= levels.gmax &&
blue <= levels.bmax
) {
// take it out!
pixels.data[i + 3] = 0;
}
}
return pixels;
}
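Here is a minimal sketch of the knockout logic on a hand-made two-pixel buffer, assuming the sliders give rmin/gmin/bmin of 0 and rmax/gmax/bmax of 100 (the buffer values are invented for the example):

```javascript
const levels = { rmin: 0, gmin: 0, bmin: 0, rmax: 100, gmax: 100, bmax: 100 };
const data = new Uint8ClampedArray([
  50, 60, 70, 255,  // every channel inside the range -> alpha gets zeroed
  200, 60, 70, 255, // red above rmax -> pixel left opaque
]);
for (let i = 0; i < data.length; i += 4) {
  const red = data[i], green = data[i + 1], blue = data[i + 2];
  if (red >= levels.rmin && green >= levels.gmin && blue >= levels.bmin &&
      red <= levels.rmax && green <= levels.gmax && blue <= levels.bmax) {
    data[i + 3] = 0; // "take it out" by making the pixel fully transparent
  }
}
console.log(data[3], data[7]); // 0 255
```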
With this we are done with the project.
GitHub repo:
Blog on Day-18 of javascript30
Blog on Day-17 of javascript30
Blog on Day-16 of javascript30
Follow me on Twitter
Follow me on Linkedin
DEV Profile
You can also do the challenge at javascript30
Thanks @wesbos, Wes Bos, for sharing this with us!
Please comment and let me know your views
Thank You!
KUMAR HARSH | Sciencx (2021-06-21T18:04:41+00:00) JavaScript-30-Day-19. Retrieved from https://www.scien.cx/2021/06/21/javascript-30-day-19/