Face-api.js on NodeJS using your webcam.

February 10, 2020

Camera for face recognition

I started a small project using face-api.js on NodeJS, but first I wanted to test the library out. Most of the webcam examples are for the browser version of the module, where you can use getUserMedia, and I needed a quick way to get feedback while working on my node script.
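For comparison, the browser examples typically stream the camera straight into a video element, roughly like this (a minimal sketch; the element id is a placeholder):

// Browser-only: request camera access and pipe the stream into a <video> tag.
const video = document.getElementById("video"); // placeholder element id

navigator.mediaDevices
  .getUserMedia({ video: true })
  .then(stream => {
    video.srcObject = stream;
  })
  .catch(console.error);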

Fortunately there is an easy way to get data from the webcam and use it in your NodeJS script.

We’ll start with the base script that actually uses face-api.js, then look at the module that calls the webcam, explaining both step by step.

const { getPicture } = require("./getPicture");
const faceapi = require("face-api.js");
const canvas = require("canvas");

// face-api.js expects browser globals; provide the node-canvas implementations.
const { Canvas, Image, ImageData } = canvas;
faceapi.env.monkeyPatch({ Canvas, Image, ImageData });

(async function run() {
  // Load the pre-trained model weights from disk.
  await faceapi.nets.ssdMobilenetv1.loadFromDisk("./weights");
  await faceapi.nets.faceExpressionNet.loadFromDisk("./weights");

  try {
    // Take a picture with the webcam, then load it as an image.
    await getPicture();
    const image = await canvas.loadImage("./test_picture.jpg");

    // Detect all faces, then classify each face's expression.
    const detections = await faceapi
      .detectAllFaces(image)
      .withFaceExpressions();

    if (!detections.length) return;
    console.log(detections);
  } catch (error) {
    console.log(error);
  }
})();

First we import the modules we need: getPicture is a local module which calls the webcam, face-api.js is the actual library we want to use, and canvas lets us do graphics work from within NodeJS through a canvas-like API.

faceapi.env.monkeyPatch({ Canvas, Image, ImageData });

This step is necessary to make face-api.js work in NodeJS: the library expects the browser globals Canvas, Image and ImageData, so we patch its environment with the implementations from the canvas module.

await faceapi.nets.ssdMobilenetv1.loadFromDisk("./weights");
await faceapi.nets.faceExpressionNet.loadFromDisk("./weights");

We load the trained neural networks’ weights. Being able to load them individually is most useful in the browser, where it avoids downloading unnecessary data. There you’d use await faceapi.nets.faceExpressionNet.loadFromUri() instead, as sketched below.
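In a browser you could load the same nets over HTTP, along these lines (a minimal sketch; the /weights path is an assumption about where the weight files are served from):

async function loadModels() {
  // "/weights" is a placeholder URL for wherever the weight files are hosted.
  await faceapi.nets.ssdMobilenetv1.loadFromUri("/weights");
  await faceapi.nets.faceExpressionNet.loadFromUri("/weights");
}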

await getPicture();
const image = await canvas.loadImage("./test_picture.jpg");

const detections = await faceapi
    .detectAllFaces(image)
    .withFaceExpressions();

if (!detections.length) return;
console.log(detections);

Then we call getPicture(), which uses the webcam to save a picture to file, and we process that picture with the face-api.js API. It’s a two-step call: first we detect all the faces, then we get their expressions. This way we’ll know whether the faces are happy, sad, angry, etc.
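Each detection carries an expressions object mapping every emotion to a probability, so you can, for example, pick the most likely one. A minimal sketch, assuming detections has the shape logged above:

// For each detected face, print the expression with the highest score.
detections.forEach(({ expressions }, i) => {
  const [emotion, score] = Object.entries(expressions).reduce(
    (top, entry) => (entry[1] > top[1] ? entry : top)
  );
  console.log(`Face ${i}: ${emotion} (${(score * 100).toFixed(1)}%)`);
});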

So how do we use the webcam? We can do something like this for the getPicture module.

const NodeWebcam = require("node-webcam");

function getPicture() {
  const opts = {
    width: 1280,
    height: 720,
    quality: 100,
    delay: 0,
    saveShots: true,
    output: "jpeg",
    device: false, // false selects the default camera
    callbackReturn: "location", // resolve with the saved file's path
    verbose: false
  };

  // Wrap node-webcam's callback API in a promise for easier async handling.
  return new Promise((resolve, reject) => {
    NodeWebcam.capture("test_picture", opts, function (err, data) {
      if (err) return reject(err);
      resolve(data);
    });
  });
}

exports.getPicture = getPicture;

We import the node-webcam module, which can be installed via npm i node-webcam. Its capture function takes a callback, so we wrap it in a promise for easier handling. The resulting picture is then loaded from file as shown above.
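Because callbackReturn is set to "location", the promise resolves with the path of the saved shot, so you could use the resolved value instead of hard-coding the file name. A small sketch of that variation:

const { getPicture } = require("./getPicture");
const canvas = require("canvas");

(async () => {
  // node-webcam resolves with the saved file's location here.
  const location = await getPicture();
  const image = await canvas.loadImage(location);
  console.log(`Loaded ${image.width}x${image.height} shot from ${location}`);
})();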

And there you have it! A simple way to get some quick webcam feedback when using face-api.js on NodeJS.