An Introduction to OpenFace for Head and Face Tracking

James Trujillo ( james.trujillo@donders.ru.nl )
Wim Pouw ( wim.pouw@donders.ru.nl )
18-11-2021

Info documents

This Python coding module demonstrates how to use OpenFace, an open-source program that provides face and head tracking for images and videos. We'll go over basic installation and the simple commands needed to run the tracking, and take a first look at the output data.

Installing OpenFace

Running OpenFace

FeatureExtraction.exe is the main executable for processing videos with a single face. FaceLandmarkVidMulti.exe is used when there are multiple faces.

For example, to run OpenFace on the sample video provided, we can type the following in cmd from the OpenFace directory:

FeatureExtraction.exe -f "./samples/2015-10-15-15-14.avi"

OpenFace will update you on its progress as it runs.


By default, all output is stored in a folder called processed inside the OpenFace directory. However, this means all output .csv files, video files, etc. are thrown into one folder. If you don't want this, you can specify an output directory by adding the following to the FeatureExtraction command: -out_dir "output_path"
Timing: depending on your machine, OpenFace takes approximately the duration of your video plus 20% to process it.
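If you prefer to drive OpenFace from Python rather than typing the command in cmd, the same call can be scripted with subprocess. The paths below (OpenFace install location, video, output directory) are placeholders for this sketch; adjust them to your own setup:

```python
import subprocess
from pathlib import Path

# Placeholder paths -- point these at your own OpenFace install,
# input video, and desired output directory.
openface_dir = Path("D:/OpenFace")
video = Path("./samples/2015-10-15-15-14.avi")
out_dir = Path("./my_output")

# Build the FeatureExtraction command: -f selects the input video,
# -out_dir redirects output away from the default 'processed' folder.
cmd = [
    str(openface_dir / "FeatureExtraction.exe"),
    "-f", str(video),
    "-out_dir", str(out_dir),
]
print(" ".join(cmd))

# Uncomment to actually run OpenFace (requires a working install):
# subprocess.run(cmd, check=True)
```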

OpenFace Output

OpenFace provides several types of output, including a video visualizing the tracking, as well as a .csv containing coordinate data, rotation data, and action units.

First, let's take a look at the output video. How does it perform? You should be thinking about the types of questions you want to ask using your data, and take a critical look at whether the tracking is sensitive and accurate enough to serve your purpose. Remember that (some) jitter can be removed with smoothing!
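As a sketch of what such smoothing could look like, here is a rolling median applied to a synthetic landmark coordinate. This is one common, simple choice, not a method prescribed by OpenFace, and the column it stands in for (a single landmark x-coordinate) is illustrative:

```python
import numpy as np
import pandas as pd

# Synthetic example: a slowly moving x-coordinate with frame-to-frame
# jitter, standing in for one OpenFace landmark column.
rng = np.random.default_rng(0)
frames = np.arange(100)
signal = np.sin(frames / 15) * 50 + 200          # underlying movement
jitter = rng.normal(0, 2.0, size=frames.size)    # tracking noise
x = pd.Series(signal + jitter)

# A centered rolling median removes spike-like jitter while
# preserving the slower, genuine head/face movement.
x_smooth = x.rolling(window=7, center=True, min_periods=1).median()

print("raw error:     ", float((x - signal).abs().mean()))
print("smoothed error:", float((x_smooth - signal).abs().mean()))
```

The window size (7 frames here) trades off noise removal against responsiveness; inspect the smoothed trace against your video before settling on one.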

Now, let's take a look at the numerical data, and what was actually tracked.
Action Units
OpenFace recognizes a subset of all possible action units. These include: AU1 (inner brow raiser), AU2 (outer brow raiser), AU4 (brow lowerer), AU5 (upper lid raiser), AU6 (cheek raiser), AU7 (lid tightener), AU9 (nose wrinkler), AU10 (upper lip raiser), AU12 (lip corner puller), AU14 (dimpler), AU15 (lip corner depressor), AU17 (chin raiser), AU20 (lip stretcher), AU23 (lip tightener), AU25 (lips part), AU26 (jaw drop), AU28 (lip suck), and AU45 (blink).

Rather than write our own scripts to summarize different aspects of the data, we can make use of ExploFace to do much of this (remember to install exploface before running: pip install exploface).

Importing OpenFace into ELAN Using ExploFace

This first command just gets the .csv file and loads it into a dataframe. This can be a useful starting point if you want to run further analyses.
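If you'd rather load the file directly, plain pandas works too. The snippet below uses a miniature in-memory stand-in for an OpenFace .csv (real files have hundreds of columns); in practice you would pass the path to your processed file instead. Note that some OpenFace versions pad the column names with spaces, hence the strip:

```python
import io
import pandas as pd

# A miniature stand-in for an OpenFace output .csv (real files contain
# hundreds of columns: landmarks, pose, gaze, and AU columns).
fake_csv = io.StringIO(
    "frame, timestamp, confidence, success, pose_Rx, AU12_r, AU12_c\n"
    "1, 0.000, 0.98, 1, 0.03, 1.25, 1\n"
    "2, 0.033, 0.97, 1, 0.04, 1.40, 1\n"
)

# In practice, point read_csv at your processed file, e.g.:
# df = pd.read_csv("processed/2015-10-15-15-14.csv")
df = pd.read_csv(fake_csv)

# Strip the padded column names so columns can be addressed
# as df['confidence'], df['AU12_r'], etc.
df.columns = df.columns.str.strip()
print(df[["frame", "confidence", "AU12_r"]])
```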

Exploface can do a bit more than this though. A useful feature here is to get some summary statistics of what's happening in your video.

Here we see the Action Unit detections, along with how many there were of each one, and their durations.
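The same kind of summary can also be computed by hand with pandas: the binary AU presence columns (named like AU12_c in the OpenFace output) contain runs of 1s, and each run is one detection event whose length gives its duration. A sketch on synthetic data, with an assumed frame rate of 30 fps:

```python
import pandas as pd

fps = 30.0  # assumed frame rate of the video

# Synthetic presence column for AU12 (1 = active on that frame),
# mimicking OpenFace's binary 'AU12_c' output.
au12_c = pd.Series([0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0])

# Label contiguous runs: the run id increases whenever the value changes.
run_id = (au12_c != au12_c.shift()).cumsum()
runs = au12_c.groupby(run_id).agg(value="first", n_frames="size")

# Keep only the active runs -> one row per AU12 detection event.
events = runs[runs["value"] == 1].copy()
events["duration_s"] = events["n_frames"] / fps

print(len(events), "detections of AU12")
print(events[["n_frames", "duration_s"]])
```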
One of the nicer features here is that we can also convert these .csv data into a .eaf file for use in ELAN. We could then import these annotations into ELAN for further checking/cleaning, or for further analysis in ELAN itself.
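Under the hood, an .eaf file is just XML. The sketch below writes hypothetical detection events to an ELAN-readable tier using only the standard library; it is a heavily simplified skeleton of the EAF format (real files written by ELAN or exploface contain more metadata), so treat it as illustrative rather than a replacement for the exploface export:

```python
import xml.etree.ElementTree as ET

# Hypothetical detection events: (label, start_ms, end_ms).
events = [("AU12", 100, 600), ("AU12", 1200, 1500)]

# Simplified skeleton of an EAF document.
doc = ET.Element("ANNOTATION_DOCUMENT", AUTHOR="", DATE="", FORMAT="3.0", VERSION="3.0")
ET.SubElement(doc, "HEADER", MEDIA_FILE="", TIME_UNITS="milliseconds")
time_order = ET.SubElement(doc, "TIME_ORDER")
tier = ET.SubElement(doc, "TIER", TIER_ID="AU12", LINGUISTIC_TYPE_REF="default-lt")

for i, (label, start, end) in enumerate(events):
    # Each event gets two time slots (start, end) and one aligned annotation.
    ts1, ts2 = f"ts{2*i+1}", f"ts{2*i+2}"
    ET.SubElement(time_order, "TIME_SLOT", TIME_SLOT_ID=ts1, TIME_VALUE=str(start))
    ET.SubElement(time_order, "TIME_SLOT", TIME_SLOT_ID=ts2, TIME_VALUE=str(end))
    ann = ET.SubElement(tier, "ANNOTATION")
    alignable = ET.SubElement(
        ann, "ALIGNABLE_ANNOTATION", ANNOTATION_ID=f"a{i+1}",
        TIME_SLOT_REF1=ts1, TIME_SLOT_REF2=ts2,
    )
    ET.SubElement(alignable, "ANNOTATION_VALUE").text = label

ET.SubElement(doc, "LINGUISTIC_TYPE", LINGUISTIC_TYPE_ID="default-lt", TIME_ALIGNABLE="true")

eaf_xml = ET.tostring(doc, encoding="unicode")
print(eaf_xml[:80])
# To save: ET.ElementTree(doc).write("au_detections.eaf", xml_declaration=True, encoding="UTF-8")
```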

Potential Applications

Notes on Reliability

OpenFace provides some very useful output, and tracking quality is generally good. However, don't treat the output as ground truth until you have checked it.
In particular, many studies use the AU output with little or no quality control. However, a corpus project looking at facial signals in conversation (see Nota et al., 2021; https://doi.org/10.3390/brainsci11081017) attempted to use OpenFace but switched to manual coding for most features, as AU detection is far from 100% accurate. It can be an interesting starting point for exploring data, but the explicit detections absolutely must be checked and cleaned.
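One cheap first check uses OpenFace's own per-frame quality columns: confidence (tracking confidence, 0-1) and success (binary). Dropping or flagging low-confidence frames before any analysis is a minimal form of quality control; the 0.8 threshold below is an arbitrary illustration, not a published standard:

```python
import pandas as pd

# Toy frame-wise data mimicking OpenFace's quality columns.
df = pd.DataFrame({
    "frame": [1, 2, 3, 4, 5],
    "confidence": [0.98, 0.95, 0.40, 0.10, 0.93],
    "success": [1, 1, 1, 0, 1],
    "AU12_r": [1.2, 1.4, 0.3, 0.0, 1.1],
})

# Keep only frames the tracker itself trusts. The 0.8 cutoff is an
# arbitrary illustration -- inspect your own data to pick a threshold.
clean = df[(df["success"] == 1) & (df["confidence"] >= 0.8)]
print(f"kept {len(clean)} of {len(df)} frames")
```

This only filters tracking failures; it does not validate the AU detections themselves, which still need manual checking as described above.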