citation: Baltrušaitis, T., Zadeh, A., Lim, Y. C., & Morency, L.-P. (2018). OpenFace 2.0: Facial Behavior Analysis Toolkit. IEEE International Conference on Automatic Face and Gesture Recognition.
Visual Studio download (VS required to run OpenFace via command line): https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=Community&rel=17
A detailed tutorial for using ExploFace: https://github.com/emrecdem/exploface/blob/master/TUTORIALS/tutorial1.ipynb
location code: https://github.com/WimPouw/EnvisionBootcamp2021/tree/main/Python/FaceTracking_OpenFace
packages to download: exploface
citation: Trujillo, J. P., & Pouw, W. (2021, November 18). An Introduction to OpenFace for Head and Face Tracking. Retrieved [day you visited the site] from: https://wimpouw.github.io/EnvisionBootcamp2021/OpenFace_module.html
from IPython.display import HTML
HTML('<iframe width="935" height="584" src="https://www.youtube.com/embed/mw8RymohMp0?start=8081" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>')
Download the latest binaries from https://github.com/TadasBaltrusaitis/OpenFace/wiki/Windows-Installation
Unzip the folder into a directory of your choosing -- we recommend unzipping into the FaceTracking_OpenFace directory
(NOTE: we have provided a 32bit and 64bit folder in the OSF download)
You will also need to download the models that OpenFace uses for feature detection. This can be done using PowerShell:
1. Search in your taskbar for PowerShell, then right-click --> Run as administrator
2. Navigate to the OpenFace directory
3. Run: powershell -ExecutionPolicy Bypass -File .\download_models.ps1
You won't see anything directly under your command after running it, but the top of the window should show that the models are being downloaded.
To run offline using a GUI, you can navigate to the main folder and double-click OpenFaceOffline.exe
To run via the command line, you need a cmd prompt open in the OpenFace directory. For example, I have the cmd line running in D:\data\MoCap\OpenFace_2.2.0_win_x64\OpenFace_2.2.0_win_x64
FeatureExtraction.exe is the main executable for processing videos with a single face. FaceLandmarkVidMulti.exe is used when there are multiple faces.
For example, to run OpenFace on the sample video provided, we can type the following in cmd:

FeatureExtraction.exe -f "./samples/2015-10-15-15-14.avi"

OpenFace will update you on its progress.
By default, you get a folder in the OpenFace directory called processed where all the output is stored. However, this means that all output (.csv files, video files, etc.) is thrown into one folder. If you don't want this, you can specify an output directory by adding the following to the FeatureExtraction command:
-out_dir "output_path"
Timing
Depending on your machine, OpenFace takes approximately the duration of your video plus 20% to process it (e.g., a 10-minute video takes roughly 12 minutes).
First, let's take a look at the output video. How does it perform? You should be thinking about the types of questions you want to ask using your data, and take a critical look at whether the tracking is sensitive and accurate enough to serve your purpose. Remember that (some) jitter can be removed with smoothing!
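As a concrete example of such smoothing, here is a minimal sketch that applies a centered rolling median to one of the head-pose traces (pose_Rx) in the output .csv; the 5-frame window is an assumption you would tune to your own data:

import pandas as pd

df = pd.read_csv("./Timeseries_output/sample_vid/2015-10-15-15-14.csv")
df.columns = df.columns.str.strip()  # OpenFace pads its column names with spaces

# Centered rolling median: removes single-frame spikes while keeping slower movement
df["pose_Rx_smooth"] = (df["pose_Rx"]
                        .rolling(window=5, center=True, min_periods=1)
                        .median())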
Now, let's take a look at the numerical data, and what was actually tracked.
Action Units
OpenFace recognizes a subset of all possible action units. These include:
1: inner brow raiser
2: outer brow raiser
4: brow lowerer
5: upper lid raiser
6: cheek raiser
7: lid tightener
9: nose wrinkler
10: upper lip raiser
12: lip corner puller
14: dimpler
15: lip corner depressor
17: chin raiser
20: lip stretcher
23: lip tightener
25: lips part
26: jaw drop
28: lip suck
45: blink
OpenFace provides two sets of columns for these AUs.
Presence: given in the AU*_c columns, this just indicates whether the AU is present in a given frame (0 = absent, 1 = present)
Intensity: given in the AU*_r columns, this indicates how strongly the AU is expressed in a given frame, on a continuous 0-5 scale (note that AU28 is detected for presence only)
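To see what these two column types mean in practice, here is a small sketch summarizing one AU straight from the output .csv (AU12, the lip corner puller, is chosen purely as an illustration):

import pandas as pd

df = pd.read_csv("./Timeseries_output/sample_vid/2015-10-15-15-14.csv")
df.columns = df.columns.str.strip()  # OpenFace pads its column names with spaces

present = df["AU12_c"] == 1  # presence: 0/1 per frame
print(f"AU12 present in {present.mean():.1%} of frames")
print(f"mean AU12 intensity when present: {df.loc[present, 'AU12_r'].mean():.2f} on a 0-5 scale")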
Rather than writing more of these summary scripts ourselves, we can make use of ExploFace to do much of this (remember to install exploface before running: pip install exploface).
import pandas as pd
import matplotlib.pyplot as plt
import exploface
openface_file = "./Timeseries_output/sample_vid/2015-10-15-15-14.csv"
openface_features = exploface.get_feature_time_series(openface_file)
This first command just reads the .csv file and loads it into a dataframe. This can be a useful starting point if you want to run further analyses.
openface_features.head(5)
| | frame | face_id | timestamp | confidence | success | gaze_0_x | gaze_0_y | gaze_0_z | gaze_1_x | gaze_1_y | ... | AU12_c | AU14_c | AU15_c | AU17_c | AU20_c | AU23_c | AU25_c | AU26_c | AU28_c | AU45_c |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 0 | 0.000 | 0.98 | 1 | -0.041376 | 0.020733 | -0.998928 | -0.096608 | 0.003713 | ... | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1 | 2 | 0 | 0.033 | 0.98 | 1 | -0.071931 | 0.002496 | -0.997406 | -0.085965 | 0.002864 | ... | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2 | 3 | 0 | 0.067 | 0.98 | 1 | -0.074580 | -0.003001 | -0.997211 | -0.097205 | -0.005973 | ... | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 3 | 4 | 0 | 0.100 | 0.98 | 1 | -0.079149 | 0.002348 | -0.996860 | -0.108939 | -0.002806 | ... | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4 | 5 | 0 | 0.133 | 0.98 | 1 | -0.072152 | -0.004155 | -0.997385 | -0.110762 | -0.004892 | ... | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
5 rows × 714 columns
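Since we imported matplotlib above, we can also eyeball one of these traces directly from the dataframe; for instance, the x-component of the first eye's gaze vector over time (the column choice here is just an illustration):

# Continuing from the cells above (uses openface_features and plt)
plt.figure()
plt.plot(openface_features["timestamp"], openface_features["gaze_0_x"])
plt.xlabel("time (s)")
plt.ylabel("gaze_0_x")
plt.title("Gaze direction (x) over the sample video")
plt.show()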
Exploface can do a bit more than this, though. A useful feature is getting summary statistics of what's happening in your video.
stats_df = exploface.get_statistics(openface_file)
stats_df
| AU | nr_detections | average_length_detection | std_average_length_detection |
|---|---|---|---|
| AU01 | 4 | 0.947500 | 0.777062 |
| AU02 | 8 | 0.636250 | 0.660950 |
| AU04 | 22 | 0.467727 | 0.456038 |
| AU05 | 10 | 0.534000 | 0.312417 |
| AU07 | 3 | 0.436667 | 0.265016 |
| AU10 | 2 | 0.930000 | 0.848528 |
| AU15 | 2 | 0.550000 | 0.070711 |
| AU17 | 8 | 0.247500 | 0.190619 |
| AU20 | 2 | 0.615000 | 0.643467 |
| AU23 | 2 | 0.315000 | 0.205061 |
| AU25 | 1 | 0.130000 | NaN |
| AU26 | 1 | 1.440000 | NaN |
| AU45 | 10 | 0.670000 | 0.598684 |
Here we see, for each Action Unit, how many detections there were, along with the mean and standard deviation of the detection durations (in seconds).
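These numbers can also be sanity-checked (or extended) from the time series itself. Here is a rough sketch computing the total time each AU is "on", using the binary presence columns and the median frame spacing:

# Total 'on' time per AU, derived from the binary presence columns
au_cols = [c for c in openface_features.columns if c.startswith("AU") and c.endswith("_c")]
frame_dur = openface_features["timestamp"].diff().median()  # ~0.033 s for this video
total_on = (openface_features[au_cols].sum() * frame_dur).sort_values(ascending=False)
print(total_on)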
One of the nicer features here is that we can also convert these .csv data into a .eaf file for use in ELAN. We could then import these annotations into ELAN for further checking/cleaning, or for further analysis in ELAN itself.
# 'feature_detections' is the table of AU detections (onsets/offsets);
# see the exploface tutorial linked at the top of this page for how to obtain it
video_file = "./samples/2015-10-15-15-14.avi"  # the video the tracking was run on

dataframe_timestamp = exploface.write_elan_file(feature_detections,
                                                video_path=video_file,
                                                output_path="sample_vid.eaf",
                                                )
OpenFace provides some very useful output, and tracking quality seems to be good. However, note that you shouldn't treat the output as ground truth until you have checked it.
In particular, many studies use the AU output with little or no quality control. However, a corpus project looking at facial signals in conversation (see Nota et al., 2021; https://doi.org/10.3390/brainsci11081017) attempted to use OpenFace but went with manual coding for most features instead, as AU detection is far from 100% accurate. It can be an interesting starting point for exploring data, but the explicit detections absolutely must be checked and cleaned.