Masked-Piper

This Python notebook walks you through the procedure of taking videos with a single person as input and producing 1) a masked video with facial, hand, and arm kinematics overlaid, and 2) the corresponding kinematic time series. The tool is a simple but effective modification of Google's MediaPipe Holistic tracking that turns it into a lightweight, CPU-based tool for masking your video data while maintaining background information and preserving information about body kinematics.

Current Github: https://github.com/WimPouw/TowardsMultimodalOpenScience

Additional information on the backbone of the tool (MediaPipe Holistic Tracking):

https://google.github.io/mediapipe/solutions/holistic.html

Modification that is the basis of this tool

Our modification of the MediaPipe tool uses the body silhouette to distinguish the body from the background in the video, tracks the body, and creates a new video that keeps only the background, masks the body, and overlays the kinematics back onto the mask. We further modify the original code so that time series are produced containing all the kinematic information per frame over time.
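The silhouette-based masking step can be sketched as follows. This is a minimal illustration, not the tool's actual code: it assumes a per-pixel segmentation mask in [0, 1] such as the one MediaPipe Holistic returns when `enable_segmentation=True`, and the function name, threshold, and fill color are placeholders.

```python
import numpy as np

def mask_body(frame, segmentation_mask, threshold=0.5, fill_color=(0, 0, 0)):
    """Keep the background, paint over the body silhouette.

    frame: HxWx3 uint8 image.
    segmentation_mask: HxW float array in [0, 1], higher where the person is
    (as produced by MediaPipe Holistic with enable_segmentation=True).
    """
    body = segmentation_mask > threshold   # True at body pixels
    masked = frame.copy()                  # background pixels stay untouched
    masked[body] = fill_color              # body silhouette becomes a flat mask
    return masked
```

The kinematic landmarks can then be drawn on top of the returned frame, so the person's identity is hidden while their movement remains visible.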

Use

Make sure to install all the packages in requirements.txt. Then move the videos you want to mask into the input folder and run this code, which loops through all the videos in the input folder and saves all the results to the output folders.
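A minimal sketch of how the input folder could be scanned for videos to process; the folder path, the set of extensions, and the function name are assumptions, so adjust them to match your setup.

```python
from pathlib import Path

# Assumed set of video extensions to pick up from the input folder
VIDEO_EXTS = {".mp4", ".avi", ".mov"}

def list_input_videos(input_dir):
    """Collect the video files in the input folder, sorted by name."""
    return sorted(p for p in Path(input_dir).iterdir()
                  if p.suffix.lower() in VIDEO_EXTS)
```

Each returned path would then be handed to the masking procedure, with the masked video and time-series files written to the output folders under the same base name.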

Please use, improve, and adapt it as you see fit. This tool will become citable in the near future.

Team: Babajide Owoyele, James Trujillo, Gerard de Melo, Wim Pouw (wim.pouw@donders.ru.nl)

Example

Main procedure Masked-Piper

The following chunk of code loops through all the videos you have placed in the input folder; for each frame it assesses body poses, extracts the kinematic information, masks the body in a new frame that keeps the background, projects the kinematic information onto the mask, and stores that frame's kinematic information in the time-series .csv files for the hands, body, and face.
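The per-frame bookkeeping for the time-series output can be sketched like this. It is an illustration only: the `Landmark` class stands in for MediaPipe's landmark objects (which expose `.x`, `.y`, `.z`), and the function names and column layout are assumptions rather than the tool's actual format.

```python
import csv
from dataclasses import dataclass

@dataclass
class Landmark:
    # Stand-in for a MediaPipe landmark, which exposes .x, .y, .z coordinates
    x: float
    y: float
    z: float

def landmarks_to_row(frame_idx, landmarks):
    """Flatten one frame's landmarks into a row: [frame, x0, y0, z0, x1, ...]."""
    row = [frame_idx]
    for lm in landmarks:
        row.extend([lm.x, lm.y, lm.z])
    return row

def write_timeseries(path, header, rows):
    """Write the accumulated per-frame rows to a .csv time series."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
```

One such row is appended per frame, and separate .csv files collect the hand, body, and face landmarks over the whole video.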