A Python script based on OpenCV's Lucas-Kanade optical flow and Dlib's pre-trained Kazemi facial landmark detector captures 68 points of the user's face through a webcam, without markers.
These 68 points are stabilized with a Kalman filter and sent over a socket to Unity3D.
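As a minimal sketch of the stabilization step, here is a per-coordinate constant-position Kalman filter applied to the landmark points before they are sent to Unity3D. The class name, `process_var`, and `measure_var` tuning values are illustrative assumptions, not the project's actual parameters.

```python
# Minimal sketch: per-coordinate constant-position Kalman filter used to
# smooth noisy landmark points. Tuning values are illustrative only.

class Kalman1D:
    def __init__(self, process_var=1e-3, measure_var=1e-1):
        self.x = None          # state estimate (initialized on first sample)
        self.p = 1.0           # estimate covariance
        self.q = process_var   # process noise
        self.r = measure_var   # measurement noise

    def update(self, z):
        if self.x is None:                # first measurement seeds the state
            self.x = z
            return self.x
        self.p += self.q                  # predict step
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct toward the measurement
        self.p *= (1.0 - k)
        return self.x

def stabilize(points, filters):
    """Smooth a list of (x, y) landmark points with per-axis filters."""
    return [(fx.update(x), fy.update(y))
            for (x, y), (fx, fy) in zip(points, filters)]
```

In practice one pair of filters would be kept per landmark (68 pairs) and fed every webcam frame.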
In Unity3D, points are bound to 'Metrics'. A Metric is, for example, the mouth opening, the mouth width, or the eyebrow rotation.
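For illustration, two such Metrics can be computed directly from the 68 points using dlib's standard indexing (48/54 are the mouth corners, 62/66 the inner top/bottom lip, 36/45 the outer eye corners). Normalizing by the inter-ocular distance to make the Metrics scale-invariant is an assumption here, not necessarily the project's exact formula.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Indices follow dlib's standard 68-point layout. Dividing by the
# inter-ocular distance (points 36 and 45) is an illustrative
# normalization so the Metric does not depend on face-to-camera distance.
def mouth_opening(pts):
    return dist(pts[62], pts[66]) / dist(pts[36], pts[45])

def mouth_width(pts):
    return dist(pts[48], pts[54]) / dist(pts[36], pts[45])
```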
Then, N* Blend Shapes are bound to Metrics. For example, a 'Jaw Open' Blend Shape could be bound to the 'Mouth Opening' and 'Mouth Width' Metrics.
In a first phase, the user trains these Blend Shapes in Unity, recording the differences in Metrics between a neutral base pose and the pose that fits each Blend Shape.
In a second phase, the user can capture the actual live sequence. Points are translated into Metrics and, through a curve-weighted proportion between the base pose and the target Blend Shape pose, a 0-1 value is generated for each Blend Shape.
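A sketch of that mapping: the current Metric reading is placed between the neutral value and the trained value, clamped to [0, 1], and shaped by a weighting curve. The smoothstep curve below stands in for the project's actual curve, which may differ.

```python
def smoothstep(t):
    # Illustrative easing curve standing in for the project's
    # "curve-weighted proportion"; the real curve may differ.
    return t * t * (3.0 - 2.0 * t)

def blend_shape_value(current, neutral, trained, curve=smoothstep):
    """Map a Metric reading to a 0-1 Blend Shape value.

    neutral: Metric at the base pose; trained: Metric recorded at the
    pose that fully activates the Blend Shape (training phase).
    """
    span = trained - neutral
    if span == 0:
        return 0.0
    t = (current - neutral) / span
    t = max(0.0, min(1.0, t))   # clamp to [0, 1]
    return curve(t)
```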
During recording, Blend Shape values are stored for each keyframe.
During playback, Blend Shape values are interpolated and used to fill the Blend Shape weights of a Unity SkinnedMeshRenderer (or UE4's Morph Targets).
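The playback interpolation can be sketched as a linear lookup over the recorded (time, weight) keyframes; the resulting weight would then feed something like `SkinnedMeshRenderer.SetBlendShapeWeight` on the Unity side. Linear interpolation is an assumption here; the project may use another interpolation mode.

```python
from bisect import bisect_right

def sample_weight(keyframes, t):
    """Linearly interpolate a Blend Shape weight at time t.

    keyframes: sorted list of (time, weight) pairs, as stored
    during recording. Times outside the recorded range clamp to
    the first/last keyframe.
    """
    times = [k[0] for k in keyframes]
    i = bisect_right(times, t)
    if i == 0:
        return keyframes[0][1]
    if i == len(keyframes):
        return keyframes[-1][1]
    (t0, w0), (t1, w1) = keyframes[i - 1], keyframes[i]
    a = (t - t0) / (t1 - t0)
    return w0 + a * (w1 - w0)
```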
*This demo uses some of the 50+ blendshapes from the Faceshift/Apple FaceAR list.
Converting the captured face points to Blend Shape values is just the base feature.
Other supported features include:
Head transform sync based on projections of the facial landmark points.
Eye tracking based on landmarks, minMaxLoc, optical flow, the Hough circles algorithm, and a Kalman filter.
Wrinkle map blending based on Blend Shape values.
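As a sketch of the minMaxLoc step of the eye tracker: within the eye region cropped from the landmarks, the pupil can be approximated as the darkest pixel. This uses a plain NumPy argmin rather than OpenCV's `cv2.minMaxLoc`, and omits the blurring and subsequent Hough-circle/Kalman refinement the feature list mentions.

```python
import numpy as np

# Darkest-pixel pupil approximation inside a grayscale eye crop.
# A smoothing blur would normally precede this step; omitted here.
def pupil_position(eye_roi):
    """Return (x, y) of the darkest pixel in a grayscale eye crop."""
    y, x = np.unravel_index(np.argmin(eye_roi), eye_roi.shape)
    return int(x), int(y)
```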