TOP STORIES


Discovery Communications India today said it has roped in the Anil Ambani-led Reliance Group firm Reliance Animation to produce a new animation series, Little Singham.

Assemblage Entertainment, the feature-film-focused CGI animation studio, has begun production on the sequel to the 2016 theatrical feature film Norm of the North, along with Splash Entertainment, Lionsgate and Dream Factory.

Framestore shows its cross-platform capabilities yet again, bringing the Marvel character Hulk to life on both film and commercial platforms.

At SIGGRAPH 2017, NVIDIA is showcasing research that makes it far easier to animate realistic human faces, simulate how light interacts with surfaces in a scene and render realistic images more quickly.

Moving Pictures brings us the VFX breakdown behind the process of replicating Rachael for Blade Runner 2049. The team, led by VFX Supervisor Richard Clegg, worked closely with Director Denis Villeneuve and Production VFX Supervisor John Nelson.

MPC’s VFX team, led by VFX Supervisor Ferran Domenech, worked alongside Director Ridley Scott and Production VFX Supervisor Charley Henley to create more than 700 stunning shots for Alien: Covenant. As lead studio, MPC’s work included the creation of the movie’s terrifying creatures, alien environments, vehicles and complex FX simulation work.


Realtime Facial Tracking and Animation


Researchers have devised a realtime facial tracking system which doesn't require calibration for different individuals and seems suitable for deployment in consumer-level applications. See the video to appreciate how good it is at getting an avatar to follow your facial expressions.

This research comes from the Graphics and Parallel Systems Lab of Zhejiang University, China. What is impressive about the demo is that, rather than an RGBD camera such as the Kinect, it employs a single "normal" video camera (webcam) of the sort widely available on PCs and mobile devices.

The video of it in action also suggests how the technique could be used to implement avatars, virtual reality, telepresence and so on.
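
To make the hardware requirement concrete, here is a minimal sketch of the kind of capture loop such a tracker sits on top of, using OpenCV and a default webcam. The track_face call is a hypothetical placeholder for the tracker itself, not a real OpenCV function.

import cv2

cap = cv2.VideoCapture(0)                  # any ordinary webcam; no depth sensor
while cap.isOpened():
    ok, frame = cap.read()                 # a plain 2D colour frame, not RGBD
    if not ok:
        break
    # landmarks2d, shape3d = track_face(frame)   # hypothetical tracker call
    cv2.imshow("input", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()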

To quote from the paper, Displaced Dynamic Expression Regression for Real-time Facial Tracking and Animation, authored by Chen Cao, Qiming Hou and Kun Zhou, the automatic approach employed:

learns a generic regressor from public image datasets, which can be applied to any user and arbitrary video cameras to infer accurate 2D facial landmarks as well as the 3D facial shape from 2D video frames, assuming the user identity does not change across frames. The inferred 2D landmarks are then used to adapt the camera matrix and the user identity to better match the facial expressions of the current user. The regression and adaptation are performed in an alternating manner, effectively creating a feedback loop. With more and more facial expressions observed in the video, the whole process converges quickly with accurate facial tracking and animation.
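
A minimal Python sketch of that alternating structure follows. The two functions are stand-in stubs, and every name and dimension is an assumption for illustration, not the authors' code.

import numpy as np

def regress_frame(frame, camera, identity):
    # Stub for the generic regressor: infers 2D landmarks and the 3D
    # facial shape directly from a 2D video frame.
    landmarks2d = np.zeros((75, 2))    # e.g. 75 semantic facial landmarks
    shape3d = np.zeros((75, 3))
    return landmarks2d, shape3d

def adapt(history, camera, identity):
    # Stub for the adaptation step: refit the camera matrix and user
    # identity against all landmarks observed so far.
    return camera, identity            # a real system solves a fitting problem here

camera = np.eye(3)                     # camera matrix; no per-user calibration needed
identity = np.zeros(50)                # user identity coefficients
history = []                           # landmarks accumulated across frames
for frame in (np.zeros((480, 640, 3)) for _ in range(5)):  # stand-in video frames
    landmarks2d, shape3d = regress_frame(frame, camera, identity)  # regression step
    history.append(landmarks2d)
    camera, identity = adapt(history, camera, identity)   # adaptation step: the feedback loop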

[Workflow diagram: Realtime Facial Tracking and Animation]

As indicated in the workflow diagram above, the process uses a regression-based algorithm with the DDE (Displaced Dynamic Expression) model, which simultaneously represents the 3D shape of the user’s facial expressions and the 2D facial landmarks that correspond to semantic facial features in video frames. The DEM (Dynamic Expression Model) adaptation step corrects the camera matrix for the current user, thus eliminating the need for calibration.
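
One hedged way to picture the DDE parameterization in code: a single state simultaneously determines the 3D landmark positions (expression blendshapes plus rigid head pose) and the 2D landmarks (a projection through the adapted camera matrix, offset by per-landmark 2D displacements). All names and shapes below are illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass
import numpy as np

@dataclass
class DDEState:
    expression: np.ndarray     # blendshape expression coefficients, shape (k,)
    rotation: np.ndarray       # rigid head rotation, shape (3, 3)
    translation: np.ndarray    # rigid head translation, shape (3,)
    displacement: np.ndarray   # per-landmark 2D displacements, shape (n, 2)

def landmarks_2d(state, neutral, blendshapes, camera):
    # 3D landmark positions: neutral face plus expression blendshapes,
    # then the rigid head pose.
    pts3d = neutral + np.tensordot(state.expression, blendshapes, axes=1)  # (n, 3)
    pts3d = pts3d @ state.rotation.T + state.translation
    # Pinhole projection through the (adapted) camera matrix, plus the
    # 2D displacements that tie the landmarks to image features.
    proj = pts3d @ camera.T
    return proj[:, :2] / proj[:, 2:3] + state.displacement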
