Learning NeRFs for Talking Face Videos
In this project, we learn neural implicit representations for talking human faces, modeling the 4D face geometry and appearance with neural radiance fields (NeRFs). Our goal is to synthesize high-quality talking face videos. We focus on two main applications: (a) lip synchronization, where the synthesized face follows a target audio track, and (b) facial expression transfer, where the synthesized face follows target expressions.
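To illustrate the general idea of a dynamic, conditioned face NeRF, below is a minimal sketch (not the actual architecture from the publications): a NeRF-style MLP that maps a 3D point, a viewing direction, and a per-frame condition code (e.g., audio or expression features) to density and color. All names, dimensions, and design choices here are illustrative assumptions.

```python
# Minimal sketch of a conditioned NeRF MLP (illustrative only; not the project's
# actual model). A per-frame code drives either lip sync (audio features) or
# expression transfer (expression features).
import torch
import torch.nn as nn


def positional_encoding(x, num_freqs):
    # Standard NeRF positional encoding: [sin(2^k * pi * x), cos(2^k * pi * x)].
    freqs = 2.0 ** torch.arange(num_freqs, dtype=torch.float32, device=x.device) * torch.pi
    angles = x[..., None] * freqs                       # (..., dims, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)                    # (..., dims * 2 * num_freqs)


class ConditionalNeRF(nn.Module):
    """Maps (position, view direction, per-frame condition code) -> (density, RGB)."""

    def __init__(self, cond_dim=64, hidden=256, num_freqs_xyz=10, num_freqs_dir=4):
        super().__init__()
        self.num_freqs_xyz = num_freqs_xyz
        self.num_freqs_dir = num_freqs_dir
        xyz_dim = 3 * 2 * num_freqs_xyz
        dir_dim = 3 * 2 * num_freqs_dir
        self.trunk = nn.Sequential(
            nn.Linear(xyz_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        self.color_head = nn.Sequential(
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir, cond):
        # xyz, view_dir: (N, 3); cond: (N, cond_dim), e.g. audio or expression features.
        h = self.trunk(torch.cat([positional_encoding(xyz, self.num_freqs_xyz), cond], dim=-1))
        sigma = torch.relu(self.density_head(h))        # volume density
        rgb = self.color_head(
            torch.cat([h, positional_encoding(view_dir, self.num_freqs_dir)], dim=-1)
        )                                               # view-dependent color
        return sigma, rgb


# Example query: 1024 sampled points from one frame, sharing that frame's condition code.
model = ConditionalNeRF(cond_dim=64)
xyz = torch.rand(1024, 3)
dirs = torch.nn.functional.normalize(torch.rand(1024, 3), dim=-1)
cond = torch.randn(1, 64).expand(1024, -1)
sigma, rgb = model(xyz, dirs, cond)
```

In practice, the predicted densities and colors along each camera ray would be composited with standard volume rendering to produce the final video frames; conditioning the field on different codes at inference time is what enables lip sync and expression transfer.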
Publications
- LipNeRF: What is the right feature space to lip-sync a NeRF?
  Aggelina Chatziagapi, ShahRukh Athar, Abhinav Jain, Rohith MV, Vimal Bhat, Dimitris Samaras
  International Conference on Automatic Face and Gesture Recognition (FG), 2023

- MI-NeRF: Learning a Single Face NeRF from Multiple Identities
  Aggelina Chatziagapi, Grigorios G. Chrysos, Dimitris Samaras
  arXiv, 2024